Computer Hackers

A hacker is someone who gains access to a computer that is supposedly off-limits to unauthorized people. Hackers may use any type of system to gain this access, depending on what they intend to do once inside.

Methods

Hackers can break into a system in a variety of ways. First, an experienced and smart hacker will use telnet to open a shell on another machine, so that the risk of being caught is lower than it would be when working from their own system. Common ways in which a hacker will break into a system are:

1) Guessing or cracking passwords. The hacker either takes guesses at the password or runs a cracking program against the password protecting the system (a short sketch of this appears after the list).

2) Finding back doors. This is where the hacker looks for flaws in the system they are trying to enter and exploits them to get access.

3) Using a program called a worm. A worm is programmed to suit the needs of its user: it repeatedly tries to connect to a target machine, sometimes more than 100 times a second, until the system eventually lets it in and the worm executes its program. That program could do anything from collecting password files to deleting files, depending on what it has been programmed to do (a sketch of this retry loop also follows the list).
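To make the first method concrete, here is a minimal sketch of a dictionary-style cracking loop, assuming the attacker already holds a hashed password. The word list is tiny and illustrative, and std::hash stands in for the salted cryptographic hashes real systems actually use.

```cpp
#include <functional>
#include <iostream>
#include <string>
#include <vector>

int main() {
    // Hypothetical stolen password hash (produced here with std::hash;
    // real systems use salted cryptographic hashes such as bcrypt).
    std::hash<std::string> hasher;
    std::size_t stolen = hasher(std::string("letmein"));

    // A tiny illustrative word list; real cracking tools use huge dictionaries.
    std::vector<std::string> wordlist = {"password", "123456", "qwerty", "letmein"};

    for (const auto& guess : wordlist) {
        if (hasher(guess) == stolen) {  // hash each candidate and compare
            std::cout << "Password found: " << guess << "\n";
            return 0;
        }
    }
    std::cout << "No match in word list\n";
    return 0;
}
```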
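The worm behaviour in the third method reduces to a retry loop. The sketch below shows only that skeleton: the hypothetical try_connect function stands in for a real network connection attempt, which is deliberately not included.

```cpp
#include <chrono>
#include <cstdlib>
#include <iostream>
#include <thread>

// Stand-in for a real connection attempt; it fails randomly so the
// loop has something to retry against (about one success in 200 tries).
bool try_connect() {
    return std::rand() % 200 == 0;
}

int main() {
    int attempts = 0;
    // Keep hammering the target until one attempt succeeds, pausing
    // ~10 ms between tries (roughly 100 attempts per second).
    while (!try_connect()) {
        ++attempts;
        std::this_thread::sleep_for(std::chrono::milliseconds(10));
    }
    std::cout << "Connected after " << attempts << " failed attempts\n";
    return 0;
}
```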

Protection

The only sure-fire way for you or a company to stop a hacker is to keep your computer off the net entirely, because hackers reach their targets over a phone line or network connection. If it is possible for one legitimate person to access a system remotely, then it is possible for a hacker to gain access to it too. One of the main problems is that major companies need to be networked and accessible over the net, so that employees can catch up on overdue work and so that people can look up information about the company. Major companies also network their offices so that data can be accessed from different locations.

One common defence used by companies is a program called a firewall. A firewall sits between the protected machine and the outside world and blocks connections from other servers according to a set of rules. It is very effective at keeping hackers out, though it is not foolproof: firewalls can be broken, and determined hackers sometimes get through. Even so, it is one of the better ways of protecting a system on the Internet.
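What that rule-checking amounts to can be shown in a few lines. The sketch below is a toy "default deny" filter, not any real firewall product; the rule table, addresses, and function names are all made up for illustration.

```cpp
#include <iostream>
#include <set>
#include <string>

// A toy "default deny" firewall: only sources on the allow list get through.
bool allow_connection(const std::set<std::string>& allowed,
                      const std::string& source_ip) {
    return allowed.count(source_ip) > 0;
}

int main() {
    // Hypothetical rule table: only these machines may connect.
    std::set<std::string> allowed = {"10.0.0.5", "10.0.0.12"};

    for (const std::string ip : {"10.0.0.5", "203.0.113.7"}) {
        std::cout << ip << ": "
                  << (allow_connection(allowed, ip) ? "accepted" : "blocked")
                  << "\n";
    }
    return 0;
}
```

A real firewall filters on much more (ports, protocols, connection state), but every rule set ultimately reduces to a decision like this one per packet or connection.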

Major Hacks

Some of the major hacks that have been committed were carried out by young teens aged between 14 and 18. These so-called computer geniuses have expert knowledge of what they are doing and know the consequences, though the consequences do not really enter their minds while they are doing it.

One such hack occurred on February 10, 1997, and again on February 14, 1997, when Portuguese hackers launched a political attack on the web page of the Indonesian government, protesting that country's continued oppression of East Timor. The attack on the website of the Department of Foreign Affairs, Republic of Indonesia, was online for about three hours, from 7:00 PM to 10:00 PM Portuguese time. The hackers did not delete or change anything; they said, "We just hack pages."

Another major hack occurred on April 1, 1981, and was committed by a single user. This hacker, who worked in an east-coast brokerage house, was interested in the stock market, so he purchased $100,000 worth of shares. He then hacked into the stock market's main computers and stole $80 million. The hacker was eventually caught, although $53 million was never recovered.

On Wednesday, March 5, 1997, the home page of the National Aeronautics and Space Administration was hacked and its contents changed by a group known as H4G1S. The hackers replaced the page and left a little message for all. It said: "Gr33t1ngs fr0m th3 m3mb3rs 0f H4G1S. Our mission is to continue where our colleagues the ILF left off. During the next month, we the members of H4G1S will be launching an attack on corporate America. All who profit from the misuse of the InterNet will fall victim to our upcoming reign of digital terrorism. Our privileged and highly skilled members will stop at nothing until our presence is felt nationwide. Even your most sophisticated firewalls are useless. We will demonstrate this in the upcoming weeks."

The home page of the United States Air Force was also hacked and its contents changed. The page was replaced completely: the hackers inserted pornographic pictures captioned "this is what we are doing to you", with "screwing you" written under the image, using the defacement to express their views on the political system.

One other major hack was committed by a 16-year-old boy in Europe, who broke into a British Air Force system and downloaded confidential information on ballistic missiles. He did it simply because he was interested and wanted to know more about them. The boy was fined a sum of money.

In conclusion, hackers are sophisticated and very talented in their use of computers. Hacking is a process of learning rather than following a manual; hackers learn as they go, by trial and error. Most people who say they are hackers are not: real hackers do not delete or destroy information on the systems they enter. They hack because they love the thrill of getting into a system that is supposedly impossible to enter. Overall, hackers are smart and cause little damage to the systems they enter, so they are not really terrorists; in a way, they help companies find the flaws in their systems.

An Evaluation Of Nullsoft Winamp

Nullsoft Winamp is a fast, flexible, high-fidelity music player for Windows 95/98/NT. Winamp supports MP3, MP2, CD, MOD, WAV, and other audio formats, as well as custom interfaces called skins, audio visualization, and audio-effect plug-ins. Nullsoft also provides a high-quality website at http://www.winamp.com, where the Winamp homepage offers support, information, software downloads, and music downloads for Nullsoft's music products. Winamp is a high-quality music player for your personal computer.

The first thing to look for when considering a program to play music on your computer is sound quality. Nullsoft Winamp can play CD-quality sound from MP3, MP2, CD, MOD, WAV, and other audio formats. Winamp has a ten-band graphic equalizer and a built-in pre-amplifier that give the user greater control over sound quality even before the music passes through a sound card or speakers. If you are not comfortable changing the equalizer settings yourself, Winamp has hundreds of preset settings categorized by music type, such as Jazz, Rock, Reggae, and many more. Winamp users can even create and save song-specific pre-amplifier and equalizer settings.
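For the technically curious, the equalizer described above comes down to simple arithmetic: a pre-amp gain and a per-band gain are applied to each of the ten frequency bands. The sketch below illustrates that arithmetic on made-up band levels; it is not Winamp's actual code, and all numbers are purely illustrative.

```cpp
#include <cmath>
#include <iostream>
#include <vector>

// Convert a decibel gain to a linear multiplier: gain = 10^(dB/20).
double db_to_linear(double db) { return std::pow(10.0, db / 20.0); }

int main() {
    // Made-up signal levels for ten frequency bands (roughly 60 Hz .. 16 kHz).
    std::vector<double> band_levels = {1.0, 0.9, 0.8, 0.7, 0.6,
                                       0.5, 0.6, 0.7, 0.8, 0.9};
    // A "rock"-style preset: boost bass and treble, cut the mids (in dB).
    std::vector<double> eq_db = {6, 4, 2, 0, -2, -2, 0, 2, 4, 6};
    double preamp_db = -3;  // pre-amplifier applied before the ten bands

    for (std::size_t i = 0; i < band_levels.size(); ++i) {
        // Each band is scaled by the pre-amp gain and its own band gain.
        double out = band_levels[i] * db_to_linear(preamp_db)
                                    * db_to_linear(eq_db[i]);
        std::cout << "band " << i << ": " << out << "\n";
    }
    return 0;
}
```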

Another important factor in choosing a music program for your computer is customizable features, and Winamp meets this criterion well. The ability to customize your music player makes the program easier to use. The user can build a playlist from the music files stored on the hard drive of the user's computer, and playlists are easy to load and not difficult to create. The Nullsoft Winamp website has a plugin and skin collection available for download to further customize your copy of Winamp, with hundreds of different plugins and skins to choose from. Plugins for Winamp range from audio-visualization oscilloscopes to audio effects like distortion and surround sound. Skin categories range from different colors to cartoons and artwork. Technically advanced users can even create their own skins.

Customer service and technical support services are important with any product, especially when a user is unfamiliar with it. The Winamp program can be difficult to learn and use without some instruction. However, Nullsoft provides a stable and easy-to-navigate website that includes many helpful services. Customer service and technical support are available through chat and via email from the Winamp homepage. Customers can read step-by-step instructions on how to use Winamp and all of its custom features by clicking on easy-to-see links. Nullsoft also provides free downloads of Winamp, and upgrades whenever they become available. Plugin and skin downloads are available not only from the Winamp website but from hundreds of other sites on the Internet; to find Winamp on the Internet, just search for the keyword Winamp on the search engine of your choice. All Winamp products are considered freeware and, as the name suggests, free of charge.

Nullsoft Winamp is a versatile music player for your personal computer. It can process several sound file types, including MP3, MP2, CD, MOD, WAV, and other audio formats, and it has many customizable features. Users can create their own playlists, and the Nullsoft Winamp homepage has hundreds of skins and plugins to choose from; skins and plugins are also available from various websites on the Internet. Winamp may be slightly difficult to learn to use, but customer service and technical support are easily available from the Nullsoft Winamp homepage. Winamp is an excellent music player for your computer.

Computer Crime and Terrorism

Almost every major political candidate in recent history has used, and often exploited, one of the many facets of crime to convince American voters that they can feel secure and safe with the candidate at the leadership helm, coveted feelings in a tumultuous world. As a result, we have more policemen patrolling the streets, laws such as the death penalty and "three strikes, you're out", proposed increases to defense spending for both the latest weapons and more armed servicemen, beefed-up airport security measures, and catchy candidate slogans such as "tough on crime".

And yet, the one facet of crime that no one is talking about is the one issue that has the potential to destroy the American economy, the American political system, and the lives of American citizens. As new cops patrol the streets at night and technology's most advanced weapons sit ready to deploy in order to protect our land, some of the world's most vicious and dangerous criminals invade our homes and our country's borders each day, almost without a trace. Computer crime and terrorism are plaguing our nation, and our leaders continue to march the nation into the criminals' hands.

Today, more and more, computer crime has made its way into the headlines, the evening news, and major newspapers. In recent years, we have seen attacks on major websites such as Yahoo, America Online, the FBI's home page, and many others. These attacks, known as hacking, have ranged from defacing web pages to shutting down entire sites, costing thousands and sometimes hundreds of thousands of dollars each time (Quinn, 1). In other areas of computer crime, individuals use credit card fraud to buy items online in an environment where they are checked by nothing more than a card number, expiration date, and billing address.

In a one-year period, computer crime is estimated to cause hundreds of millions of dollars in damage (Carter, 3). These crimes and attacks have become so prevalent that a name has been given to the perpetrators: hackers. Hackers, and the crimes they commit, have become a very serious threat to the well-being of society in the United States. Hacking is defined by the CCI Online Computer Dictionary as using ingenuity and creativity to solve computer-programming problems and to overcome the limitations of a system and expand its capabilities (CCI Computing).

Though by this definition hacking seems harmless, and maybe even useful, the act is more commonly associated with negative actions and aggressive crimes. The PC Webopedia defines hacking as modifying a program, often in an unauthorized manner, by changing the code itself (PC Webopedia). The term hacker follows this latter definition and refers to the person who invades the affected computers. Perhaps the most threatening aspect of hackers, though, is that anyone with more than a cursory knowledge of computer programming can quickly begin wreaking havoc.

The average person tends to think of hackers as malicious, professional computer criminals who spend all of their time writing destructive computer viruses and breaking into the most secure and important computer systems to steal information or money (Hayward, 2). However, hackers are more often the guy next door: young teenagers, college students, or men and women who work for technology's leading firms. Even more frightening is the fact that home computers are just as susceptible to becoming the next victims of computer crime as the Pentagon's computer systems.

David Dunagen, a resident of Dallas, Texas, knows this threat all too well. Dunagen, a computer security expert, was himself a victim of computer hacking fraud: a computer criminal stole his identity and credit card number, then used them to order a notebook computer over the Internet. The Dallas police refused to look into his case, claiming that they did not have the time or the resources to track down the individual (Riggs, 1), and Dunagen was left without recourse for the damages of the disconcerting crime (2).

The most concerning groups of hackers are those using computer crime as a terrorist tool. As American society grows closer to complete computer and technological dependence, terrorist groups have begun to discover new ways to attack organizations and countries without using physical means. Terrorist groups could bring this country's economy to a total standstill simply by hacking into the computers that control the major commerce and trading systems. Even our national defense and security systems are not invulnerable to attacks by hackers.

Worse yet, computer terrorists are virtually untraceable; they can invade American computer systems without setting foot in the hemisphere and without ever disclosing their location. In fact, with portable computer technology, terrorists can commit their crimes literally on the move. This makes it extremely difficult for organizations like CERT, the FBI, the CIA, and other state and local crime fighters to track down, let alone prosecute, the criminals or terrorists involved. The United States government seems to be taking the threat seriously; already the FBI is working hard to keep up with technology's top criminals.

The U.S. Department of State is anticipating even greater numbers of attacks on computer systems in the future (Campbell, 1), and the FBI claims that there was an increase of over one hundred percent in computer crimes committed in the United States last year (Anderson, 1). This anticipation is well founded: it is a simple fact that as more computer systems are developed, there will be more opportunities and areas for terrorists and computer criminals to invade (Parker, 11). The FBI has been developing its technology crime-fighting unit for several years now.

Other federal and state law enforcement agencies, however, have fallen behind, and criminal hackers definitely have the upper hand when it comes to computer crime. Today, this crime is still new enough that the federal government lacks sufficient laws to put an end to the reign of technological terror plaguing American society (Parker, 97). Furthermore, law enforcement agencies lack the manpower necessary to apply the existing laws in their efforts to fight computer crime. In addition to non-physical hacking attacks, terrorist groups have found other ways to delay or completely shut down the flow of information.

Cyber terrorists have not limited their attacks to the digital side of computers; they have realized that a small bomb or fire at a computer facility can do more damage in a shorter amount of time than hacking into whole systems. Terrorist groups have followed through on this approach: as of 1992, they had attacked more than 600 computer facilities using bombs and other physical means to destroy the computers (Campbell, 1). While most of these attacks have occurred overseas, the threat of an attack happening in the United States is very real.

Many of the largest computer facilities in the United States are virtually unprotected physically, even though their computer systems are guarded by the most secure software known to the world. It would be easy for a terrorist group to conduct bombings on computer facilities and cause extensive damage to computer information systems in the United States; indeed, Campbell claims in his paper A Detailed History of Terrorist and Hostile Intelligence Attacks against Computer Resources that a well-directed attack against 100 key computer facilities could bring the American economy to its knees (Campbell, 2).

Computer crime and terrorism have indeed become a grave concern for the United States and the global community. The world is in a serious transition in which every area of our lives is being affected, and quickly controlled, by computers. Most of these computers remain unprotected, while even those secured by top-notch security systems are still penetrable by anyone with the skill and the desire.

A major attack on these systems could completely change society as we know it: personal financial accounts could be drained of all funds; communication systems could be severely disabled or incapacitated, causing a complete communication blackout; and emergency-service computer systems could be attacked, making it extremely difficult for police and firemen to respond to emergencies, all with the quick execution of a simple program.

The effects of computer crime and terrorism reach into almost every aspect of society. A successful attack on any major computer information system would have a tremendously negative impact on both the United States and the global society as a whole. It is extremely important that, in the future, everything realistically possible is done to protect the valuable resources that we call computers and the sensitive information they store.

This country's leaders need to start taking notice of the negative effects computers could have on our society. Perhaps, instead of scrambling to take credit for creating the Internet, which is becoming America's Achilles' heel, our next leader should take the bold step of slowing our technological dependence, a policy that has thus far been virtually absent from America's history of progress. But then again, the Internet is not the demise of America; people are.

Staffing Orgs DELL

Dell’s mission is “to be the most successful computer company in the world at delivering the best customer experience in markets we serve.” In doing so, Dell will meet customer expectations of individual and company accountability, best-in-class service and support, and flexible customization capability. Dell’s vision of excellence through quality, innovation, pricing, accountability, service and support, customization, corporate citizenship, and financial stability is clear, and the mission statement is easy to understand.

Producing quality work that leads to the achievement of these lofty goals is much more complicated than writing a simple mission statement. One thing is clear: the core capabilities of any business stem from the employees that comprise it. With over 36,000 employees, Dell is a member of the rapidly changing and expanding computer technology industry, which has achieved enormous growth in the last decade. Dell’s stock rose 29,000 percent in the 1990s, and as of the second quarter of 1999 Dell was tied for first place in the market.

Dell faces stiff competition from technology giants such as IBM, Hewlett Packard, and Compaq. With such robust expansion in the technology industry and the economy, it is becoming increasingly difficult for companies such as Dell, which experienced 56 percent growth in its workforce in 1999, to fill positions with quality applicants. Dell is currently seeking applicants for positions in sales, corporate finance, engineering, manufacturing, and most especially information technology, and currently hires approximately 2000 employees a quarter.

With such rapid growth and expansion, the temptation surfaces to simply fill a position with a body. “Unless you have a good process in place, you run the risk of not always hiring the best people. There can be a tendency to say ‘We need people so badly, a fresh body is better than no body,’” as Steve Price, vice president of human resources for Dell’s Public and Americas International Group, puts it. To avoid this scenario, Dell has created a web-based Organizational Human Resource Planning (OHRP) process.

These processes help a business unit focus on and anticipate growth and staffing needs. In addition, the OHRP process allows managers to do their own succession planning, identify key jobs, and formulate competency planning and employee development. The OHRP process also tries to identify the qualities new employees will need by analyzing the skills and qualities of current top performers. This program has been highly successful: Dell’s profitability increased 59 percent in the same period that the workforce grew by 56 percent.

Analysis of Current Recruiting Practices

Dell’s rapid growth and expansion requires recruiting processes that seek out and retain large numbers of qualified applicants. Dell begins its on campus recruitment at selected schools in the fall, primarily at schools in the Midwest (Big 10) and Southeast (ACC). Dell typically makes three on campus visits to selected schools and, when possible, spreads these visits out over the term of the recruitment process. First-round interviews take place on campus, and prospects are notified within 48 hours if they are selected for a second interview.

All second-round interviews are conducted at Dell’s headquarters in Austin, Texas, and prospects are typically notified within 48 hours if Dell intends to offer them a position. Applicants who attend schools where Dell does not conduct on campus recruiting may apply on Dell’s website by submitting a cover letter and resume. Resumes and cover letters are then entered into a database, where they are reviewed by a Dell recruiter. Acceptable applicants are contacted by phone for an initial interview and notified within 48 hours if a second interview is requested.

Again, all second-round interviews are conducted in Austin, and applicants whom Dell intends to hire are notified within 48 hours. Interviews are generally conducted either by recruiting specialists or by rotational recruiters who come from specific departments, such as the IT department. Specialists from specific departments are generally used in times of peak hiring demand; they are able to use their knowledge and experience to give a unique perspective, as they are the ones actually doing the critical jobs. Dell recruiters expect interviewees to have knowledge of the company and the industry in which it competes.

All the information applicants need is available on the Dell website, and prospects are strongly encouraged to look it over. Dell uses a competency-based interviewing process: interviewees are asked to draw on past personal experiences and comment on how the situation was handled and what was taken away from the experience. Dell feels that this allows interviewers to get a good feel for the individual’s fit with the job’s required competencies. Dell looks for individuals who possess strong Internet and computing skills.

They look for individuals who are self-motivated and can thrive in a fast-paced, results-oriented environment. Dell has a very relaxed structure and very few concrete policies or ways of doing things; some individuals do not work well in this setting, and the interview process seeks to weed those candidates out. Dell attempts to retain the best personnel in the industry by offering industry-competitive compensation and perks. Compensation includes an industry-competitive salary, lucrative health benefits packages, 401k programs, profit sharing and bonuses, stock purchase plans, and continuing education.

Some of the perks of working for Dell include on-site health clubs, employee deals on computers, and services to help employees manage some of the chores in their personal lives. All of these perks are intended to make it easier for individuals to concentrate on their role at Dell. Dell feels that workforce diversity is crucial to the success of the company, and that diversity is more than just a catch phrase or the right thing to do: true workforce diversity is a business strategy that fosters creativity and innovation. Dell actively recruits at many culturally sponsored job fairs and events.

Through this attitude and these policies, Dell has been successful in creating a diverse workforce and a strong corporate culture. Throughout the last decade, Dell has experienced staggering growth in the computer industry, emerging from an 18-employee basement operation to become the leading supplier of computers in the world. During this time of rapid expansion, Dell has maintained a quality workforce that has made great strides toward becoming the company envisioned by Michael Dell. While past recruiting practices have been largely successful, I feel specific areas are open for improvement.

It is my recommendation that Dell expand its on campus recruiting efforts to include more schools in the United States and abroad. I advise that Dell launch an on campus advertising campaign to promote recruitment through its website at schools without scheduled campus visits. Lastly, I recommend that Dell increase its use of rotational recruiters to provide a better perspective on interviewing. To meet anticipated demand, I feel that Dell should increase the number of prospects by increasing the number of schools it visits.

Recruiting efforts are largely focused on midwestern and southeastern schools, primarily Big 10 and ACC schools. I feel that the company should branch out and extend on campus visits across the country to include Big 12 and Pac 10 schools; these schools are untapped resources of prospective employees. To promote this expansion without dramatically increasing costs, I recommend that Dell cut the number of on campus visits to selected schools from three to two. I feel that this provides adequate exposure to these markets while allowing staff to visit more schools.

In addition to expanding on campus recruiting in the United States, I feel that Dell should extend its recruiting efforts to major universities abroad. Dell considers diversity a major competitive advantage that fosters new ideas, and I feel that this diversity can be vastly improved by overseas recruiting in areas such as Europe, India, and Southeast Asia. In addition to gaining exposure through broader on campus visits, I recommend that Dell launch an on campus advertising program promoting recruitment through the company website.

Simple advertising techniques to increase knowledge of the web presence could include contacting on campus job placement offices to provide them with company information and web instructions, and distributing flyers and posters on campus to be placed on or around commonly read bulletin boards and job boards. Some advertising could be done in school newspapers and magazines as well. I feel that increasing web-recruitment awareness is the most important aspect of my plan to increase correspondence with eligible applicants.

To attract and retain eligible applicants, Dell should broaden its use of a rotational recruiting staff. A rotational recruiting staff places individuals from industry fields in the recruiting market, providing prospective applicants with insight into the actual requirements and demands of the job, reducing turnover and increasing job fit. For Dell to continue as a leader in the rapidly expanding technology industry, it will have to maintain its recruiting advantage. I feel that the recommendations presented in this paper will keep Dell at the forefront of the technology industry.

Computer Crime Essay

The technological revolution has taken full swing. If a business doesn't have some form of e-commerce, or if a person does not have an e-mail address, they are seen as living in the Stone Age. In this new world of virtual life, where with the click of a button a person can travel millions of miles in a few seconds, millions of new opportunities have arisen. However, someone always has to ruin the good things in life. Much as in Hawthorne's "The Scarlet Letter," where the second thing built in a utopia was a prison, computer crime is only becoming more prevalent every day.

The whole idea of a computer crime is rather absurd indeed. Really, who wants to go around spray-painting on computers anyway? The definition of computer crime varies from source to source, the most common being "any illegal act which involves a computer system" ("What is a computer... " p. 1). This holds true even if the computer contains something as simple as a threatening e-mail. Computer crime ranges in nature from relatively small things such as software piracy to magnificent crimes like fraud, and it has metamorphosed from its mere infancy.

In the late 1970s, a would-be criminal needed direct access to the actual computer terminal, because most computer crime of that time involved hardware sabotage and theft rather than software-oriented problems. In the late 1970s and early to mid 1980s, computer crime was elevated a notch with the advent of the inter-school network, a connection of several major universities through modem lines. Educated computer users were now exchanging ideas and information, not for malicious ends but for the better.

The mid to late 1980s saw the rise of computer "hackers" such as Kevin Mitnick. Mitnick was caught at least a half dozen times, on charges ranging from criminal trespassing to fraud. He had broken into several corporations' servers, one being the well-renowned Sun Microsystems. When he was arrested, Mitnick became a martyr and a hero to many teenage computer enthusiasts, teens determined to carry on what they took to be his symbolic spirit.

However, the computer crimes that these users perpetrate cost small businesses and corporations millions each year, put restraints on legitimate computer users, and still remain extremely dangerous, costly, and virtually unstoppable ("A Brief History of... " p. 2). Information technology (IT) specialists walk into work almost every morning only to find one of their servers on the fritz. These problems can arise for several different reasons, ranging from security breaches to mere software conflicts. However, businesses report losses ranging from $5 to $40 billion each year.

The causes of these losses range from having to hire new IT and IS (information systems) specialists to fix a problem, to theft of product and software piracy. Software piracy is by far the largest problem ("Latest Web Statistics" p. 2). An IT specialist's worst nightmare is a renegade "hacker" loose in the system. "Hacker" is a slang name for a person who has enough knowledge to compromise the security of a system. Although less than 11% of all complaints filed in 1998 were of hacking, the extreme danger of a hacker still remains.

Hackers possess the power to compromise valuable system security features and possibly destroy or alter any data they have access to. In the year 2000, a projected 773 complaints will be filed, of which at least 10% will be of hacking ("Latest Web Statistics" p. 1). However, a hacker can only affect a company with Internet access, unless of course they work for the company. Corporate employees would seem the least likely people to harm an employer's network or systems.

The truth is that most security breaches occur because of curious employees "just looking around." Many of these employees do not cause any real harm at all; it just seems that curiosity gets the best of them and they have to look. However, there are cases where a disgruntled employee will take his or her frustrations and anger out on the corporation or business's network. In the event that this happens, the IT department has a very serious problem indeed, because an employee's access, in some networking architectures, goes unnoticed and therefore unchecked.

This almost entirely unchecked access makes an angered employee a hundred times worse than any would-be "hacker," which gives employers all the more reason to be nice to everyone, because they never know when someone could rip down their multi-thousand-dollar network. These in-network security breaches and other crimes cost businesses an estimated $3 billion (Coutourie p. 1). Between "hackers" and disgruntled corporate members, corporations have something to dread every day. Computer criminals see large corporations as a prime target because less than 2% of reports lead to convictions.

Companies lost "roughly" $2 billion in revenues due to software piracy, much of it stolen through the Internet. Companies such as Sun Microsystems have had prototype plans stolen from them and sold to the competition. Many companies are now hiring convicted computer criminals, and many criminals are paid to bring down the systems of the competition ("Latest Web Statistics" p. 2). To fight back, corporations can only try to prevent such attacks. Using operating system security features, especially with the Novell system, corporations can exercise some control over what happens in their network.

Many software manufacturers, such as Norton, make network utility suites that maintain and manage a network, constantly keeping a watchful eye over it. These computer criminals cause terrible problems for legitimate users. "Hackers" and "crackers" are the reason for the numerous raids on bulletin board systems and their operators. The old saying is that possession is nine tenths of the law; taking that into consideration, most bulletin board owner/operators, especially those who are members of the "Underground," are guilty of software piracy as well as several other crimes.

This is because hackers often store their stolen information on another public server, and bulletin boards are the perfect target: these computers are massive in power and size and are definitely capable of storing and processing the thousands of requests to download the stolen material. Despite the seemingly negative nature of these bulletin boards, they are the backbone of the Internet, and many of the thousands that are out there exist with no real malicious intent. However, the federal government has found it necessary to crack down on almost all bulletin board systems by issuing search and seizure warrants.

In March of 1990, the two-year government investigation called Operation Sun Devil executed 27 search warrants (Sulski p. 1). Equipped with these sources of power, federal agents stormed homes and seized only computer-related equipment, so that each system could be analyzed under the virtual microscope of more federal agents. Because of these seizures, the Electronic Frontier Foundation (EFF) was formed to protect the First Amendment rights of legitimate computer users (Sulski p. 4). Hackers do more than just cause bulletin boards to be raided; they cause the restriction of access for normal users.

Information that would normally be readily available to the public eye is now locked tightly behind some virtual door, never to be seen again. All these security measures are a result of fear, the fear that only "hackers" can create. These criminals make people afraid of everything they might say: shipping addresses and phone numbers cannot be given over the Internet for fear that some rogue hacker will steal them and use the new-found information for some malicious purpose. And, as with anything, companies play on the fear of inexperienced users, making them the true fools.

They sell programs that run no less than a year behind the "average hacker" pace, and consumers buy them up thinking that somehow they may protect them and make everything safe and secure. This isn't the case by far, and hundreds of thousands of dollars, if not millions, are spent each year on useless software. To understand how a "hacker" can compromise a system, one must know how and why. Through all the stories told about these Jesse Jameses of the computer world, it is hard to differentiate what a hacker is and what a hacker does. A hacker is no more than a computer punk.

Sometimes smart and intelligent, a hacker does no more than compromise a system because they feel they should have free access to ALL data. These usually experienced users often break into systems, steal data, and often destroy data. Recent years have brought utilities and tools that make hacking a point-and-click event, but the essentials of hacking are still based around a few norms. Hackers must have a computer and a modem. That is it! "Working with less than $300 worth of equipment," hackers say, they can break into systems, snoop through files, and make long distance calls billed to your own home.

With no more than a computer and a modem, a hacker could be an individual's or a company's worst nightmare. By using methods which won't be named in this paper, hackers can use valid usernames and passwords to access, oftentimes, restricted or hidden information. In this manner a hacker can take anything from simple programs to entire identities. With all this power, it is a wonder that hackers don't get a "God complex." To be honest, many do. Many hackers will escalate their crimes, each one growing more severe and equally more challenging.

The question still remains: are there any real ethics, a set of rules that these renegades abide by? For those who consider themselves "true hackers," the answer is yes. Many of these hackers believe in five real rules:

1. Never damage any system.
2. Never alter any system files aside from logs.
3. Never leave your handle (screen name) on computers you hack.
4. Do not hack government computers.
5. Be paranoid. (Revelation p. 4)

Hackers have to live by these five rules; if they don't, their lives as they know them could end in 5 to 15 years in a lovely state or federal prison.

All five of these rules are preventive-maintenance measures that help hackers avoid arrest. The general public wonders, if so much information is available, why aren't any of these characters arrested? The answer is that there is no proof of wrongdoing: mere text files that explain what hackers do, how they do it, and how to do it without getting caught are not criminal in nature. Seeing that hackers obviously do have a set of standards and rules, why does the federal government spread the fears it does, and why does Hollywood glorify hacking the way it does?

The answers are often unclear. The federal government, like anyone else, is afraid of what it does not understand. Hollywood seems to glorify hacking simply because it is a major part of pop culture and Hollywood has no idea what it is talking about. Movies such as "Hackers," "The Net," and "Mission Impossible" are all perfect examples of inaccuracy blown to a whole other proportion. The movie "Hackers" angered so many "true hackers" with its inaccuracy that the website supporting and advertising the movie was defaced horribly on several occasions.

In fact, in Hollywood it is hard to find even a valid-looking e-mail program in a movie! To fight back against these hackers, corporations have devised several schemes. Many have invested thousands in new security features for their systems; where some have beefed up security systems, many others have beefed up their legal teams. These legal teams are on the lookout for anyone who can be portrayed as a computer criminal perpetrating crimes against a company, and these head hunters are often the cause of wrongful convictions of users.

In fact, these legal teams had Kevin Mitnick jailed for three years before he was even allowed to see the evidence against him. Since this is the electronic age, all the evidence against him was on electronic media, and because he was not allowed to use any form of computer, Mitnick was not allowed to view any of it. Whether right or wrong, corporations will spare no expense in protecting their investments, and no one can really blame them either. The federal and state governments have also taken actions against hackers; however, the governments have the upper hand in these battles.

With the power to pass laws and restrict all sorts of access, the federal government has decided to bully all computer users by restricting access to certain information and other perhaps questionable items. In fact, a bill was recently proposed in Congress that would make it a crime to publish any unauthorized information on narcotics, especially marijuana. Is this right? Of course not! It is a direct violation of the First Amendment rights of everyone who would like to publish information on the topic.

This is exactly the type of action the federal government took with the Communications Decency Act, which was later declared unconstitutional on June 26, 1997. This act made it illegal, as do several other laws, to send malicious or threatening information over the Internet. There was a case where a gentleman had written a story to post on a BBS (bulletin board system) that depicted sick and disgusting sexual acts and mutilation. This in itself is not wrong, but he used a fellow classmate's name, and the story was seen as malicious and violent.

The whole ordeal was misconstrued, and it was made to appear that this gentleman had every intent of harming and injuring his classmate. He was convicted of sending threats across state lines and has only recently been released from his lovely stay in federal prison. However, the Communications Decency Act was not the federal government's first attempt to regulate telecommunications; the Communications Act of 1934 was the first attempt to regulate telecommunications at the federal level.

The federal government isn't the only one getting in on the act; state governments are no better. Many states have initiated their own communications laws banning the obvious, such as breaking into systems, but these laws also regulate the type of data and information that can be sent across state lines. Many of them have clauses that make a user prosecutable not only under federal law but also under state law. Many question the fairness of it all. Is it really right to try to regulate the American public like this?

No, it really isn't. However, the federal and state governments are trying to protect their backers, large corporations and businesses. Regulating speech and information is unconstitutional and should be dealt with as such. So the question remains: what is the proper way to deal with computer crime? Is it to take the world off of phone lines and the Internet? Obviously not. The Internet is the fastest growing phenomenon in the world today, and the end does not appear to be coming any time soon.

Then is the answer to be harsher and more strict in the way we punish computer criminals? No; we as a society can't be any more harsh. There are computer criminals who will spend more time in jail than most convicted murderers and rapists. Kevin Mitnick received a 35-year sentence for the crimes he perpetrated, as well as millions in restitution, whereas a person convicted of manslaughter faces a maximum term of 15 years and in most cases can be paroled after 5. Is that fair? No. The real solution is that there is no real solution.

The economy is up, and little kids still haven't stopped stealing bubble gum. The problem is that we have a group of millions who feel that all information and all data should be free to anyone who can use it, so that perhaps society can improve everything together. On the other side, we have corporations and businesses fighting to make their deserved dollars; they reason that if they just give their information away, they can't really make any money. Computer crime is here to stay, and it almost doubles each year.

The only solution for those corporate entities is to keep their main servers off the Internet. Many projects do not require the Internet, so why have that information tied to it at all? There is no reason. Corporations can only try to keep their vital information away from the Internet and beef up their security features. No matter what actions people take to protect themselves, there will always be at least one person who will break the barrier and crumble the walls that are your blanket of security.

A Computer for All Students

The introduction of the graphing calculator has changed the structure of teaching and learning mathematics, making it possible for everybody to receive the benefits of computer-generated visualization without the high cost of a computer. Over the years, graphing calculators have come down in cost, become easier to use, and grown more portable.

The next generation of graphing computers has arrived with the recent introduction of the Texas Instruments TI-92. This relatively inexpensive calculator will allow more high school teachers to teach areas that have mostly gone untouched because doing so has not been practical or possible: computer symbolic algebra and computer interactive geometry. The TI-92 is merely the beginning of the new revolution of hand-held computing tools.

The next challenge mathematics teachers face is the teaching of traditional paper-and-pencil symbolic algebra skills, a task made obsolete by faster and more accurate computer symbolic algebra algorithms. Students can get a far better illustration of important concepts and applications of mathematics with these new hand-held tools than with traditional paper-and-pencil work.
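To make the distinction concrete, here is a minimal sketch of what "symbolic" computation means: differentiating a polynomial exactly by rule, with no numerical approximation. The coefficient representation is an illustrative simplification, not the TI-92's actual algorithm.

```cpp
#include <iostream>
#include <vector>

// Represent a polynomial by its coefficients, lowest power first:
// {1, -3, 2} means 1 - 3x + 2x^2.
std::vector<double> derivative(const std::vector<double>& p) {
    std::vector<double> d;
    // d/dx of c*x^k is (c*k)*x^(k-1), applied term by term -- an exact,
    // symbolic rule, with no numerical approximation involved.
    for (std::size_t k = 1; k < p.size(); ++k) d.push_back(p[k] * k);
    return d;
}

void print(const std::vector<double>& p) {
    for (std::size_t k = 0; k < p.size(); ++k)
        std::cout << (k ? " + " : "") << p[k] << "x^" << k;
    std::cout << "\n";
}

int main() {
    std::vector<double> p = {1, -3, 2};  // 1 - 3x + 2x^2
    print(p);              // the original polynomial
    print(derivative(p));  // its exact derivative: -3 + 4x
    return 0;
}
```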

Paper-and-pencil work and other traditional skills must still be acquired, but students should spend less time acquiring them. More emphasis must be put on computing tools, and students should take advantage of computer technology to become powerful and thoughtful “problem solvers.”

The process of changing from traditional methods to a more computer-oriented environment has to be embraced by the education and mathematics community. Educators should have textbooks that better represent the new technology, teachers need to be more technology literate, and the mathematics community must dispel the image of “doing mathematics” only with traditional paper-and-pencil methods.

These reforms can better teach students the important skills needed for the future. The use of technology in mathematics will give students an advantage in mathematics and related technology, and students will need that advantage if they wish to compete in the twenty-first century.

Opinion

This article stressed very important issues that educators, teachers, and the mathematics community must face. The reform will change the course of mathematics in school and elsewhere. As a student, I am very concerned about the future of mathematics; my own plans revolve around mathematics and technology. I understand the need to continue using paper-and-pencil methods, but computing tools should be added to the current curriculum.

The future will be technologically intense and very competitive. Graphing calculators have enhanced mathematics, and I think the new powerful computing tools will do the same for the next generation. These hand-held computers are inexpensive and contain very powerful and versatile software. This could be the computer for all mathematics students.

Java or C++: Which Is Better for Businesses?

Today, the world is changing fast in many ways, and the most rapid change in our society is technology. It is imperative that businesses stay on top of what is new and how they can better their company's outlook by presenting their information in the fastest and most reliable ways. Of the two major computer programming languages of today, C++ and Java, which is better for businesses seeking such speed and consistency?

For years, C++ (C Plus Plus) has dominated the business marketplace for many different companies and has allowed many computer programmers to build vast amounts of knowledge and experience; its precursor C was first developed in 1972 by Dennis Ritchie of AT&T Bell Laboratories (Lambert / Nance Page 16). The language has been in use for decades, counting the years of its precursor C, and has made a great impact on the development of business software across the world.

It has become second nature to the programmers who use it and have been forced to stay with it. The C++ language can be used in many ways. It has exploded into the gaming community, giving PC game programmers access to a stable yet powerful programming language that uses as little code as possible. It has also been used in other commercial software, such as word processors, audio players, screen savers, and other computer desktop tools. More recently, C++ has made its way into the Internet community.

For over ten years, businesses have used C++ for their Internet needs: for example, sending and receiving important business data across the Internet, quickly and safely, so that it reaches the other end of the communication in one piece. With the high demands of today's Internet users, whether an online shopper or someone seeking information on a certain topic, it is essential that information travel from the user to the server and back again as swiftly and dependably as possible, all, of course, without any loss of security.

For this reason, C++ has stayed on top of the business world, offering speed, stability, security, and ease of use in one programming language. It is a fairly simple language to learn, which allows many people to pick it up quickly, share their ideas with others, and learn even more. Microsoft has made C++ an even more popular language through its use on the Microsoft website and in its well-known software, the Microsoft Office suites. In the past year, Microsoft has developed a new development platform called .NET and a considerably new computer programming language called C# (C Sharp).

C# is yet another rendition of the C++ programming language, but created entirely for use on the Internet. This is a great advantage for businesses that do not wish to convert their computer programmers to Java, or to hire new employees who already know Java, which would give Java programmers a chance to ask for more money at the company's expense.

This new Internet-ready development system is capable of utilizing an enormous number of programming languages used by Internet programmers, including C++ (Dragan Page 3). The only drawback is that the version of Java that can be programmed into the .NET infrastructure is much older than what is used today and will never be upgraded, because of Sun Microsystems' 1997 lawsuit against Microsoft, which rendered versions of Java newer than v1.0 useless to Microsoft and its supporters (Berger Page 1).

With such a dominant company backing the future of C++ and developing new methods for Internet programming, why would any business use any language other than the one used by the world-renowned Microsoft Corporation? Yet like any software purchased for a computer, it is bound to have bugs within, waiting to ruin a computer or crash at the most pertinent moment. The C++ programming language has been known to give its developers far too much power for their own good.

That is, programmers using C++ are able to tweak memory settings or system memory usage directly, leaving the way open for the corruption of a computer system's memory, resulting in slow performance, instability, or a system crash (a short illustration of this low-level control follows this paragraph). Most of all, a C++ program is limited to one type of system at a time: a program written in C++ is built solely for a computer running Microsoft Windows, for example, or solely for a computer running a Macintosh operating system.
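As a minimal illustration of the memory-level power described above, the sketch below allocates and frees raw memory by hand; the buffer and sizes are arbitrary. Nothing but programmer discipline prevents the out-of-bounds writes and leaks that cause the instability mentioned here.

```cpp
#include <iostream>

int main() {
    const int n = 8;
    int* buf = new int[n];  // raw allocation: the programmer owns this memory

    for (int i = 0; i < n; ++i) buf[i] = i * i;

    // Nothing in the language stops a write to buf[n] or beyond; doing so
    // is undefined behavior and can corrupt memory or crash the program.
    std::cout << "last element: " << buf[n - 1] << "\n";

    delete[] buf;  // forgetting this line would silently leak the memory
    return 0;
}
```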

The same C++ code can rarely be used by both types of systems, which leads to the difficult process of porting: changing the code of a program written for Microsoft Windows, for example, so that it runs on a Macintosh. This is always a tough feat to accomplish and takes a long time to complete. So then, what is the alternative to this too-powerful, hard-to-port programming language? By 1997, Sun Microsystems had introduced a new Internet-based programming language with more compatibility across systems than its competitor, C++.

The Java programming language is similar to C++ in its coding; to learn C++ is to learn ninety percent of Java, and vice versa (Liberty Page 11). Yet Java is very different because it runs on a virtual machine, which can be installed on any type of computer system, allowing the same program code to run on any system. This is a lifesaver for many businesses and computer programmers. But of course it all comes with a price, a pretty hefty price to some businesses.

Java is known to be slow at sending and receiving data, which casts a bad light on this infant of a programming language; and the major goal for anyone on the Internet, other than actually receiving the data, is getting it in a relatively quick amount of time. Also, Sun Microsystems has designed Java to make learning the ins and outs of the language much easier than C++: they have made it less intricate and much simpler, so that anyone can come along and learn the language with little to no skill at all (Carucci Page 3).

This poses the risk of allowing mediocre programmers to enter a business and waste its time, money, and productivity. It is hard not to point out all the bad features of the Java programming language, for it is very young in the field and has not had the time and attention that C++ has had, tweaked and enhanced by millions of users. But Java is so well developed in its own way that Microsoft has made it a point to copy its most useful functional aspects, such as the use of a virtual machine, and has applied these characteristics in its .NET development software.

The whole idea of the .NET development system is so similar to Java that it is almost comical. Many ask why Sun Microsystems has not submitted Java to ANSI (the American National Standards Institute, which sets the standards for many programming languages so that they can be compiled and interpreted on other machines) (Liberty Page 11-12), and have concluded that Sun is trying to create a whole new system to run Java on.

Like Microsoft, which has tried to monopolize many desktop productivity tools as well as Internet browsers and more, Sun Microsystems is trying to make the Java virtual machine a completely separate system that runs all Java applications, morphing into something like an operating system (Carucci Page 5). Sun Microsystems is simply trying to compete in the area that the Microsoft Corporation has dominated for so long. Looking at the new development Microsoft is working on, they are trying to accomplish a similar feat.

They [Microsoft] are trying to get all computer programmers to take their skills and flock to the .NET system, because those programmers know C++ so well and it is much more powerful than Java itself. At the same time, Microsoft is reinforcing the monopoly of its line of operating systems, Windows, because it is the only operating system supported under the .NET framework. It is now easy to say that C++ should surely be the choice of programming languages for building Internet-ready applications, because of its popularity and its enormous community of programmers.

It is best for businesses because of the power it offers users, the stability and security it provides at both ends of a connection, and its ability to quickly transmit data back and forth between users and store the information in databases. Tests have been performed showing that C++ can decisively beat an equivalent Java program on speed. In one instance, a program was written in each language to add up matrices ranging in size from one by one to 100 by 100. Each program ran five times, amounting to 500 data values.
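The cited write-up does not reproduce its source code, but a test of this shape is straightforward to sketch in a few lines of C++. The version below, timed with std::chrono, is a reconstruction under assumptions: the matrix element type, the use of std::vector, and the timing method are all choices made here, not details taken from Galyon's test:

    #include <chrono>
    #include <cstddef>
    #include <iostream>
    #include <vector>

    using Matrix = std::vector<std::vector<int>>;

    // Add two n-by-n matrices element by element.
    Matrix add(const Matrix& a, const Matrix& b) {
        std::size_t n = a.size();
        Matrix c(n, std::vector<int>(n));
        for (std::size_t i = 0; i < n; ++i)
            for (std::size_t j = 0; j < n; ++j)
                c[i][j] = a[i][j] + b[i][j];
        return c;
    }

    int main() {
        auto start = std::chrono::steady_clock::now();
        // Add matrices of every size from 1x1 up to 100x100, as in the
        // test described above.
        for (std::size_t n = 1; n <= 100; ++n) {
            Matrix a(n, std::vector<int>(n, 1));
            Matrix b(n, std::vector<int>(n, 2));
            Matrix c = add(a, b);
        }
        auto end = std::chrono::steady_clock::now();
        std::cout << std::chrono::duration<double>(end - start).count()
                  << " seconds\n";
        return 0;
    }

A Java counterpart would be nearly line-for-line identical, which is what makes the pair a clean speed comparison.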

The results showed that, in real time, the C++ program executed and completed in a mere 0.24 seconds, while the Java program took almost 2 seconds; the C++ program ran 7.2 times faster than the Java program (Galyon Page 6). More tests were completed and their results compiled; they showed that the C++ programs consistently ran around seven times faster than the Java programs (Galyon Page 3-9). It is apparent even in today's world that many businesses and companies are making the move, if they are not already there, to C++ and, soon, the .NET development system. Major companies like the Microsoft Corporation, Lionhead Studios, id Software, and AT&T all utilize the C++ programming language, and all for different areas of computing. Although it would be nice to be able to make use of a programming language such as Java, so that programs can be written once and run everywhere (MacVittie Page 1), not everything can be perfect, and the perfect choice for business computer programmers would have to be C++.

The Meaning of a Hacker

The meaning of a hacker is one who accesses a computer that supposedly cannot be accessed by unauthorized people. Hackers may use any type of system to access this information, depending on what they intend to do in the system.

Methods

Hackers may use a variety of ways to hack into a system. First, if the hacker is experienced and smart, the hacker will use telnet to access a shell on another machine, so that the risk of getting caught is lower than it would be using their own system.

Ways in which the hacker will break into the system are:

1) Guessing/cracking passwords. This is where the hacker takes guesses at the password or runs a crack program to crack the password protecting the system.

2) Finding back doors. This is where the hacker tries to find flaws in the system they are trying to enter.

3) Using a program called a worm. This program is specially written to suit the needs of the user. It continually tries to connect to a machine, over 100 times a second, until eventually the system lets it in and the worm executes its program.

The program could do anything from grabbing password files to deleting files, depending on what it has been programmed to do.

Protection

The only way that you or a company can stop a hacker is by not having your computer connected to the net. This is the only sure-fire way to stop a hacker entering your system, mainly because hackers use a phone line to access the system. If it is possible for one person to access the system, then it is possible for a hacker to gain access to it.

One of the main problems is that major companies need to be networked and accessible over the net so that employees can do overdue work or so that people can look things up about the company. Major companies also network their offices so that they can access data from different locations. One measure used to try to prevent hackers from gaining access is a program called a firewall. A firewall is a program that stops connections from other servers to the firewall server. This is very effective in stopping hackers from entering the system.
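At its simplest, a packet-filtering firewall compares each incoming connection against a rule table and rejects anything the rules do not permit. The toy C++ fragment below illustrates the idea only; the addresses are invented, and a real firewall inspects far more than the source address (ports, protocols, connection state, and so on):

    #include <iostream>
    #include <string>
    #include <unordered_set>

    // Hypothetical rule table: only these source addresses may connect.
    const std::unordered_set<std::string> kAllowedHosts = {
        "10.0.0.5",  // e.g. an internal mail server (invented value)
        "10.0.0.9",  // e.g. an internal file server (invented value)
    };

    // Decide whether an incoming connection should be accepted.
    bool allowConnection(const std::string& sourceAddress) {
        return kAllowedHosts.count(sourceAddress) > 0;
    }

    int main() {
        std::cout << allowConnection("10.0.0.5") << "\n";    // 1: allowed
        std::cout << allowConnection("203.0.113.7") << "\n"; // 0: blocked
        return 0;
    }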

Although this is not a foolproof way of stopping hackers, since it can be broken and hackers can get in, it is a very good way of protecting your system on the Internet.

Major Hacks

Some of the major hacks that have been committed have been done by young teens aged between 14 and 18. These computer geniuses, as they are known, have expert knowledge of what they are doing and also know the consequences. One such hack occurred on February 10, 1997, and again on February 14, 1997, when Portuguese hackers launched a political attack on the web page of the Indonesian government, focusing on that country's continued oppression of East Timor.

The attack was online for about three hours, from 7:00 PM to 10:00 PM (Portuguese time), at the web site of the Department of Foreign Affairs, Republic of Indonesia. The hackers did not delete or change anything; they said, "We just hack pages." Another major hack occurred on April 1st, 1981, by a single user. This hacker, who worked in an East Coast brokerage house, was interested in the stock market, so he purchased $100,000 worth of shares. Then he hacked into the stock market's main computers and stole $80 million.

The hacker was eventually caught, although $53 million was never recovered. On Wednesday, March 5th, 1997, the home page of the National Aeronautics and Space Administration (NASA) was hacked and its contents changed by a group known as H4G1S. The hackers changed the webpage and left a little message for all. It said: "Gr33t1ngs fr0m th3 m3mb3rs 0f H4G1S. Our mission is to continue where our colleagues, the ILF, left off. During the next month, we, the members of H4G1S, will be launching an attack on corporate America.

"All who profit from the misuse of the Internet will fall victim to our upcoming reign of digital terrorism. Our privileged and highly skilled members will stop at nothing until our presence is felt nationwide. Even your most sophisticated firewalls are useless. We will demonstrate this in the upcoming weeks." Also, the homepage of the United States Air Force was recently hacked and its contents changed. The webpage had been altered completely: the hackers had inserted pornographic pictures with the caption "this is what we are doing to you" and, under the image, "screwing you."

The hackers changed the page to show their views on the political system. One other major hack was committed by a 16-year-old boy in Europe. This boy hacked into the British Air Force and downloaded confidential information on ballistic missiles. He broke into the site and downloaded this information because he was interested and wanted to know more about the missiles. The boy was fined a very hefty sum of money for his deeds. In conclusion, it can be said that hackers are sophisticated and very talented when it comes to the use of a computer. Hacking is a process of learning, not of following any manual.

Hackers learn as they go, using a method of trial and error. Most people who say they are hackers are, most of the time, not. Real hackers do not delete or destroy any information on the systems they hack. Hackers hack because they love the thrill of getting into a system that is supposedly unable to be entered. Overall, hackers are smart and cause little damage to the systems they enter. Hackers are not really terrorists; if anything, they can help companies find flaws in their systems and perhaps protect them from people who would do real damage.

CMIP vs. SNMP: Network Management

Imagine yourself as a network administrator, responsible for a 2,000-user network. This network reaches from California to New York, with some branches overseas. In this situation anything can, and usually does, go wrong, and it would be your job as the system administrator to resolve each problem as it arises, as quickly as possible. The last thing you would want is for your boss to call you up, asking why you haven't done anything to fix the two major systems that have been down for several hours.

How do you explain to him that you didn't even know about it? Would you even want to tell him that? Now picture yourself in the same situation, only this time you are using a network monitoring program, sitting in front of a large screen displaying a map of the world, leaning back gently in your chair. A gentle warning tone sounds, and looking at your display, you see that California is now glowing a soft red, in place of the green glow of just moments before. You select the state of California, and the display zooms in for a closer look.

You see a network diagram overview of all the computers your company has within California. Two systems are flashing, with an X on top of them indicating that they are experiencing problems. Tagging the two systems, you press enter, and with a flash the screen displays all the statistics of the two systems, including anything they might have in common that could be causing the problem. Seeing that both systems are linked to the same card of a network switch, you pick up the phone and give that branch office a call, notifying them not only that they have a problem, but how to fix it as well.

Early in the days of computers, a central computer (called a mainframe) was connected to a number of dumb terminals using standard copper wire. Not much thought was put into how this was done, because there was only one way to do it: they were either connected, or they weren't. Figure 1 shows a diagram of these early systems. If something went wrong with this type of system, it was fairly easy to troubleshoot; the blame almost always fell on the mainframe system.

Shortly after the introduction of the Personal Computer (PC) came Local Area Networks (LANs), forever changing the way in which we look at networked systems. LANs originally consisted of just PCs connected into groups, but soon there came a need to connect those individual LANs together, forming what is known as a Wide Area Network, or WAN. The result was a complex connection of computers joined together using various types of interfaces and protocols. Figure 2 shows a modern-day WAN.

Last year, a survey of Fortune 500 companies showed that 15% of their total computer budget, US$1.6 million, was spent on network management (Rose, 115). Because of this, much attention has focused on two families of network management protocols: the Simple Network Management Protocol (SNMP), which comes from a de facto standards-based background of TCP/IP communication, and the Common Management Information Protocol (CMIP), which derives from a de jure standards-based background associated with the Open Systems Interconnection (OSI) (Fisher, 183).

In this report I will cover the advantages and disadvantages of both the Common Management Information Protocol (CMIP) and the Simple Network Management Protocol (SNMP), as well as discuss a new protocol for the future. I will also give some good reasons why I believe that SNMP is a protocol that all network administrators should use. SNMP is a protocol that enables a management station to configure, monitor, and receive trap (alarm) messages from network devices (Feit, 12). It is formally specified in a series of related Request for Comments (RFC) documents, listed here.
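In outline, an SNMP manager polls each device's agent for values identified by OIDs (object identifiers) and listens for traps the agents send on their own. The C++ fragment below is a deliberately simplified, hypothetical illustration of that model; snmpGet and its types are invented for the example, and a real manager would use a library such as net-snmp and send actual GetRequest PDUs:

    #include <iostream>
    #include <string>
    #include <vector>

    struct SnmpValue {
        std::string oid;    // object identifier, e.g. "1.3.6.1.2.1.1.3.0" (sysUpTime)
        std::string value;  // the value reported by the device's agent
    };

    // Hypothetical helper: ask a device's agent for the value bound to one OID.
    SnmpValue snmpGet(const std::string& device, const std::string& oid) {
        (void)device;
        return {oid, "placeholder"};  // a real call would send a GetRequest PDU
    }

    int main() {
        // Invented device names; a real manager reads these from its inventory.
        std::vector<std::string> devices = {"router-ca-1", "switch-ny-2"};
        for (const auto& device : devices) {
            // Poll each device for its uptime. A dropped reply or a reset
            // counter is exactly what turns a map icon red in the scenario
            // described earlier.
            SnmpValue v = snmpGet(device, "1.3.6.1.2.1.1.3.0");
            std::cout << device << " uptime: " << v.value << "\n";
        }
        return 0;
    }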

Management Information Systems

The term Management Information Systems (MIS) has come to refer to a wide range of applications of computers to data processing and analysis problems in the private and public sectors. The pace of developments in computing in general, and MIS in particular, is breathtaking. Traditional concepts of how computers can and should be integrated into businesses are being challenged by worldwide telecommunications and the transmission of sound, graphics, and video alongside text. Virtually all successful businesses use computers extensively.

If you don't like computers and want to have a career in business that involves little use of them, think again. You don't have to like them, but you will have to deal with them extensively. This is a fact of life, along with the hole in the ozone, Oklahoma City, TWA 800, AIDS, and The Real World on MTV (now in its fifth season!). Computers can have a profound impact on the way that power is distributed in society. Those who ignore computers are apt to be left out of important decisions.

You may even become the person in your firm who has responsibility for its use of information technology. Nevertheless, many people have little understanding of what computers are and what they can do. There is a desperate need in our society for liberally educated people who are able to balance the enormous possibilities of computing with its potentially harmful consequences. In the business world, there has been a gap between those who are computer-smart and those who speak the language of business.

What Is a Computer Virus?

During the past six years, computer viruses have caused an unaccountable amount of damage – mostly due to loss of time and resources. For most users, the term "computer virus" is a synonym for the worst nightmares that can happen on their system. Yet some well-known researchers keep insisting that it is possible to use the replication mechanism of viral programs for some useful and beneficial purposes. This paper is an attempt to summarize why exactly the general public regards computer viruses as something inherently bad.

It also considers several of the proposed models of "beneficial" viruses and points out the problems in them. A set of conditions is listed to which every virus that claims to be beneficial must conform. Finally, a realistic model using replication techniques for beneficial purposes is proposed, and directions are given in which this technique can be improved further. The paper also demonstrates that the main reason for the conflict between those supporting the idea of a "beneficial virus" and those opposing it is that the two sides assume different definitions of what a computer virus is.

The general public usually associates the term “computer virus” with a small, nasty program, which aims to destroy the information on their machines. As usual, the general public’s understanding of the term is incorrect. There are many kinds of destructive or otherwise malicious computer programs and computer viruses are only one of them. Such programs include backdoors, logic bombs, trojan horses and so on [Bontchev94]. Furthermore, many computer viruses are not intentionally destructive – they simply display a message, play a tune, or even do nothing noticeable at all.

The important thing, however, is that even those not intentionally destructive viruses are not harmless – they cause a lot of damage in terms of the time, money, and resources spent to remove them – because they are generally unwanted and the user wishes to get rid of them. A much more precise and scientific definition of the term "computer virus" has been proposed by Dr. Fred Cohen in his paper [Cohen84]. This definition is mathematical – it defines the computer virus as a sequence of symbols on the tape of a Turing Machine.

The definition is rather difficult to express exactly in a human language, but an approximate interpretation is that a computer virus is a "program that is able to infect other programs by modifying them to include a possibly evolved copy of itself". Unfortunately, there are several problems with this definition. One of them is that it does not mention the possibility of a virus infecting a program without modifying it – by inserting itself in the execution path. Some typical examples are the boot sector viruses and the companion viruses [Bontchev94].

However, this is a flaw only of the human-language expression of the definition – the mathematical expression defines the terms "program" and "modify" in a way that clearly includes the kinds of viruses mentioned above. A second problem with the above definition is its lack of recursiveness. That is, it does not specify that after infecting a program, a virus should be able to replicate further, using the infected program as a host.

Another, much more serious problem with Dr. Cohen's definition is that it is too broad to be useful for practical purposes. In fact, his definition classifies as "computer viruses" even such cases as a compiler compiling its own source, a file manager being used to copy itself, and even the program DISKCOPY when it is on a diskette containing the operating system – because it can be used to produce an exact copy of the programs on that diskette.

In order to understand the reason for the above problem, we should pay attention to the goal for which Dr. Cohen's definition was developed. His goal was to prove several interesting theorems about the computational aspects of computer viruses [Cohen89]. In order to do this, he had to develop a mathematical (formal) model of the computer virus. For this purpose, one needs a mathematical model of the computer. One of the most commonly used models is the Turing Machine (TM). Indeed, there are a few others (e.g., Markov chains, the Post Machine, etc.), but they are not as convenient as the TM, and all of them are proven to be equivalent to it.

Unfortunately, in the environment of the TM model we cannot speak about "programs" which modify "other programs" – simply because a TM has only one single program: the contents of the tape of that TM. That is why Cohen's model of a computer virus considers the history of the states of the tape of the TM. If a sequence of symbols on this tape appears at a later moment somewhere else on the tape, then this sequence of symbols is said to be a computer virus for this particular TM.
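Restating that description symbolically (an informal paraphrase of the idea sketched above, not Dr. Cohen's exact formalism, which is built on "viral sets"):

    % A sequence of symbols v is a virus for a given Turing Machine M if,
    % whenever v occupies a region of M's tape at some time t, a possibly
    % evolved copy v' appears in a different region at a later time t'.
    v \in V_M \iff
      \big( v \text{ on the tape at time } t \big) \implies
      \exists\, t' > t,\ \exists\, v' \in V_M :\
      v' \text{ appears elsewhere on the tape at time } t'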

It is important to note that a computer virus should always be considered in relation to some given computing environment – a particular TM. It can be proven ([Cohen89]) that for any particular TM there exists a sequence of symbols which is a virus for that particular TM. Finally, technical computer experts usually use definitions of the term "computer virus" which are less precise than Dr. Cohen's model, while at the same time being much more useful for practical purposes and still much more correct than the general public's vague understanding of the term.

One of the best such definitions is ([Seborg]): "We define a computer 'virus' as a self-replicating program that can 'infect' other programs by modifying them or their environment such that a call to an 'infected' program implies a call to a possibly evolved, and in most cases, functionally similar copy of the 'virus'." The important thing to note is that a computer virus is a program that is able to replicate by itself. The definition does not specify explicitly that it is a malicious program. Also, a program that does not replicate is not a virus, regardless of whether it is malicious or not.

Therefore maliciousness is neither a necessary nor a sufficient property for a program to be a computer virus. Nevertheless, in the past ten years a huge number of intentionally or unintentionally destructive computer viruses have caused an unaccountable amount of damage – mostly due to the loss of time, money, and resources needed to eradicate them – because in all cases they have been unwanted. Some damage has also been caused by a direct loss of valuable information due to an intentionally destructive payload of some viruses, but this loss is relatively minor when compared to the main one.

Lastly, a third, indirect kind of damage is caused to society – many users are forced to spend money on buying, and time on installing and using, several kinds of anti-virus protection. Does all this mean that computer viruses can only be harmful? Intuitively, computer viruses are just a kind of technology. As with any other kind of technology, they are ethically neutral – they are neither "bad" nor "good" – it is the purposes that people use them for that can be "bad" or "good". So far they have been used mostly for bad purposes.

It is therefore natural to ask whether it is possible to use this kind of technology for good purposes. Indeed, several people have asked this question – with Dr. Cohen being one of the most active proponents of the idea [Cohen91]. Some less qualified people have even attempted to implement the idea, but have failed miserably (see section 3). It is natural to ask – why? Let's consider the reasons why the idea of a "good" virus is usually rejected by the general public. In order to do this, we shall consider why people think that a computer virus is always harmful and cannot be used for beneficial purposes.

Computer Crime Is Increasing

Computer crime is generally defined as any crime accomplished through special knowledge of computer technology. Increasing instances of white-collar crime involve computers as more businesses automate and the information held by the computers becomes an important asset. Computers can also become objects of crime when they or their contents are damaged, for example when vandals attack the computer itself, or when a “computer virus” (a program capable of altering or erasing computer memory) is introduced into a computer system.

As subjects of crime, computers represent the electronic environment in which frauds are programmed and executed; an example is the transfer of money balances in accounts to perpetrators' accounts for withdrawal. Computers are instruments of crime when they are used to plan or control such criminal acts. Examples of these types of crimes are complex embezzlements that might occur over long periods of time, or a computer operator's use of a computer to steal or alter valuable information held by an employer.

Variety and Extent

Since the first cases were reported in 1958, computers have been used for most kinds of crime, including fraud, theft, embezzlement, burglary, sabotage, espionage, murder, and forgery. One study of 1,500 computer crimes established that most of them were committed by trusted computer users within businesses, i.e., persons with the requisite skills, knowledge, access, and resources. Much of known computer crime has consisted of entering false data into computers. This method of computer crime is simpler and safer than the complex process of writing a program to change data already in the computer.

Now that personal computers with the ability to communicate by telephone are prevalent in our society, increasing numbers of crimes have been perpetrated by computer hobbyists, known as "hackers," who display a high level of technical expertise. These "hackers" are able to manipulate various communications systems so that their interference with other computer systems is hidden and their real identity is difficult to trace. The crimes committed by most "hackers" consist mainly of simple but costly electronic trespassing, copyrighted-information piracy, and vandalism.

There is also evidence that organised professional criminals have been attacking and using computer systems as they find their old activities and environments being automated. Another area of grave concern to both the operators and users of computer systems is the increasing prevalence of computer viruses. A computer virus is generally defined as any sort of destructive computer program, though the term is usually reserved for the most dangerous ones. The ethos of a computer virus is an intent to cause damage, "akin to vandalism on a small scale, or terrorism on a grand scale." There are many ways in which viruses can be spread.

A virus can be introduced to networked computers, thereby infecting every computer on the network, or spread by sharing disks between computers. As more home users now have access to modems, bulletin board systems where users may download software have increasingly become the target of viruses. Viruses cause damage by attacking another file, by simply filling up the computer's memory, or by using up the computer's processor power. There are a number of different types of viruses, but one of the factors common to most of them is that they all copy themselves (or parts of themselves).

Viruses are, in essence, self-replicating. We will now consider a "pseudo-virus" called a worm. People in the computer industry do not agree on the distinctions between worms and viruses. Regardless, a worm is a program specifically designed to move through networks. A worm may have constructive purposes, such as finding machines with free resources that could be used more efficiently, but usually a worm is used to disable or slow down computers. More specifically, worms are defined as "computer virus programs … [which] propagate on a computer network without the aid of an unwitting human accomplice.

These programs move of their own volition based upon stored knowledge of the network structure." Another type of virus is the "Trojan Horse." These viruses hide inside another, seemingly harmless program, and once the Trojan Horse program is used on the computer system, the virus spreads. One of the most famous virus types of recent years is the Time Bomb, a delayed-action virus of some type. This type of virus gained notoriety as a result of the Michelangelo virus, which was designed to erase the hard drives of people using IBM-compatible computers on the artist's birthday.

Michelangelo was so prevalent that it was even distributed accidentally by some software publishers when the software developers’ computers became infected. SYSOPs must also worry about being liable to their users as a result of viruses which cause a disruption in service. Viruses can cause a disruption in service or service can be suspended to prevent the spread of a virus. If the SYSOP has guaranteed to provide continuous service then any disruption in service could result in a breach of contract and litigation could ensue.

However, contract provisions could provide for excuse or deferral of obligations in the event of a disruption of service by a virus.

Legislation

The first federal computer crime law, entitled the Counterfeit Access Device and Computer Fraud and Abuse Act of 1984, was passed in October of 1984. The Act made it a felony to knowingly access a computer without authorisation, or in excess of authorisation, in order to obtain classified United States defence or foreign relations information with the intent or reason to believe that such information would be used to harm the United States or to advantage a foreign nation.

The act also attempted to protect financial data. Attempted access to obtain information from the financial records of a financial institution or from a consumer file of a credit reporting agency was also outlawed. Access to use, destroy, modify or disclose information found in a computer system (as well as to prevent authorised use of any computer used for government business) was also made illegal. The 1984 Act had several shortcomings and was revised by the Computer Fraud and Abuse Act of 1986. Three new crimes were added in the 1986 Act.

These were: a computer fraud offence, modelled after the federal mail and wire fraud statutes; an offence for the alteration, damage or destruction of information contained in a "federal interest computer"; and an offence for trafficking in computer passwords under some circumstances. Even the knowing and intentional possession of a sufficient quantity of counterfeit or unauthorised "access devices" is illegal. This statute has been interpreted to cover computer passwords "which may be used to access computers to wrongfully obtain things of value, such as telephone and credit card services."

Remedies and Law Enforcement

Business crimes of all types are probably decreasing as a direct result of increasing automation. When a business activity is carried out with computer and communications systems, data are better protected against modification, destruction, disclosure, misappropriation, misrepresentation, and contamination. Computers impose a discipline on information workers and facilitate the use of almost perfect automated controls that were never possible when these had to be applied by the workers themselves under management edict.

Computer hardware and software manufacturers are also designing computer systems and programs that are more resistant to tampering. Recent U.S. legislation, including laws concerning privacy, credit card fraud, and racketeering, provides criminal-justice agencies with tools to fight business crime. As of 1988, all but two states had specific computer-crime laws, and a federal computer-crime law (1986) deals with certain crimes involving computers in different states and in government activities.

Conclusion

There are no valid statistics about the extent of computer crime.

Victims often resist reporting suspected cases, because they can lose more from embarrassment, lost reputation, litigation, and other consequential losses than from the acts themselves. Limited evidence indicates that the number of cases is rising each year because of the increasing number of computers in business applications where crime has traditionally occurred. The largest recorded crimes involving insurance, banking, product inventories, and securities have resulted in losses of tens of millions to billions of dollars, and all of these crimes were facilitated by computers.

History of the Computer Industry in America

America and the Computer Industry

Only once in a lifetime will a new invention come about to touch every aspect of our lives. Such a device, one that changes the way we work, live, and play, is a special one indeed. A machine that has done all this and more now exists in nearly every business in the U.S. and one out of every two households (Hall, 156). This incredible invention is the computer. The electronic computer has been around for over half a century, but its ancestors have been around for 2,000 years.

However, only in the last 40 years has it changed American society. From the first wooden abacus to the latest high-speed microprocessor, the computer has changed nearly every aspect of people's lives for the better. The very earliest ancestor of the modern-day computer is the abacus, which dates back almost 2,000 years. It is simply a wooden rack holding parallel wires on which beads are strung. When these beads are moved along the wire according to "programming" rules that the user must memorize, all ordinary arithmetic operations can be performed (Soma, 14).

The next innovation in computers took place in 1642, when Blaise Pascal invented the first digital calculating machine. It could only add numbers, and they had to be entered by turning dials. It was designed to help Pascal's father, who was a tax collector (Soma, 32). In the early 1800s, a mathematics professor named Charles Babbage designed an automatic calculation machine. It was steam powered and could store up to 1,000 50-digit numbers. Built into his machine were operations that included everything a modern general-purpose computer would need.

It was programmed by, and stored data on, cards with holes punched in them, appropriately called punchcards. His inventions were failures for the most part because of the lack of precision machining techniques used at the time and the lack of demand for such a device (Soma, 46). After Babbage, people began to lose interest in computers. However, between 1850 and 1900 there were great advances in mathematics and physics that began to rekindle the interest (Osborne, 45). Many of these new advances involved complex calculations and formulas that were very time-consuming for human calculation.

The first major use for a computer in the U.S. was during the 1890 census. Two men, Herman Hollerith and James Powers, developed a new punched-card system that could automatically read the information on the cards without human intervention (Gulliver, 82). Since the population of the U.S. was increasing so fast, the computer was an essential tool in tabulating the totals. These advantages were noted by commercial industries and soon led to the development of improved punch-card business-machine systems by International Business Machines (IBM), Remington-Rand, Burroughs, and other corporations.

By modern standards the punched-card machines were slow, typically processing from 50 to 250 cards per minute, with each card holding up to 80 digits. At the time, however, punched cards were an enormous step forward; they provided a means of input, output, and memory storage on a massive scale. For more than 50 years following their first use, punched-card machines did the bulk of the world's business computing and a good portion of the computing work in science (Chposky, 73).

By the late 1930s punched-card machine techniques had become so well established and reliable that Howard Hathaway Aiken, in collaboration with engineers at IBM, undertook construction of a large automatic digital computer based on standard IBM electromechanical parts. Aiken’s machine, called the Harvard Mark I, handled 23-digit numbers and could perform all four arithmetic operations. Also, it had special built-in programs to handle logarithms and trigonometric functions. The Mark I was controlled from prepunched paper tape. Output was by card punch and electric typewriter.

It was slow, requiring 3 to 5 seconds for a multiplication, but it was fully automatic and could complete long computations without human intervention (Chposky, 103). The outbreak of World War II produced a desperate need for computing capability, especially for the military. New weapons systems were produced which needed trajectory tables and other essential data. In 1942, John P. Eckert, John W. Mauchley, and their associates at the University of Pennsylvania decided to build a high-speed electronic computer to do the job. This machine became known as ENIAC, for "Electronic Numerical Integrator And Calculator".

It could multiply two numbers at the rate of 300 products per second, by finding the value of each product from a multiplication table stored in its memory. ENIAC was thus about 1,000 times faster than the previous generation of computers (Dolotta, 47). ENIAC used 18,000 standard vacuum tubes, occupied 1800 square feet of floor space, and used about 180,000 watts of electricity. It used punched-card input and output. The ENIAC was very difficult to program because one had to essentially re-wire it to perform whatever task he wanted the computer to do.

It was, however, efficient in handling the particular programs for which it had been designed. ENIAC is generally accepted as the first successful high-speed electronic digital computer and was used in many applications from 1946 to 1955 (Dolotta, 50). Mathematician John von Neumann was very interested in the ENIAC. In 1945 he undertook a theoretical study of computation that demonstrated that a computer could have a very simple, fixed structure and yet be able to execute any kind of computation effectively by means of proper programmed control, without the need for any changes in hardware.

Von Neumann came up with incredible ideas for methods of building and organizing practical, fast computers. These ideas, which came to be referred to as the stored-program technique, became fundamental for future generations of high-speed digital computers and were universally adopted (Hall, 73). The first wave of modern programmed electronic computers to take advantage of these improvements appeared in 1947. This group included computers using random access memory (RAM), which is a memory designed to give almost constant access to any particular piece of information (Hall, 75).

These machines had punched-card or punched-tape input and output devices and RAMs of 1,000-word capacity. Physically, they were much more compact than ENIAC: some were about the size of a grand piano and required 2,500 small electron tubes. This was quite an improvement over the earlier machines. The first-generation stored-program computers required considerable maintenance, usually attained 70% to 80% reliable operation, and were used for 8 to 12 years. Typically, they were programmed directly in machine language, although by the mid-1950s progress had been made in several aspects of advanced programming.

This group of machines included EDVAC and UNIVAC, the first commercially available computers (Hazewindus, 102). The UNIVAC was developed by John W. Mauchley and John Eckert, Jr. in the 1950s. Together they had formed the Mauchley-Eckert Computer Corporation, America's first computer company, in the 1940s. During the development of the UNIVAC, they began to run short on funds and sold their company to the larger Remington-Rand Corporation. Eventually they built a working UNIVAC computer.

It was delivered to the U. S. Census Bureau in 1951 where it was used to help tabulate the U. S. population (Hazewindus, 124). Early in the 1950s two important engineering discoveries changed the electronic computer field. The first computers were made with vacuum tubes, but by the late 1950s computers were being made out of transistors, which were smaller, less expensive, more reliable, and more efficient (Shallis, 40). In 1959, Robert Noyce, a physicist at the Fairchild Semiconductor Corporation, invented the integrated circuit, a tiny chip of silicon that contained an entire electronic circuit.

Gone was the bulky, unreliable machine of before; now computers began to become more compact and more reliable, and to have more capacity (Shallis, 49). These new technical discoveries rapidly found their way into new models of digital computers. Memory storage capacities increased 800% in commercially available machines by the early 1960s, and speeds increased by an equally large margin. These machines were very expensive to purchase or to rent and were especially expensive to operate because of the cost of hiring programmers to perform the complex operations the computers ran.

Such computers were typically found in large computer centers, operated by industry, government, and private laboratories, staffed with many programmers and support personnel (Rogers, 77). By 1956, 76 of IBM's large computer mainframes were in use, compared with only 46 UNIVACs (Chposky, 125). In the 1960s, efforts to design and develop the fastest possible computers with the greatest capacity reached a turning point with the completion of the LARC machine for Livermore Radiation Laboratories by the Sperry-Rand Corporation, and the Stretch computer by IBM.

The LARC had a core memory of 98,000 words and multiplied in 10 microseconds. Stretch was provided with several ranks of memory, with slower access for the ranks of greater capacity; the fastest access time was less than 1 microsecond, and the total capacity was in the vicinity of 100 million words (Chposky, 147). During this time the major computer manufacturers began to offer a range of computer capabilities, as well as various computer-related equipment.

These included input means such as consoles and card feeders; output means such as page printers, cathode-ray-tube displays, and graphing devices; and optional magnetic-tape and magnetic-disk file storage. These found wide use in business for such applications as accounting, payroll, inventory control, ordering supplies, and billing. Central processing units (CPUs) for such purposes did not need to be very fast arithmetically and were primarily used to access large amounts of records on file.

The greatest number of computer systems were delivered for the larger applications, such as in hospitals for keeping track of patient records, medications, and treatments given. They were also used in automated library systems and in database systems such as the Chemical Abstracts system, where computer records now on file cover nearly all known chemical compounds (Rogers, 98). The trend during the 1970s was, to some extent, away from extremely powerful, centralized computational centers and toward a broader range of applications for less-costly computer systems.

Most continuous-process manufacturing, such as petroleum refining and electrical-power distribution systems, began using computers of relatively modest capability for controlling and regulating their activities. In the 1960s the programming of applications problems was an obstacle to the self-sufficiency of moderate-sized on-site computer installations, but great advances in applications programming languages removed these obstacles. Applications languages became available for controlling a great range of manufacturing processes, for the computer operation of machine tools, and for many other tasks (Osborne, 146).

In 1971 Marcian E. Hoff, Jr., an engineer at the Intel Corporation, invented the microprocessor, and another stage in the development of the computer began (Shallis, 121). A new revolution in computer hardware was now well under way, involving miniaturization of computer-logic circuitry and of component manufacture by what are called large-scale integration techniques. In the 1950s it was realized that "scaling down" the size of electronic digital computer circuits and parts would increase speed and efficiency and improve performance.

However, at that time the manufacturing methods were not good enough to accomplish such a task. About 1960, photoprinting of conductive circuit boards to eliminate wiring became highly developed. Then it became possible to build resistors and capacitors into the circuitry by photographic means (Rogers, 142). In the 1970s entire assemblies, such as adders, shifting registers, and counters, became available on tiny chips of silicon. In the 1980s very large scale integration (VLSI), in which hundreds of thousands of transistors are placed on a single chip, became increasingly common.

Many companies, some new to the computer field, introduced programmable minicomputers supplied with software packages in the 1970s. The size-reduction trend continued with the introduction of personal computers, which are programmable machines small enough and inexpensive enough to be purchased and used by individuals (Rogers, 153). One of the first such machines was introduced in January 1975: Popular Electronics magazine provided plans that would allow any electronics wizard to build his own small, programmable computer for about $380 (Rose, 32).

The computer was called the Altair 8800. Its programming involved pushing buttons and flipping switches on the front of the box. It didn't include a monitor or keyboard, and its applications were very limited (Jacobs, 53). Even so, many orders came in for it, and several famous owners of computer and software manufacturing companies got their start in computing through the Altair. For example, Steve Jobs and Steve Wozniak, the founders of Apple Computer, built a much cheaper, yet more productive, version of the Altair and turned their hobby into a business (Fluegelman, 16).

After the introduction of the Altair 8800, the personal computer industry became a fierce battleground of competition. IBM had been the computer industry standard for well over half a century. They held their position as the standard when they introduced their first personal computer, the IBM 5100, in 1975 (Chposky, 156). However, the newly formed Apple Computer company was releasing its own personal computer, the Apple II (the Apple I was the first computer designed by Jobs and Wozniak in Wozniak's garage, and it was not produced on a wide scale).

Software was needed to run the computers as well. Microsoft developed a Disk Operating System (MS-DOS) for the IBM computer, while Apple developed its own software system (Rose, 37). Because Microsoft had now set the software standard for IBM PCs, every software manufacturer had to make their software compatible with Microsoft's. This would lead to huge profits for Microsoft (Cringley, 163). The main goal of the computer manufacturers was to make the computer as affordable as possible while increasing speed, reliability, and capacity.

Nearly every computer manufacturer accomplished this, and computers popped up everywhere. Computers were in businesses keeping track of inventories. Computers were in colleges aiding students in research. Computers were in laboratories making complex calculations at high speeds for scientists and physicists. The computer had made its mark everywhere in society and built up a huge industry (Cringley, 174). The future is promising for the computer industry and its technology. The speed of processors is expected to double every year and a half in the coming years.

As manufacturing techniques are further perfected, the prices of computer systems are expected to fall steadily. However, since microprocessor technology will keep improving, its higher costs will offset the drop in price of older processors. In other words, the price of a new computer will stay about the same from year to year, but the technology will steadily increase (Zachary, 42). Since the end of World War II, the computer industry has grown from a standing start into one of the biggest and most profitable industries in the United States.

It now comprises thousands of companies, making everything from multi-million dollar high-speed supercomputers to printout paper and floppy disks. It employs millions of people and generates tens of billions of dollars in sales each year (Malone, 192). Surely, the computer has impacted every aspect of people's lives. It has affected the way people work and play. It has made everyone's life easier by doing difficult work for them. The computer truly is one of the most incredible inventions in history.

Year 2000 Problem

Argument for the statement: "The Year 2000 bug will have such extensive repercussions that families and individuals should begin planning now for the imminent chaos."

The Ticking Bomb

Introduction

A serious problem called the "Millennium Bug", also known as the "Year 2000 Problem" and "Y2K", is turning the celebration of a new century into a daunting nightmare. In the 1960s and 1970s, when computer systems were first built, computer hardware, and especially information storage space, was at a premium.

In an effort to minimise storage costs, numeric storage spaces were trimmed to the smallest possible data type. Ignoring the fact that software may be run in multiple centuries, programmers started conserving storage space by using two digits to specify a year rather than four. Consequently, on January 1, 2000, unless the software is corrected, most programs that handle dates or times may malfunction, recognising the entry "00" in the year field as the year 1900 instead of 2000. The Year 2000 problem is not restricted to the above exigency. Twenty years ago, everybody understood that a leap year came every 4th year, except for every 100th year.

However, a piece of the algorithm has been forgotten by most people: a leap year does exist every 400 years. So, under the first two rules, the year 2000 is not a leap year, but with the third rule, it actually is. Computing errors will also occur before the year 2000. Values such as 99 are sometimes used for special purposes not related to the date; the number 99 is used in some systems as an expiration marker for data to be archived permanently, so some computers may lose data a year before 2000.
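The full three-rule test is short to state in code. A minimal C++ sketch:

    // Gregorian leap-year rule: divisible by 4, except century years,
    // which must also be divisible by 400.
    bool isLeapYear(int year) {
        if (year % 400 == 0) return true;   // 2000 -> leap (rule 3)
        if (year % 100 == 0) return false;  // 1900 -> not leap (rule 2)
        return year % 4 == 0;               // 1996 -> leap (rule 1)
    }

A program that implements only the first two rules will wrongly treat February 29, 2000 as an invalid date.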

Programmers and software developers expected their programs to survive for only a few years, and so failed to anticipate the problems coming by the year 2000. It is sorrowful to find that most of these programs are still in use or have been incorporated into successor systems: because new applications need to share data in a common format with existing systems, they have inherited the six-digit date field that became a standard over time. The disaster scenario envisaged is that a great number of computer systems around the world will make processing errors and will either crash or produce incorrect outputs.

As a result, financial institutions, business organisations, information technology, and even aeroplane radar communications will all be thrown into a welter of confusion. In military services, the system meltdown may also compromise the proper control of nuclear missiles in silos. It is a ticking time bomb destined to wreak havoc on millions of computer systems in every economy, both commercial and residential, and it thus needs everyone's serious attention. However, the bug is likely to hit business computers the hardest, which implies an alarming economic problem.

Many organisations have not yet started projects to examine the impact of the millennium bug on their systems. Applying The Standish Group's CHAOS research to Year 2000 projects suggests that, at the current pace, 73% of Y2K projects will fail. The biggest challenge for these companies is convincing top-level management of the severity of the year 2000 problem and of the amount of time, money, and resources needed to fix it. On that account, to ensure this disaster is minimised, none of us should worm out of devoting resources to preventing the potential anarchy.

It Is a Costly Task

As simple as the problem sounds, the fix for the Millennium Bug will cost up to US$600 billion world-wide, according to estimates by the Gartner Group, a leading information technology consultancy. The software fixes are very time-consuming, requiring considerable effort to examine millions of lines of source code in order to locate problem date fields and correct them.

The costs of applying the fixes will vary from company to company, but research has put the figure at up to roughly US$2 per line of source code for modification, with these costs expected to escalate by as much as 50 per cent for every year that projects are delayed. Unfortunately, this average excludes date conversions on military weapons systems software, which is expected to be significantly more expensive to convert, so the real figure should be even larger. One of the first steps an organisation needs to take on the way to ensuring Year 2000 compliance is to determine what has to be changed. The business will need to prepare an inventory of the hardware and software utilised, to allow assessment of problem areas.

It is hard to address the potential for problems when no clear picture of the problem space is available. Documentation showing the processing steps performed by the company's computer system in order to accomplish business functions needs to be available, to ensure that all procedures are present and accounted for.

There Is No "Silver Bullet"

The problem looks straightforward: all we need to do is check each line of code, locate the two-digit date fields, expand them to four digits, and test the correction.
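Two repairs were common in practice: widening each stored field to four digits, or "windowing", that is, interpreting two-digit years relative to a pivot. A sketch of the windowing approach in C++ (the pivot of 50 here is an arbitrary per-system choice, not a standard):

    // Interpret a two-digit year relative to a pivot: values below the
    // pivot are taken to be 20xx, values at or above it to be 19xx.
    int expandYear(int twoDigitYear) {
        const int pivot = 50;  // example value; each system picks its own
        return (twoDigitYear < pivot) ? 2000 + twoDigitYear
                                      : 1900 + twoDigitYear;
    }
    // expandYear(99) -> 1999, expandYear(3) -> 2003

Windowing avoids rewriting stored data, but it only postpones the problem to the far edge of the chosen window.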

Unfortunately, these modifications are mostly manual labour, not an automatic process.

Software Dilemma

Six-digit date fields are scattered throughout practically every level of computing, from operating systems to software applications and databases. Some dates have numeric representations, while others have alphanumeric representations. This adds to the complexity of the problem from both a management and a technical point of view. The bug contaminates so large an area that nearly all of the program code must be examined to ensure that corrections are free from side-effects.
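
Two repair strategies dominate in practice: field expansion (store all four digits) and windowing (keep two digits but interpret them against a pivot). A minimal sketch of windowing, with the pivot year an assumption chosen purely for illustration:

    /* Windowing: map a two-digit year onto a 100-year window.
       The pivot (50 here) is an assumption; each system must
       pick one matching the range of dates it actually stores. */
    int expand_year(int yy)
    {
        return (yy >= 50) ? 1900 + yy : 2000 + yy;
    }
    /* expand_year(99) -> 1999, expand_year(3) -> 2003 */

Windowing avoids converting stored data, but it only postpones the problem to the far edge of the window, which is one reason the fix is so labour-intensive to apply safely.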

A case in point: a typical medium-sized organisation, a state comptroller's office in the United States, is predicted to spend US$5.6 million to $6.2 million on the software conversion; nearly a billion lines of code must be repaired. Furthermore, there are computing languages still in use today that only a handful of people are even aware of, let alone proficient enough in to be called experts. The scarcity of skills in some older, more obscure languages and systems will, more than likely, make Y2K an even more serious problem.

Some uses of two-digit dates may not be obvious. For example, the UK Driving Licence number encodes the holder's date of birth using a two-digit year code. Dates used in this way will create Year 2000 problems even where the program makes no obvious use of dates. Some systems use date fields for non-standard purposes, such as special indicators, and how your systems have abused the date field is something you can only find out by looking at every line of code, at huge cost in time and resources.

With the variety of programming languages and platforms in use throughout the past three decades, the multitude of uses for date fields, and the sheer extent of the affected code, no single "silver bullet" can exist to correct the problem. Moreover, the problem cannot be solved individually. Y2K is a universal problem which will trigger a chain reaction among industries and firms. No business is immune; every firm is affected either directly, in its own operation, or indirectly, by the action or inaction of others.

A Year 2000 compliant computer system may still fail to process, produce error messages or generate incorrect data if it receives contaminated programs or data from a third party that is not Year 2000 compliant. With all these issues involved, and with the remaining time ever decreasing, management awareness must focus on these problems.

The Hardware Dilemma

If the computer hardware cannot handle dates past 31/12/99 then no software solution can fix it. Some applications request the system date directly from the hardware and cannot be trapped by the operating system, which rules out a purely software resolution.

For instance, the PC hardware problem can be explained as follows. The standard PC maintains two system dates: one is in the CMOS Real Time Clock chip, a hardware component normally located on the machine's motherboard that stores the time, date and system information such as drive types; the other is in the operating system software. These two dates are represented differently but influence one another. When the computer boots, it normally initialises its current date by reading the date in the CMOS Real Time Clock and converting it to days since January 1, 1980.

The PC maintains its date as long as the system is running; the CMOS Real Time Clock hardware maintains its date whether the system is running or not, but it does not maintain the century. So the standard flaw lurks in the CMOS Real Time Clock date when the year 2000 is reached, as it reads an out-of-range date. Moreover, a few specific Basic Input/Output Systems cause behaviour other than the standard flaw; notably, the Award v4.50 series BIOS will not allow any date after 1999 and cannot be corrected by any software.
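
Since the clock hands the operating system only a two-digit year, the century has to be inferred in software wherever the BIOS permits a fix at all. A hedged sketch of the kind of boot-time correction involved; the cutoff value is an assumption for illustration:

    /* The RTC stores a two-digit year; infer the century.
       Values below 80 cannot be valid PC dates (the PC epoch
       is 1980), so treat them as 20xx. This cutoff is an
       assumption for illustration. */
    int rtc_full_year(int rtc_yy)
    {
        return (rtc_yy < 80) ? 2000 + rtc_yy : 1900 + rtc_yy;
    }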

Dates are integrated into computer hardware, from mainframes and mid-range machines all the way down to network infrastructure. Date fields are used in some of the most basic computer functions, such as calculating and sorting, and will affect a large majority of systems. If year fields are expanded to 4 digits, this will automatically give rise to the need for additional storage space. In due course, the original reasons for the introduction of 6-digit dates will resurface. Any computer application that accepts or displays dates on the screen or produces a report with date fields will need to be redesigned.
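
A hedged illustration of the storage cost: widening every date field grows every copy of every record that contains one. The record layout below is invented for the example:

    /* Invented example record: expanding YYMMDD to YYYYMMDD
       adds two bytes per date field per record.               */
    struct customer_old { char account[10]; char expiry[6]; }; /* YYMMDD   */
    struct customer_new { char account[10]; char expiry[8]; }; /* YYYYMMDD */

    /* 50 million records x 2 extra bytes = ~100 MB more disk,
       a serious amount on the storage systems these formats
       were designed for.                                      */

This is exactly the trade-off that led to six-digit dates in the first place, when storage was scarce and expensive.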

On-line transaction databases will need to be converted, and the new expanded database will need to be kept in sync with the old active database during the conversion process. In some cases there will be insufficient space available to accept or display the additional data, forcing a major revision. If paper forms are used for input, these will also need to be redesigned. Screen, report and form redesign appear to be minor issues in the context of the Millennium Bug, but screen and report design is important from a usability perspective, and the redesign process cannot be automated.

Any changes to the way dates are handled in an organisation will need to be coupled with staff training to ensure that all staff are aware of any new standards.

Other Dilemmas Implied

To ensure that the corrected work runs free of errors after midnight on January 1, 2000, testing of the changed code must be performed. There is no way around this. As testing accounts for around 50% of all programming work, the actual code changes are just one small cog in the wheel of resolving the Millennium Bug.

With the rigidly fixed deadline and the ever-decreasing amount of time, this will require a large investment in resources to ensure a smooth run from the development to the production phases. Less seriously discussed in public is outsourcing: as the Year 2000 deadline approaches and the time remaining for corrective work shrinks, companies may choose, or be forced, to outsource the resolution of their Millennium Bug to a Year 2000 service provider. The service provider would have to load a copy of the software onto its computer system to perform the bug fixes, and this raises the issue of software licensing.

Many licences contain restrictions barring licensees from providing a copy of the software to any third party without the consent of the licensor, and this could present problems in the event of a dispute between vendor and client.

Conclusion

The year 2000 challenge is inescapable and omnipresent, affecting all businesses and individuals, regardless of age or platform. As discussed, there are many aspects of the Millennium Bug problem that are not immediately obvious, ranging from legal issues such as copyright and licensing, to issues of available resources and existing bugs.

Carrying out a solution in any business requires careful planning in order to be successful. The four steps of awareness, planning, implementation and testing are crucial for a company to run successfully beyond the year 2000. Unlike most other IT projects, there is a definite, fixed and immovable deadline for implementation. If there is not enough time to complete the programming and testing, or if unexpected delays occur, the deadline remains fixed and cannot be moved.

Computer Ethics Essay

Computer crime has increased in recent years. The book gives several examples of past computer crimes. Before reading chapter 2 I thought that computer crimes only involved hacking. But I learned that a computer crime is any crime that involves a computer in any way, even if it was just used to close a bank account. This chapter gave me a good understanding of what a computer crime is; it also made me think about how I could make some money. Reading this chapter, I was surprised to learn that most computer crimes are committed not by people with an extensive understanding of computers, but by opportunists.

In one of the cases I read about, a group of hackers figured out a way to intrude into a bank's system but didn't do any damage to the bank. Then they tried to sell their knowledge to the bank and got arrested. It seemed unfair to me that they got arrested for trying to help the bank. In many cases the people accused of computer crime do it without knowing what they are doing, as in the example of the 8-year-old boy who transferred 1,000,000 dollars to his account by inserting an envelope with a cereal-box cartoon in it and pressing 1 many times.

I thought the book made a good point in saying that most computer crimes are kept secret from the public by the victims, especially banks, so people won't lose their trust in them. I think all people come to a point in their life where they have the opportunity to enrich themselves illegally without getting caught, and that's where a decent and ethical person is revealed. Software theft is a very common type of crime, one that almost all of us commit, yet we neither feel wrong committing it nor will stop doing it, for several reasons. Software companies charge unrealistically high prices for software packages.

Users personally won't be penalized for doing so. Nobody wants to pay for something they can get for free. But at the same time, programmers want to be compensated for their work. To tell you the truth, I don't understand the point of the software developers who want all software to be free. If software were free, who would pay our salaries, and who is going to work for free? Their point is that if source code were free, programmers could improve existing programs; but who is going to work for free improving those programs? I wouldn't.

It's easy for programmers like Stallman, who are financially sponsored by others, to say that software should be free: he is getting paid, but who is going to pay us? I agree with Pamela Samuelson that the existing system of patent laws is still the best vehicle for protecting software. I also agree with the opinion that hacking has changed in recent years. Hacking used to be a demonstration of knowledge of a system, or a way of making the statement "I'm smarter than most people."

Most of those attacks were not malicious; now hackers have become malicious, and most of them demonstrate neither that they are smart nor any knowledge of a system. Most hackers today are people who have nothing better to do and go through the trashcans of corporations in hope of finding manuals or passwords, or go to tech fairs to peek over someone's shoulder while they dial in to a remote system, trying to get their password and username. I don't agree with laws that punish hackers who make innocent penetrations into systems.

I think such penetrations are a good thing, since they make system operators aware of their vulnerability to attack by a malicious hacker. If non-malicious penetrations weren't punished, and companies paid for the finding of loopholes in their systems, the number of malicious attacks would drop drastically. I agree with the point of view of IBM: I think the best way of eliminating viruses is by educating programmers about the damage viruses cause, and that they are wrong and demonstrate nothing except the maliciousness and stupidity of their authors.

The amount of money spent each year on useless software that will never be used or never work properly is amazing, as in the example in chapter 5, in which 22 US servicemen died because of radio interference with their aircraft's computer-based fly-by-wire control system. The first section of chapter 6 is about database disasters. I don't think those are database disasters; I think they are data-entry disasters. In the past years I have heard of a lot of cases of people who have had their identities stolen and later in their lives experienced problems due to crimes committed by the identity thieves.

Even though the government is aware of this, officials fail to completely remove the erroneous information from databases. One of the examples I found terrifying in this chapter was that of the three young men who filled their car with gas; the owner became suspicious and reported them to the police so they could be checked on. When the police checked their tag number, a record came up saying the car had been stolen a few years earlier. As a result, one of the young men was shot under the nose, disfiguring his face for the rest of his life.

In the first example in chapter 7, a woman died because a computerized dispensing machine miscalculated the required dosage of a pain-relieving drug; as a consequence the woman went into a coma and died later on. Despite this error, it is far more probable for a human to make such a mistake than for the machine to have made it. In the section discussing whether computers are intelligent, I agree with the people who say that computers are not intelligent.

Take the example given by the authors, where a computer beats 99% of all chess players: it is still not intelligent, because it figures out its plays by brute computational power and not by observation and recognition of past situations. In the section of the chapter discussing whether AI is a proper goal: of course it is! But not to the extent Donald Michie believes, since that is too dangerous. AI can't replace government and judicial systems. What if such a system ruled that everyone who commits a traffic violation should be killed, and no human could overrule that law?

Every human at some point commits a traffic violation without even noticing. What would happen? The human race would go extinct and machines would prevail. I find some of the predictions made by scientists at the end of the chapter amusing, since they are so unrealistic, dangerous and crazy. I think the American work environment is the perfect one, since it is not that laid back, but at the same time doesn't go to the extreme of the Japanese, who create stress so workers produce more, or even push their workers until they break.

I think there has to be some kind of stress, by which I mean pressure to do the job, or no work would get done at all. Programmers are often subjected to a lot of stress during their careers, since they always have a deadline and a problem to solve in front of them. In my opinion, companies should provide counseling to workers on how to deal with stress and how to make it work in their favor.

Model Train Building

The world of model train building has grown greatly with the aid of computers and technology to enhance the fun of building. Technology has long been a part of model train building, with the addition of lights, bells, and whistles to capture your interest and imagination. But the latest generation of building brings an influx of technology and the computer. The computer brings along a new breed of builders who plan track layouts, buy parts on the Internet, receive updated news, and chat with other enthusiasts.

The most notable difference that computers have brought to the world of model train building is in software. There are now numerous software packages on the market that offer hobbyists the challenge of real yard operations on a smaller scale. These programs allow the person to move loads between depots and keep track of revenues. They allow simulation of operational switching between tracks, multiple-train operation, and the coupling and uncoupling of railcars.

But the greatest benefit they bring is allowing the person to design a layout using an electronic template, ensuring that all measurements in the layout will work before a single piece of track is laid. Many of these software programs even play off the hype of using a computer for design in their names, such as CyberTrack, The Right Track Software, and Design Your Own Railroad; who could not want to become involved in their use? This software ties into many other aspects of building that encourage the use of the Internet in this hobby.

Many of these programs give the hobbyist realistic railyard action complete with sights, sounds and even planned crashes. In the event of a crash you are always going to need replacement parts for repair, or maybe you just want to upgrade or expand your track system. This brings in the convenience of using the Internet for product ordering. With few stores, scattered across different areas, it may be difficult or expensive for some hobbyists to get to these locations for the parts they need. The Internet brings the store right into their home with online catalogs and parts stores.

One mainstream over-the-counter catalog, The Atlas Catalog, provides an electronic version, The Atlas Online Catalog, for internet users to order parts through a secure online catalog. Even more important for some people are the online magazines that provide up-to-the-minute breaking news. Online magazines such as Where Bigger is Better or The Nickel Plate Road Historical and Technical Society offer many services and information, such as online workshops, product reviews, club lists, train shows, technology updates, and even toy train links. These sites have exploded in size and number with the ease of building Web pages.

Almost anyone with a personal computer can create a page to contact other hobbyists. With this contact comes the use of chat rooms for these hobbyists to talk shop about their mutual interests. While many of these rooms are just a place for people with the same interests to socialize, they also allow them to find others to trade with, get ideas from, and receive updated news. As always, manufacturers are keen on selling their product, and with all these flashy new selling points built on the use of a personal computer, how could hobbyists in this field not want to become involved?

One of the leading factors driving the trend toward these high-tech trains is the growing power and falling prices of the parts. Computer chips and memory that once cost hundreds of dollars can now be had for a fraction of the cost. As the price of personal computers falls, so do the prices of software and components for the home user, making their appeal even stronger. The growing use of high-tech components in traditional train building brings "a blurring of the lines between what is a software product and what is a toy product," says Lego spokesman John Dion.

Regardless of whether the hobbyist is an experienced train builder or a novice, it can be said that the influence of computers and technology will continue to grow as computers expand into more aspects of our modern life. "Children are born into a technological world. Their frame of reference is that there's always technology," says Chris Byrne of Playthings Marketwatch. "What electronics does is it simply enhances and adds a level of reality to the play experience," Byrne said. As the world of computers expands, so too does the world of model train building.

The use of software gives the builder a way to create complex and innovative tracks limited only by his imagination and the resources a computer gives him. Ryan Slata, director of marketing for Playthings Toys, says, "I think kids expect more in their toys these days because technology is all around them; we're in the computer age and I think that translates down to the toys." To say that someone has an interest in model trains is always an understatement, and with the use of computers and technology this interest brings the experience to a new level.

Discuss the function of the DMA controller chip in a computer system

DMA is an abbreviation of direct memory access, a technique for transferring data between main memory and a device without passing it through the CPU. Computers that have DMA channels can transfer data to and from devices much more quickly than computers without a DMA channel. This is useful for making quick backups and for real-time applications. Some expansion boards, such as CD-ROM cards, are capable of accessing the computer's DMA channels. When you install such a board, you must specify which DMA channel is to be used, which sometimes involves setting a jumper or DIP switch.
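
A hedged sketch of the difference; the controller's register layout below is invented purely for illustration. Without DMA, the CPU copies every byte itself; with DMA, it programs the controller and is free to do other work until the transfer-complete interrupt arrives:

    #include <stddef.h>
    #include <stdint.h>

    /* Programmed I/O: the CPU moves every byte itself. */
    void pio_read(volatile uint8_t *dev_port, uint8_t *buf, size_t n)
    {
        for (size_t i = 0; i < n; i++)
            buf[i] = *dev_port;   /* CPU is busy for the whole transfer */
    }

    /* DMA: the CPU programs the controller, then carries on.
       These register names are invented for illustration only. */
    struct dma_controller { uint32_t src, dst, count, start; };

    void dma_read(volatile struct dma_controller *dma,
                  uint32_t src, uint32_t dst, uint32_t count)
    {
        dma->src   = src;
        dma->dst   = dst;
        dma->count = count;
        dma->start = 1;   /* controller interrupts the CPU when done */
    }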

Why is DMA so important? Because it allows data to be read from and written to memory without intervention by the CPU.

(CMOS setup and ROM BIOS)

Describe what the CMOS setup chip is used for in a personal computer. Personal computers contain a small amount of battery-powered CMOS memory to hold the date, time, and system setup parameters.

Is the CMOS setup chip the same as the ROM BIOS? No, they are different.

If not, what does a ROM BIOS chip do in a personal computer? The BIOS is built-in software that determines what a computer can do without accessing programs from a disk.

On PCs, the BIOS contains all the code required to control the keyboard, display, disk drives, serial communications, and a number of miscellaneous functions. The BIOS is typically placed on a ROM chip that comes with the computer (it is often called a ROM BIOS). This ensures that the BIOS will always be available and will not be damaged by disk failures. It also makes it possible for a computer to boot itself. Because RAM is faster than ROM, though, many computer manufacturers design systems so that the BIOS is copied from ROM to RAM each time the computer is booted.

Describe the different types of ROM technology used in ROM BIOS chips.

There are four types of ROM chip: ROM, PROM, EPROM, and EEPROM/flash ROM. From what I found out, plain ROM is no longer in use. PROM is a blank chip on which data can be written with a special device called a PROM programmer. EPROM is a special type of memory that retains its contents until it is exposed to ultraviolet light; the ultraviolet light clears its contents, making it possible to reprogram the memory. Flash ROM is a special type of EEPROM that can be erased and reprogrammed in blocks instead of one byte at a time.

Many modern PCs have their BIOS stored on a flash memory chip so that it can easily be updated if necessary. Another technique is ROM shadowing: when you turn on your computer, the BIOS is copied from ROM into the faster RAM and executed from there.
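
A hedged sketch of the shadowing idea just described; the image size is typical but assumed, and the hardware remapping step is chipset-specific:

    #include <string.h>

    #define BIOS_SIZE 0x10000UL  /* 64 KB, a typical BIOS image size */

    /* ROM shadowing in outline: copy the BIOS image out of slow
       ROM into fast RAM mapped at the same addresses. The chipset
       then remaps the region to the RAM copy and write-protects
       it; that step is hardware-specific and omitted here.       */
    void shadow_bios(void *ram_copy, const void *rom_image)
    {
        memcpy(ram_copy, rom_image, BIOS_SIZE);
    }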

Describe the difference between ROM chips and RAM chips. A ROM (read-only memory) chip is very slow, and a RAM (random access memory) chip is faster. Data on a ROM chip is prerecorded; it cannot be removed and can only be read, and it is not lost when you turn off your computer. RAM can be accessed randomly, that is, any byte of memory can be accessed without touching the preceding bytes. There are two basic types of RAM: dynamic RAM and static RAM. The two types differ in the technology they use to hold data, dynamic RAM being the more common type. Dynamic RAM needs to be refreshed thousands of times per second. Static RAM does not need to be refreshed, which makes it faster, but it is also more expensive than dynamic RAM. Both types of RAM are volatile, meaning that they lose their contents when the power is turned off.

Computer Crimes on the Internet

It's the 90's, the dawn of the computer age. With technology changing and evolving every day, it may seem hard not to slip behind in this ever-changing world. The term Information Super-Highway has followed computers throughout the past few years. Along with the Internet, an emerging group of elite cyber-surfers have turned into today's computer hackers. Most people don't know about them, most people don't know they exist, but they are out there, lurking in the shadows, waiting for their next victim. It can be a scary world out there (Welcome to the Internet).

In reality it is not nearly as bad as it sounds, and chances are it won't happen to you. There are many fields of hacking on the Internet. The most popular type of hacking is software piracy. "According to estimates by the US Software Piracy Association, as much as $7.5 billion of American software may be illegally copied and distributed annually worldwide" (Ferrell 13). Hackers "pirate" software merely by uploading software bought in a store to the Internet. Uploading is sending information from point A (the client) to point B (the host); downloading is the opposite.

Once it is uploaded to the Internet, people all over the world have access to it. From there, hackers trade and distribute the software, which in hacker jargon is called warez. Industrial espionage is another main concern on the Internet. Most recently, the FBI's World Wide Web page was hacked and turned into a racial hate page. Anyone can access files from a WWW page, but changing them is very hard; that is why most hackers don't even bother with it. CNET stated, "This Web site should have been among the safest and most secure in the world, yet late in 1996, it got hacked" (Ferrell 18).

To change a web page, hackers simply upload a new, modified version of the page in place of the original. Fortunately, almost all Internet Service Providers (ISPs), the computers you dial in to for Internet access, have protection called a firewall, which kicks off unauthorized users trying to gain access or change information. "Theft and destruction of company files is increasing faster than the ability to stop it" (Rothfeder 170). Another field of hacking on the Internet is electronic-mail hacking. A hacker can intercept Email en route and read it without detection.

To safeguard against this, companies use encryption programs so that no one but the sender and the recipient can read a message (Rothfeder 225). A mail bomb is another type of hack on the Net. "A mail bomb is simply an attack unleashed by dumping hundreds or thousands of Email messages onto a specific address" (Ferrell 20). The only way to fix this problem is either to sit there and delete each message one by one, or to call your Internet Service Provider for help. Email forgery is also common. A hacker can change the return address on any given piece of Email to anything they want, such as [email protected].

This is illegal because you can use someone else's address to send false Email to people. Oracle Systems CEO Larry Ellison fell victim to forgery when a former employee accused him of sexual harassment and used a forged email message to help plead her case. And Bob Rae, the former premier of Ontario, suffered political embarrassment as a result of a forged and sexually explicit email that appeared on Usenet newsgroups. False or assumed email identities have played a part in espionage as well. Forged email was the key to Clifford Stoll's cracking of a spy ring, recounted in his book The Cuckoo's Egg (Ferrell 4).

On the Internet, credit card fraud is also common. Perhaps the most common form is the creation of false accounts on America OnLine. At one point, more than 70% of America OnLine users were using fake credit card numbers. The scary part is that almost none of them were caught. Other people send pyramid schemes and chain letters to people by the thousands. These GET RICH QUICK!!! schemes are illegal scams designed to get your money (Ferrell 10). Yet another fraud on the Net is the selling of carded merchandise.

Although it is not common, people take real credit card numbers, buy expensive electronics by phone, and then sell them on the Internet for extremely low prices. "The US Secret Service believes that half a billion dollars may be lost annually by consumers who have credit card and calling card numbers stolen from online databases" (Ferrell 10). When companies talk about network breaches, they mean that someone has gotten a login and corresponding password to the company's information. Citibank was hacked and had $11 million stolen by a Russian hacker (Rothfeder 170).

Once hackers get administrative access, they are free to take important, maybe even secret, information from the company. They can also destroy the data or implant viruses (see below), sometimes costing companies millions of dollars a year. "Even at secure companies, a single motivated person could attack the machines of a large organization" (Rothfeder 180). Another concern is online banking, which allows people to exchange information, mostly money, to and from bank accounts. "At this point, prospective customers have no way of knowing which banks can be trusted with Internet accounts.

Just because a bank is well known, don't assume its security is air tight" (Rothfeder 229). Hackers can use tools such as password sniffers, which lock onto an Internet dweller's IP address (Internet Protocol address, that person's identity online) and record the information being sent by their computer. Thus the hacker gets the password. Once this is accomplished, they use another tool called an IP spoofer. This tool fools other computers on the Internet into thinking the hacker has someone else's IP address, in effect making him that person.

In a sense, it changes your identity. The hacker, now with the password and a changed identity, logs on to the remote World Wide Web page and modifies it to his desire. Or he can steal information accessible only through the password, such as credit card details or other important or secret information, or he can simply cause havoc for that person or company. Computer viruses are not colds for computers. They are destructive time bombs waiting to destroy a computer's data. They usually attack the most common and most important files on a computer and delete them.

Unfortunately they are very easy to get, especially on the Internet, and they are not always detectable. On the other hand, 99% of viruses can be detected by using an anti-virus program. Hackers often make their own viruses disguised as real programs. Viruses can also be attached to Email messages. The Internet has many good things about it, but with all good things comes the downside of humanity. Child pornography is unfortunately easily available on the Net. As many people know, the federal courts are now in the process of determining what you are allowed to post on the Internet.

Alarmingly, all of the tools and programs mentioned in this document are very common on the Net and can be downloaded 24 hours a day, seven days a week, and anyone can get them. Go to your favorite search engine on the Internet and type in hacking, pictures, bombs, anarchy, warez, or cracking for your query and you will find millions of documents with everything a hacker would need, all just a click away. It can be a scary world out there, and if you are willing to accept both the good and the bad, then: Welcome to the Internet.

Development Of Operating Systems

An operating system is the program that manages all the application programs in a computer system. This includes managing the input and output devices and assigning system resources. Operating systems evolved as the solution to the problems evident in early computer systems, and their development coincides with changes in the computer systems themselves. Three cycles are clear in the evolution of computers, the mainframe, the minicomputer and the microcomputer, and each of these stages influenced the development of operating systems.

Now, advances in software and hardware technologies have resulted in an increased demand for more sophisticated and powerful operating systems, with each new generation able to handle and perform more complex tasks. The following report examines the development of operating systems, and how changing technology shaped their evolution.

First Generation Computers (1945-1955)

In the mid-1940's, enormous machines capable of performing numerical calculations were created. These machines consisted of vacuum tubes and plugboards, and programming was done purely in machine code.

Programming languages were unheard of during the early part of the period, and each machine was specifically assembled to carry out a particular calculation. These early computers had no need for an operating system and were operated directly from the operator's console by a computer programmer, who had immediate knowledge of the computer's design. By the early 1950's, punched cards were introduced, allowing programs to be written and read directly from cards instead of using plugboards.

Second Generation Computers (1955-1965)

In the mid-1950's, the transistor was introduced, creating a more reliable computer.

Computers were used primarily for scientific and engineering calculations and were programmed mainly in FORTRAN and assembly language. As computers became more reliable they also became more business-oriented, although they were still very large and expensive. Because of the expense, the productivity of the system had to be maximised to ensure cost-effectiveness. Job scheduling and the hiring of computer operators ensured that the computer was used effectively and crucial time was not wasted. Loading the compilers was a time-consuming process, as each compiler was kept on a magnetic tape which had to be manually mounted.

This became a problem particularly when there were multiple jobs to execute written in different languages (mainly in assembly or FORTRAN). Each card and tape had to be individually installed, executed, then removed for each program. To combat this problem, the batch system was developed. All the jobs were grouped into batches and read by one computer (usually an IBM 1401), then executed one after the other on the mainframe computer (usually an IBM 7094), eliminating the need to swap tapes or cards between programs.

The first operating system was designed by General Motors for the IBM 701. It was called the Input/Output System, and consisted of a small set of code that provided a common set of procedures for accessing the input and output devices. It also allowed each program, when finished, to hand control back so the next program could be accepted and loaded. However, there was a need to improve the sharing of programs, which led to the development of SOS (the Share Operating System) in 1959. SOS provided buffer management and supervision for I/O devices, as well as support for programming in assembly language.

Around the same time as SOS was being developed, the first operating system to support programming in a high-level language was achieved. FMS (the Fortran Monitoring System) incorporated a translator for IBM's FORTRAN language, which was widely used, as most programs were written in this language.

Third Generation Computers (1965-1980)

In the late 1960's IBM created the System/360, a series of software-compatible computers ranging in performance and price. The machines had the same architecture and instruction set, which allowed programs written for one machine to be executed on another.

The operating system required to run on this family of computers had to be able to work on all models, be backwards compatible, and run on both small and large systems. The software written to handle these different requirements was OS/360, which consisted of millions of lines of assembly language written by thousands of different programmers. It also contained thousands of bugs, but despite this the operating system satisfactorily fulfilled the requirements of most users. A major feature of the new operating system was its ability to implement multiprogramming.

By partitioning the memory into several pieces, programmers were able to use the CPU more effectively than ever before, as one job could be processed whilst another was waiting for I/O to finish. Spooling (Simultaneous Peripheral Operation On-Line) was another important feature implemented in third generation operating systems: the ability to load a new program into an empty partition of memory when a previous job had finished. This technique meant that the IBM 1401 computer was no longer required to read the program from the magnetic tape. However, the time between submission of a job and the return of its results had increased.

This led designers to the concept of time-sharing, which involved each user communicating with the computer through their own on-line terminal, the CPU being allocated among the terminals in turn, with each job held in its own partition of memory. Many time-sharing operating systems were introduced in the 1960's, including MULTICS (the Multiplexed Information and Computing Service). Developed by Bell Labs, MULTICS was written almost completely in a high-level language, and is known as the first major operating system to have been so.

MULTICS introduced many new concepts, including segmented memory, device independence, a hierarchical file system, I/O redirection, a powerful user interface and protection rings. The 1960's also gave rise to the minicomputer, starting with the DEC PDP-1. Minicomputers presented the market with an affordable alternative to the large batch systems of that time, but had only a small amount of memory. The early operating systems of the minicomputers were input/output selectors, which provided an interactive user interface for a single user and ran only one program at a time. By the 1970's, DEC introduced a new family of minicomputers.

The PDP-11 series had three operating systems available: a simple single-user system (RT-11), a time-sharing system (RSTS) and a real-time system (RSX-11). RSX-11 was the most advanced operating system for the PDP-11 series; it supported a powerful command language and file system, memory management and multiprogramming of a number of tasks. Around the same time as DEC were implementing their minicomputers, two researchers, Ken Thompson and Dennis Ritchie, were developing a new operating system for the DEC PDP-7. Their aim was to create a new single-user operating system, and the first version was officially released in 1971.

This operating system, called UNIX, became very popular and is still used widely today.

Fourth Generation Computers (1980-1990)

By the 1980's, technology had advanced a great deal from the days of the mainframe computers and vacuum tubes. With the introduction of Large Scale Integration (LSI) circuits and silicon chips consisting of thousands of transistors, computers reached a new level. Microcomputers were physically much like the minicomputers of the third generation, but they were much cheaper, enabling individuals, not just large companies and universities, to use them.

These personal computers required an operating system that was user-friendly, so that people with little computer knowledge were able to use them. In 1981, IBM released a 16-bit personal computer and required a more powerful operating system than the ones available at the time, so they turned to Microsoft to deliver it. The software, called the Microsoft Disk Operating System (MS-DOS), became the standard operating system for most personal computers of that era. In the mid-1980's, networks of personal computers had increased a great deal, requiring a new type of operating system.

The OS had to be able to manage remote and local hardware and software, file sharing and protection, among other things. Two types of systems were introduced: the network operating system, in which users can copy files from one station to another, and the distributed operating system, in which the computer appears to be a uniprocessor system even though it is actually running programs and storing files in remote locations. One of the best-known network operating systems for a distributed network is the Network File System (NFS), originally designed by Sun Microsystems for use on UNIX-based machines.

An important feature of NFS is its ability to support different types of computers. This allowed a machine running NFS to communicate with an IBM-compatible machine running MS-DOS, which was an important addition to network computing. In 1983, Microsoft Corporation introduced MSX-DOS, an operating system for MSX microcomputers that could run 8-bit Microsoft software, including the languages BASIC, COBOL-80 and FORTRAN-80, and Multiplan. 1984 saw the release of the Apple Macintosh, a low-cost workstation which evolved from the early Alto computer designs. The Macintosh provided advanced graphics and high performance for its size and cost.

As the Macintosh was not compatible with other systems, it required its own operating system, which is how the Apple operating system was established. MINIX, based on the UNIX design, was also a popular choice for the Macintosh. As computer processors got faster, operating systems also had to improve in order to take advantage of this progression. Microsoft released version 2 of MS-DOS, which adopted many of the features that made UNIX so popular, although MS-DOS was designed to be smaller than the UNIX operating system, making it ideal for personal computers.

Modern Operating Systems

The past 9 years have seen many advances in computers and their operating systems. Processors continue to increase in speed, each requiring an operating system that can handle the new developments. Microsoft Corporation has dominated the IBM-compatible world, Windows being the standard operating system for the majority of personal computers. Now, as computing and information technology moves towards the Internet and virtual computing, so too must operating systems. In 1992, Windows for Workgroups 3.1 was introduced, extending the previous versions.

It allowed the sending of electronic mail, and provided advanced networking capabilities for use as a client on an existing local area network. This was only one stage in the vast evolution of the world's most popular operating system, the most recent being Windows NT and Windows 98, the latter a fully Internet-integrated operating system. Windows, however, is not the only operating system in use today. Others, such as UNIX, the Apple operating system and OS/2 Warp, have also had an impact, each new version more advanced and more user-friendly than the last.

The Electronic Computer

Only once in a lifetime will a new invention come about that touches every aspect of our lives. Such a device, one that changes the way we work, live, and play, is a special one indeed. A machine that has done all this and more now exists in nearly every business in the U.S. and one out of every two households (Hall, 156). This incredible invention is the computer. The electronic computer has been around for over a half-century, but its ancestors have been around for 2000 years. However, only in the last 40 years has it changed American society.

From the first wooden abacus to the latest high-speed microprocessor, the computer has changed nearly every aspect of people's lives for the better. The earliest ancestor of the modern-day computer is the abacus, which dates back almost 2000 years. It is simply a wooden rack holding parallel wires on which beads are strung. When these beads are moved along the wire according to "programming" rules that the user must memorize, all ordinary arithmetic operations can be performed (Soma, 14).

The next innovation in computers took place in 1642, when Blaise Pascal invented the first "digital calculating machine". It could only add numbers, and they had to be entered by turning dials. It was designed to help Pascal's father, who was a tax collector (Soma, 32). In the early 1800's, a mathematics professor named Charles Babbage designed an automatic calculation machine. It was steam powered and could store up to 1000 50-digit numbers. Built into his machine were operations that included everything a modern general-purpose computer would need.

It was programmed by, and stored data on, cards with holes punched in them, appropriately called "punch cards". His inventions were failures for the most part because of the lack of precision machining techniques at the time and the lack of demand for such a device (Soma, 46). After Babbage, people began to lose interest in computers. However, between 1850 and 1900 there were great advances in mathematics and physics that began to rekindle that interest (Osborne, 45). Many of these new advances involved complex calculations and formulas that were very time-consuming for human calculation.

The first major use for a computer in the U.S. was during the 1890 census. Two men, Herman Hollerith and James Powers, developed a new punched-card system that could automatically read the information on cards without human intervention (Gulliver, 82). Since the population of the U.S. was increasing so fast, the computer was an essential tool in tabulating the totals. These advantages were noted by commercial industries and soon led to the development of improved punch-card business-machine systems by International Business Machines (IBM), Remington-Rand, Burroughs, and other corporations.

By modern standards the punched-card machines were slow, typically processing from 50 to 250 cards per minute, with each card holding up to 80 digits. At the time, however, punched cards were an enormous step forward; they provided a means of input, output, and memory storage on a massive scale. For more than 50 years following their first use, punched-card machines did the bulk of the world’s business computing and a good portion of the computing work in science (Chposky, 73).

By the late 1930s punched-card machine techniques had become so well established and reliable that Howard Hathaway Aiken, in collaboration with engineers at IBM, undertook construction of a large automatic digital computer based on standard IBM electromechanical parts. Aiken’s machine, called the Harvard Mark I, handled 23-digit numbers and could perform all four arithmetic operations. Also, it had special built-in programs to handle logarithms and trigonometric functions. The Mark I was controlled from prepunched paper tape. Output was by card punch and electric typewriter.

It was slow, requiring 3 to 5 seconds for a multiplication, but it was fully automatic and could complete long computations without human intervention (Chposky, 103). The outbreak of World War II produced a desperate need for computing capability, especially for the military. New weapons systems were produced which needed trajectory tables and other essential data. In 1942, John P. Eckert, John W. Mauchly, and their associates at the University of Pennsylvania decided to build a high-speed electronic computer to do the job. This machine became known as ENIAC, for "Electronic Numerical Integrator And Calculator".

It could multiply two numbers at the rate of 300 products per second, by finding the value of each product from a multiplication table stored in its memory. ENIAC was thus about 1,000 times faster than the previous generation of computers (Dolotta, 47). ENIAC used 18,000 standard vacuum tubes, occupied 1800 square feet of floor space, and used about 180,000 watts of electricity. It used punched-card input and output. The ENIAC was very difficult to program because one had to essentially re-wire it to perform whatever task he wanted the computer to do.

It was, however, efficient in handling the particular programs for which it had been designed. ENIAC is generally accepted as the first successful high-speed electronic digital computer and was used in many applications from 1946 to 1955 (Dolotta, 50). Mathematician John von Neumann was very interested in the ENIAC. In 1945 he undertook a theoretical study of computation that demonstrated that a computer could have a very simple, fixed structure and yet be able to execute any kind of computation effectively by means of properly programmed control, without the need for any changes in hardware.

Von Neumann came up with incredible ideas for methods of building and organizing practical, fast computers. These ideas, which came to be referred to as the stored-program technique, became fundamental for future generations of high-speed digital computers and were universally adopted (Hall, 73). The first wave of modern programmed electronic computers to take advantage of these improvements appeared in 1947. This group included computers using random access memory (RAM), which is a memory designed to give almost constant access to any particular piece of information (Hall, 75).

These machines had punched-card or punched-tape input and output devices and RAMs of 1000-word capacity. Physically, they were much more compact than ENIAC: some were about the size of a grand piano and required 2500 small electron tubes. This was quite an improvement over the earlier machines. The first-generation stored-program computers required considerable maintenance, usually attained 70% to 80% reliable operation, and were used for 8 to 12 years. Typically, they were programmed directly in machine language, although by the mid-1950s progress had been made in several aspects of advanced programming.

This group of machines included EDVAC and UNIVAC, the first commercially available computers (Hazewindus, 102). The UNIVAC was developed by John W. Mauchly and John Eckert, Jr. in the 1950's. Together they had formed the Mauchly-Eckert Computer Corporation, America's first computer company, in the 1940's. During the development of the UNIVAC, they began to run short on funds and sold their company to the larger Remington-Rand Corporation. Eventually they built a working UNIVAC computer.

It was delivered to the U.S. Census Bureau in 1951, where it was used to help tabulate the U.S. population (Hazewindus, 124). Early in the 1950's, two important engineering discoveries changed the electronic computer field. The first computers were made with vacuum tubes, but by the late 1950's computers were being made out of transistors, which were smaller, less expensive, more reliable, and more efficient (Shallis, 40). In 1959, Robert Noyce, a physicist at the Fairchild Semiconductor Corporation, invented the integrated circuit, a tiny chip of silicon that contained an entire electronic circuit.

Gone was the bulky, unreliable, but fast machine; now computers began to become more compact and more reliable, with greater capacity (Shallis, 49). These new technical discoveries rapidly found their way into new models of digital computers. Memory storage capacities increased 800% in commercially available machines by the early 1960's, and speeds increased by an equally large margin. These machines were very expensive to purchase or rent and were especially expensive to operate because of the cost of hiring programmers to perform the complex operations the computers ran.

Such computers were typically found in large computer centers, operated by industry, government, and private laboratories, and staffed with many programmers and support personnel (Rogers, 77). By 1956, 76 of IBM's large computer mainframes were in use, compared with only 46 UNIVACs (Chposky, 125). In the 1960's, efforts to design and develop the fastest possible computers with the greatest capacity reached a turning point with the completion of the LARC machine for Livermore Radiation Laboratories by the Sperry-Rand Corporation, and the Stretch computer by IBM.

The LARC had a core memory of 98,000 words and multiplied in 10 microseconds. Stretch was provided with several ranks of memory, with slower access for the ranks of greater capacity, the fastest access time being less than 1 microsecond and the total capacity in the vicinity of 100 million words (Chposky, 147). During this time the major computer manufacturers began to offer a range of computer capabilities, as well as various computer-related equipment.

These included input means such as consoles and card feeders; output means such as page printers, cathode-ray-tube displays, and graphing devices; and optional magnetic-tape and magnetic-disk file storage. These found wide use in business for such applications as accounting, payroll, inventory control, ordering supplies, and billing. Central processing units (CPUs) for such purposes did not need to be very fast arithmetically and were primarily used to access large amounts of records on file.

The greatest number of computer systems were delivered for the larger applications, such as in hospitals for keeping track of patient records, medications, and treatments given. They were also used in automated library systems and in database systems such as the Chemical Abstracts system, where computer records now on file cover nearly all known chemical compounds (Rogers, 98). The trend during the 1970s was, to some extent, away from extremely powerful, centralized computational centers and toward a broader range of applications for less-costly computer systems.

Most continuous-process manufacturing, such as petroleum refining and electrical-power distribution systems, began using computers of relatively modest capability for controlling and regulating their activities. In the 1960s the programming of applications problems was an obstacle to the self-sufficiency of moderate-sized on-site computer installations, but great advances in applications programming languages removed these obstacles. Applications languages became available for controlling a great range of manufacturing processes, for computer operation of machine tools, and for many other tasks (Osborne, 146).

In 1971, Marcian E. Hoff, Jr., an engineer at the Intel Corporation, invented the microprocessor, and another stage in the development of the computer began (Shallis, 121). A new revolution in computer hardware was now well under way, involving miniaturization of computer-logic circuitry and of component manufacture by what are called large-scale integration techniques. In the 1950's it was realized that "scaling down" the size of electronic digital computer circuits and parts would increase speed and efficiency and improve performance.

However, at that time the manufacturing methods were not good enough to accomplish such a task. About 1960 photoprinting of conductive circuit boards to eliminate wiring became highly developed. Then it became possible to build resistors and capacitors into the circuitry by photographic means (Rogers, 142). In the 1970s entire assemblies, such as adders, shifting registers, and counters, became available on tiny chips of silicon. In the 1980s very large scale integration (VLSI), in which hundreds of thousands of transistors are placed on a single chip, became increasingly common.

Many companies, some new to the computer field, introduced in the 1970s programmable minicomputers supplied with software packages. The size-reduction trend continued with the introduction of personal computers, which are programmable machines small enough and inexpensive enough to be purchased and used by individuals (Rogers, 153). One of the first of such machines was introduced in January 1975. Popular Electronics magazine provided plans that would allow any electronics wizard to build his own small, programmable computer for about $380 (Rose, 32).

The computer was called the "Altair 8800". Its programming involved pushing buttons and flipping switches on the front of the box. It didn't include a monitor or keyboard, and its applications were very limited (Jacobs, 53). Even so, many orders came in for it, and several famous founders of computer and software manufacturing companies got their start in computing through the Altair. For example, Steve Jobs and Steve Wozniak, founders of Apple Computer, built a much cheaper, yet more productive version of the Altair and turned their hobby into a business (Fluegelman, 16).

After the introduction of the Altair 8800, the personal computer industry became a fierce battleground of competition. IBM had been the computer industry standard for well over a half-century. They held their position as the standard when they introduced their first personal computer, the IBM Model 60 in 1975 (Chposky, 156). However, the newly formed Apple Computer company was releasing its own personal computer, the Apple II (The Apple I was the first computer designed by Jobs and Wozniak in Wozniak’s garage, which was not produced on a wide scale).

Software was needed to run the computers as well. Microsoft developed a Disk Operating System (MS-DOS) for the IBM computer while Apple developed its own software system (Rose, 37). Because Microsoft had now set the software standard for IBMs, every software manufacturer had to make their software compatible with Microsoft’s. This would lead to huge profits for Microsoft (Cringley, 163). The main goal of the computer manufacturers was to make the computer as affordable as possible while increasing speed, reliability, and capacity.

Nearly every computer manufacturer accomplished this and computers popped up everywhere. Computers were in businesses keeping track of inventories. Computers were in colleges aiding students in research. Computers were in laboratories making complex calculations at high speeds for scientists and physicists. The computer had made its mark everywhere in society and built up a huge industry (Cringley, 174). The future is promising for the computer industry and its technology. The speed of processors is expected to double every year and a half in the coming years.

As manufacturing techniques are further perfected, the prices of computer systems are expected to steadily fall. However, since microprocessor technology will keep advancing, its higher costs will offset the drop in price of older processors. In other words, the price of a new computer will stay about the same from year to year, but technology will steadily increase (Zachary, 42). Since the end of World War II, the computer industry has grown from a standing start into one of the biggest and most profitable industries in the United States.

It now comprises thousands of companies, making everything from multi-million dollar high-speed supercomputers to printout paper and floppy disks. It employs millions of people and generates tens of billions of dollars in sales each year (Malone, 192). Surely, the computer has impacted every aspect of people’s lives. It has affected the way people work and play. It has made everyone’s life easier by doing difficult work for people. The computer truly is one of the most incredible inventions in history.

Humans And Their Ability To Make Mistakes

In today’s pop culture, there is one very popular view of the future. All humans will be free to do as they wish, because robots and computers will work for us. Computers are viewed as the ideal slaves. They work non-stop, never complain, and above all, never make mistakes. It is often said that computers don’t make mistakes, that it is the person using the computer who commits errors. What is it that makes humans err, but not computers? I will prove that it is simply the way humans are built that makes us commit errors. Unlike computers, built of mechanical or electronic parts, humans are made of organic matter and nerve pathways.

These same pathways, with the help of the brain, are responsible for all the decision making. I shall demonstrate why humans err, despite the fact that we have eyes and ears to sense with. Before I can establish causes for error, I shall define the terms “error” and “mistake”. In the context of this essay, they will simply mean that a human obtained a result different from the expected, correct one. Whether it be adding two numbers or calling someone by the wrong name, these are all errors that a computer would not make. An error can also be interpreted as being a wrong physical move.

If a person is walking in the woods and trips on a branch, it is because the person erred in watching the path being followed. There is no doubt in anyone’s mind that humans make mistakes all the time. Let us simply analyze any process in which there is a chance for someone to commit an error. Take for example a cashier in a grocery store. The cashier obtains the total on the cash register, and receives a twenty dollar bill from the customer. She must now give the patron back his/her change.

The cash register tells the cashier that the client is owed $4.60. The cashier then reaches into her change drawer to retrieve the proper set of coins. This is where the opportunity for error increases. What if the cashier only gives the customer back $4.55, because she mistakenly returned a nickel instead of a dime? What caused this blunder? Would this blunder have happened if the cashier had had 15 minutes to decide on how much change to return instead of 15 seconds? Logically speaking, we can establish that if the cashier had 15 minutes to select the proper set of coins, she probably wouldn’t have made a mistake.
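To make the arithmetic concrete, here is a minimal sketch in Python of the change computation described above. The greedy coin selection and the simulated nickel-for-dime slip are illustrative assumptions, not part of the original scenario.

    # Decompose an amount of change (in cents) into coins.
    # Greedy selection is correct for standard U.S. denominations.
    def make_change(cents):
        coins = []
        for coin in (25, 10, 5, 1):  # quarters, dimes, nickels, pennies
            while cents >= coin:
                coins.append(coin)
                cents -= coin
        return coins

    owed = 460                      # the $4.60 owed to the customer
    correct = make_change(owed)     # eighteen quarters and one dime
    slip = correct[:-1] + [5]       # a nickel grabbed instead of the dime
    print(sum(correct) / 100)       # 4.6  -- the right change
    print(sum(slip) / 100)          # 4.55 -- the cashier's error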

This is due to the fact that she would have taken more time in figuring out which coins to choose and would even have had time to review her decision several times. What can we deduce from this discussion? Humans are more prone to make mistakes if they are rushed than if they have lots of time to do an operation. There are many other examples. If you give a class a math exam, but restrict them to 15 minutes, we can be almost certain that they will get a lower mark than the same class doing the same test in one hour. The reason is fairly simple. Our brains and senses simply do not react fast enough.

That is why computers are so renowned for their dependability in terms of errors. Computers can perform thousands more operations per second than a human, with no chance of error. This is due to the construction of these machines. Their inanimate parts are better adapted to executing these operations at very high speeds. Let us take another example. A man is adding up a column of numbers. We will pretend that this individual has a basic knowledge of math. The mistakes he might make, if any, will not be due to his lack of knowledge of the basic addition rules.

He sits down with a sheet of paper with a list of many three digit numbers. What kinds of errors can he commit, and why? While adding up the numbers, he might mistake a 7 for a 1 and add the numbers together wrong. He might, while adding, disregard a number once in a while. All these possible mistakes would lead to the wrong final answer, but what causes these errors? Once again, the time factor is very important. Given the chance to redo his calculations another 99 times, he would certainly produce the correct final answer.

The reason he committed errors was simply that he was doing an action faster than his brain and eyes could handle with 100% accuracy. It seems that our brains can compute complex operations that allow us to drive a car through terrible weather conditions, at night, but all these operations cannot be accomplished within too short a time limit. So far, we have discussed the speed at which the brain can compute operations without error. We must consider other factors which can also lead to mistakes. To explain other types of error, I will use a terminology developed and used by the philosopher Bertrand Russell.

He identifies something called sense data. Sense data is the data received by our senses from the object being “sensed”. For instance, if a person is looking at a red apple, the shape and color and all other aspects of this apple are received in the form of sense data. In the case of the man adding up the numbers, he mistook a 7 for a 1. What really happened is that his senses misidentified the number. The sense data was received by his eyes, which then converted this information into an electrical signal to be sent to the brain for analysis.

There are thus two possibilities. Either the eyes did not transform the signal of the 7 properly, or the brain misunderstood the signal received from the eyes. In both cases, the sense data was analyzed incorrectly, leading to an error in the final calculation. Some skeptics might criticize my position by saying that, no matter how much time a person has to complete a job, he or she might still commit errors. In the example of the cashier that I used earlier, one might say that although she had 15 minutes to select 3 different coins, she still might make a mistake.

One could justify this position by saying that this is due to a lack of attention. If a person has 15 minutes to complete a simple task, they will pay very little attention to the details. If the coin is slightly worn out, and the cashier is not paying attention, then she will pick it up by mistake. Moreover, once the coin is selected, she will assume that it is the right one, so that even if she checks the coins before handing them to the customer, she might simply assume that she has selected the correct amount. My answer to this position is fairly clear.

No matter how little attention she pays to the job she is doing, that is not where the error lies. If she is distracted while picking up the coins in question, then her senses are not receiving and analyzing the sense datum properly, or thoroughly. This is simply a more complex case of what I described earlier, with the man mistaking a 7 for a 1. The individual is not drawing the right conclusion from the sense data received. In light of the examples and discussions presented, I think it is safe to say that human error is due to the fact that the brain can only function perfectly up to a certain speed.

Also, the five human senses do not always properly interpret the sense data received, causing the brain to make mistakes. Not paying attention to what one is doing is not a reason for making a mistake. It is the repercussions of this behavior that cause the error, because the person is not using his/her senses properly. In conclusion, it is understandable that humans make mistakes despite the fact that our senses receive sense data from objects surrounding us. After all, if this weren’t true, you would have just finished reading a perfect essay!

Computer Crime Report

A report discussing the proposition that computer crime has increased dramatically over the last 10 years. Computer crime is generally defined as any crime accomplished through special knowledge of computer technology. Increasing instances of white-collar crime involve computers as more businesses automate and the information held by the computers becomes an important asset. Computers can also become objects of crime when they or their contents are damaged, for example when vandals attack the computer itself, or when a “computer virus” (a program capable of altering or erasing computer memory) is introduced into a computer system.

As subjects of crime, computers represent the electronic environment in which frauds are programmed and executed; an example is the transfer of money balances in accounts to perpetrators’ accounts for withdrawal. Computers are instruments of crime when they are used to plan or control such criminal acts. Examples of these types of crimes are complex embezzlements that might occur over long periods of time, or when a computer operator uses a computer to steal or alter valuable information from an employer.

Since the first cases were reported in 1958, computers have been used for most kinds of crime, including fraud, theft, embezzlement, burglary, sabotage, espionage, murder, and forgery. One study of 1,500 computer crimes established that most of them were committed by trusted computer users within businesses, i.e., persons with the requisite skills, knowledge, access, and resources. Much of known computer crime has consisted of entering false data into computers. This method of computer crime is simpler and safer than the complex process of writing a program to change data already in the computer.

Now that personal computers with the ability to communicate by telephone are prevalent in our society, increasing numbers of crimes have been perpetrated by computer hobbyists, known as “hackers,” who display a high level of technical expertise. These “hackers” are able to manipulate various communications systems so that their interference with other computer systems is hidden and their real identity is difficult to trace. The crimes committed by most “hackers” consist mainly of simple but costly electronic trespassing, copyrighted-information piracy, and vandalism.

There is also evidence that organised professional criminals have been attacking and using computer systems as they find their old activities and environments being closed off to them. Another area of grave concern to both the operators and users of computer systems is the increasing prevalence of computer viruses. A computer virus is generally defined as any sort of destructive computer program, though the term is usually reserved for the most dangerous ones. The ethos of a computer virus is an intent to cause damage, “akin to vandalism on a small scale, or terrorism on a grand scale.” There are many ways in which viruses can be spread.

A virus can be introduced to networked computers, thereby infecting every computer on the network, or spread by sharing disks between computers. As more home users now have access to modems, bulletin board systems where users may download software have increasingly become targets of viruses. Viruses cause damage either by attacking another file, by simply filling up the computer’s memory, or by using up the computer’s processor power. There are a number of different types of viruses, but one factor common to most of them is that they all copy themselves (or parts of themselves).

Viruses are, in essence, self-replicating. We will now consider a “pseudo-virus” called a worm. People in the computer industry do not agree on the distinctions between worms and viruses. Regardless, a worm is a program specifically designed to move through networks. A worm may have constructive purposes, such as finding machines with free resources that could be more efficiently used, but usually a worm is used to disable or slow down computers. More specifically, worms are defined as “computer virus programs … [which] propagate on a computer network without the aid of an unwitting human accomplice. These programs move of their own volition based upon stored knowledge of the network structure.”
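Since the passage defines a worm by how it moves through a network, a harmless way to see the idea is to simulate propagation over an abstract graph. The sketch below is purely illustrative: the “network” is an invented dictionary of host names, and infection is modeled as a simple breadth-first traversal, not as any real exploit.

    from collections import deque

    # Hypothetical machines and their connections; no real hosts involved.
    network = {
        "mail": ["web", "db"],
        "web": ["mail", "files"],
        "db": ["mail"],
        "files": ["web", "backup"],
        "backup": ["files"],
    }

    def spread(start):
        # Breadth-first spread: each "infected" node probes its neighbors.
        infected = {start}
        queue = deque([start])
        while queue:
            host = queue.popleft()
            for neighbor in network[host]:
                if neighbor not in infected:
                    infected.add(neighbor)
                    queue.append(neighbor)
        return infected

    print(spread("mail"))  # every reachable host ends up "infected"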

Another type of virus is the “Trojan Horse.” These viruses hide inside another, seemingly harmless program, and once the Trojan Horse program is used on the computer system, the virus spreads. One of the most famous virus types of recent years is the Time Bomb, a delayed-action virus. This type of virus gained notoriety as a result of the Michelangelo virus, which was designed to erase the hard drives of people using IBM-compatible computers on the artist’s birthday.

Michelangelo was so prevalent that it was even distributed accidentally by some software publishers when the software developers’ computers became infected. SYSOPs must also worry about being liable to their users as a result of viruses which cause a disruption in service. Viruses can cause a disruption in service or service can be suspended to prevent the spread of a virus. If the SYSOP has guaranteed to provide continuous service then any disruption in service could result in a breach of contract and litigation could ensue.

However, contract provisions could provide for excuse or deferral of obligations in the event of such disruptions. The first federal computer crime law, entitled the Counterfeit Access Device and Computer Fraud and Abuse Act of 1984, was passed in October of 1984. The Act made it a felony to knowingly access a computer without authorisation, or in excess of authorisation, in order to obtain classified United States defence or foreign relations information with the intent or reason to believe that such information would be used to harm the United States or to advantage a foreign nation. The Act also attempted to protect financial data.

Attempted access to obtain information from financial records of a financial institution or in a consumer file of a credit reporting agency was also outlawed. Access to use, destroy, modify or disclose information found in a computer system (as well as to prevent authorised use of any computer used for government business) was also made illegal. The 1984 Act had several shortcomings, and was revised as the Computer Fraud and Abuse Act of 1986.

Three new crimes were added to the 1986 Act. These were a computer fraud offence, modelled after federal mail and wire fraud statutes; an offence for the alteration, damage or destruction of information contained in a “federal interest computer”; and an offence for trafficking in computer passwords under some circumstances. Even the knowing and intentional possession of a sufficient amount of counterfeit or unauthorised “access devices” is illegal.

This statute has been interpreted to cover computer passwords “which may be used to access computers to wrongfully obtain things of value, such as telephone and credit card services.” Business crimes of all types are probably decreasing as a direct result of increasing automation. When a business activity is carried out with computer and communications systems, data are better protected against modification, destruction, disclosure, misappropriation, misrepresentation, and contamination.

Computers impose a discipline on information workers and facilitate the use of almost perfect automated controls that were never possible when these had to be applied by the workers themselves under management edict. Computer hardware and software manufacturers are also designing computer systems and programs that are more resistant to abuse. Recent U.S. legislation, including laws concerning privacy, credit card fraud and racketeering, provides criminal-justice agencies with tools to fight business crime.

As of 1988, all but two states had specific computer-crime laws, and a federal computer-crime law (1986) deals with certain crimes involving computers in different states and in government activities. There are no valid statistics about the extent of computer crime. Victims often resist reporting suspected cases, because they can lose more from embarrassment, lost reputation, litigation, and other consequential losses than from the acts themselves. Limited evidence indicates that the number of cases is rising each year because of the increasing number of computers in business applications where crime has traditionally occurred.

Computer Crime In The 1990’s

We’re being ushered into the digital frontier. It’s a cyberland with incredible promise and untold dangers. Are we prepared? It’s a battle between modern day computer cops and digital hackers. Essentially, just think about what is controlled by computer systems: virtually everything. By programming a telephone voice mail to repeat the word yes over and over again, a hacker has beaten the system. The hacker of the 1990’s is increasingly organized, very clear about what they’re looking for, and very, very sophisticated in their methods of attack.

As hackers have become more sophisticated and more destructive, governments, phone companies and businesses are struggling to defend themselves.

Phone Fraud

In North America the telecommunications industry estimates long distance fraud costs five hundred million, perhaps up to a billion, every year; the exact figures are hard to be sure of, but in North America alone phone fraud committed by computer hackers costs three, four, maybe even up to five billion dollars every year. Making an unwitting company pay for long distance calls is the most popular form of phone fraud today.

The first step is to gain access to a private automated branch exchange, known as a “PABX” or “PBX”. One of these can be found in any company with twenty or more employees. A “PABX” is a computer that manages the phone system, including its voice mail. Once inside a “PABX”, a hacker looks for a phone whose voice mail has not yet been programmed; then the hacker cracks its access code and programs its voice mail account to accept charges for long distance calls. Until the authorities catch on, which may take a few days, hackers can use such voice mail accounts to make free and untraceable calls all over the world.

The hackers that commit this type of crime are becoming increasingly organized. Known as “call cell operators”, they set up fly-by-night storefronts where people off the street can come in and make long distance calls at a large discount; for the call cell operators, of course, the calls cost nothing, because by hacking into a PABX system they can put all the charges on the victimized company’s tab. With a set of stolen voice mail access codes, known as “good numbers”, hackers can crack into another phone whenever a company disables the one they’re using.

In some cases call cell operators have run up hundreds of thousands of dollars in long distance charges, driving businesses and companies straight into bankruptcy. Hacking into a PABX is not as complicated as some people seem to think. The typical scenario is an individual with a “demon dialer” hooked up to a personal computer at home; it doesn’t need to be a high-powered machine at all, simply one connected to the telephone system through a modem.

This “demon dialer” is then programmed to dial with the express purpose of looking for and recording dial tone. A demon dialer is a software program that automatically calls thousands of phone numbers to find ones that are connected to computers; it is a basic hacker tool that can be downloaded from the internet, and such programs are extremely easy to use. The intention is to acquire dial tone, which enables the hacker to move freely through the telephone network. It’s generally getting more sinister: we are now seeing a criminal element involved, in terms of the crimes they commit, in drugs, money laundering and so on.

These people are very careful; they want to hide their call patterns, so they’ll hire hackers to get codes for them so they can dial from several different calling locations and cannot be detected. The world’s telephone network is a vast maze with many places to hide, but once a hacker is located, the phone company and police can track their every move. The way they keep track is by means of a device called a “DNR”, or dial number recorder. This device monitors the dialing patterns of any suspected hacker.

It lists all the numbers that have been dialed from their location, the duration of each telephone call and the time of disconnection. The process of catching a hacker begins at the phone company’s central office, where thousands of lines converge on a mainframe computer; technicians can locate the exact line that leads to a suspected hacker’s phone at the touch of a button. With the “DNR” device the “computer police” retrieve the numbers dialed and establish why the calls were made; if a call was made with illegal intent they will take action, and the person can be put in prison for up to five years and fined up to $7,500.
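A DNR, as described here, is essentially a log of dialed numbers and call durations, so flagging a suspected demon dialer amounts to scanning that log for bursts of very short calls. The sketch below assumes an invented record format and threshold; a real dial number recorder is specialized phone-company equipment, not a Python script.

    from collections import Counter

    # Invented (line, dialed_number, duration_seconds) records.
    records = [
        ("555-0101", "555-9001", 2),
        ("555-0101", "555-9002", 2),
        ("555-0101", "555-9003", 1),
        ("555-0199", "555-8000", 340),
    ]

    def flag_scanners(records, min_calls=3, max_duration=5):
        # Many very short calls from one line is a war-dialer signature.
        short_calls = Counter(
            line for line, _, duration in records if duration <= max_duration
        )
        return [line for line, n in short_calls.items() if n >= min_calls]

    print(flag_scanners(records))  # ['555-0101']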

The telephone network is a massive electronic network that depends on thousands of computer-run software programs, and in theory all this software can be reprogrammed for criminal use. The telephone system is, in other words, a potentially vulnerable system: by cracking the right codes and inputting the correct passwords, a hacker can sabotage a switching system serving millions of phones, paralyzing a city with a few keystrokes. Security experts say telephone terrorism poses a threat society hasn’t even begun to fathom! You have people hacking into systems all the time.

There were groups in the U.S.A. in 1993 that shut down three of the four telephone switch stations on the east coast; if they had shut down the final switch station as well, the whole east coast would have been without phones. Things of this nature can happen and have happened in the past. Back in the old days you had mechanical switches doing crossbars, things of that nature. Today all telephone switches are computerized, and they’re everywhere. With a computer switch, if you take the first word, “computer”, that’s exactly what it is: a switch being operated by a computer.

The computer is connected to a modem; so are you and all the hackers, and therefore you too can run the switches. Our generation is the first to travel within cyberspace, a virtual world that exists with all the computers that form the global net. For most people today cyberspace is still a bewildering and alien place. How computers work and how they affect our lives is still a mystery to all but the experts, but expertise doesn’t necessarily guarantee morality. Originally the word hacker meant a computer enthusiast, but now that the internet has revealed its potential for destruction and profit, the hacker has become the outlaw of cyberspace.

Not only do hackers commit crimes that cost millions of dollars, they also publicize their illegal techniques on the net, where innocent minds can find them and be seduced by the allure of power and money. This vast electronic neighborhood of bits and bytes has stretched the concepts of law and order. Like handbills stapled to telephone poles, the internet appears to defy regulation. The subtleties and nuances of this relatively new medium give new meaning to the words “a gray area” and “right and wrong”.

Most self-described hackers say they have been given a bad name and that they deserve more respect. For the most part, they say, hackers abide by the law; when they do steal a password or break into a network, they are motivated by a desire for knowledge, not by malicious intent. Teenagers are especially attracted by the idea of getting something for nothing. When system managers try to explain to hackers that it is wrong to break into computer systems, there is no point, because hackers with the aid of a computer possess tremendous power.

They cannot be controlled and they have the ability to break into any computer system they feel like. But suppose one day a hacker decides to break into a system owned by a hospital, and this computer is in charge of programming the therapy for a patient there. If the hacker inputs the incorrect code, the therapy can be interfered with and the patient may be seriously hurt, even though this wasn’t done deliberately. These are the types of circumstances that give hackers a bad reputation. Today anyone with a computer and a modem can enter millions of computer systems around the world.

On the net they say bits have no boundaries; this means a hacker halfway around the world can steal passwords and credit card numbers, break into computer systems and plant crippling viruses as easily as if they were just around the corner. The global network allows hackers to reach out and rob distant people with lightning speed. If cyberspace is a type of community, a giant neighborhood made up of networked computer users around the world, then it seems natural that many elements of traditional society can be found taking shape as bits and bytes.

With electronic commerce comes electronic merchants, plugged-in educators provide networked education, and doctors meet with patients in offices on-line. It should come as no surprise that there are also cybercriminals committing cybercrimes. As an unregulated hodgepodge of corporations, individuals, governments, educational institutions, and other organizations that have agreed in principle to use a standard set of communication protocols, the internet is wide open to exploitation. There are no sheriffs on the information highway waiting to zap potential offenders with a radar gun or search for weapons if someone looks suspicious.

By almost all accounts, this lack of “law enforcement” leaves net users to regulate each other according to the reigning norms of the moment. Community standards in cyberspace appear to be vastly different from the standards found at the corner of Markham and Lawrence. Unfortunately, cyberspace is also a virtual tourist trap where faceless, nameless con artists can work the crowds. Mimicking real life, crimes and criminals come in all varieties on the internet. The FBI’s National Computer Squad is dedicated to detecting and preventing all types of computer-related crimes.

Some issues being carefully studied by everyone from net veterans and law enforcement agencies to radical hackers include the following.

Computer Network Break-Ins

Using software tools installed on a computer in a remote location, hackers can break into computer systems to steal data, plant viruses or trojan horses, or work mischief of a less serious sort by changing user names or passwords. Network intrusions have been made illegal by the U.S. federal government, but detection and enforcement are difficult.

Industrial Espionage

Corporations, like governments, love to spy on the enemy.

Networked systems provide new opportunities for this, as hackers-for-hire retrieve information about product development and marketing strategies, rarely leaving behind any evidence of the theft. Not only is tracing the criminal labor-intensive, convictions are hard to obtain when laws are not written with electronic theft in mind.

Software Piracy

According to estimates by the U.S. Software Publisher’s Association, as much as $7.5 billion of American software may be illegally copied and distributed worldwide. These copies work as well as the originals, and sell for significantly less money.

Piracy is relatively easy, and usually only the largest rings of distributors are likely to serve hard jail time when prisons are overcrowded with people convicted of more serious crimes.

Child Pornography

This is one crime that is clearly illegal, both on and off the internet. Crackdowns may catch some offenders, but there are still ways to acquire images of children in varying stages of dress and performing a variety of sexual acts. Legally speaking, people who provide access to child porn face the same charges whether the images are digital or on a piece of paper.

Trials of network users arrested in a recent FBI bust may challenge the validity of those laws as they apply to online services.

Mail Bombings

Software can be written that will instruct a computer to do almost anything, and terrorism has hit the internet in the form of mail bombings. By instructing a computer to repeatedly send mail (email) to a specified person’s email address, the cybercriminal can overwhelm the recipient’s personal account and potentially shut down entire systems. This may not be illegal, but it is certainly disruptive.

Password Sniffers

Password sniffers are programs that monitor and record the name and password of network users as they log in, jeopardizing security at a site. Whoever installs the sniffer can then impersonate an authorized user and log in to access restricted documents. Laws are not yet adequate to prosecute a person for impersonating another person on-line, but laws designed to prevent unauthorized access to information may be effective in apprehending hackers using sniffer programs.

The Wall Street Journal suggests in recent reports that hackers may have sniffed out passwords used by members of America On-line, a service with more than 3.5 million subscribers. If the reports are accurate, even the president of the service found his account security jeopardized.

Spoofing

Spoofing is the act of disguising one computer to electronically “look” like another computer in order to gain access to a system that would normally be restricted. Legally, this can be handled in the same manner as password sniffers, but the law will have to change if spoofing is going to be addressed with more than a quick-fix solution.

Spoofing was used to access valuable documents stored on a computer belonging to security expert Tsutomu Shimomura.

Credit Card Fraud

The U.S. Secret Service believes that half a billion dollars may be lost annually by customers who have credit card and calling card numbers stolen from on-line databases. Security measures are improving, and traditional methods of law enforcement seem to be sufficient for prosecuting the thieves of such information.

Bulletin boards and other on-line services are frequent targets for hackers who want to access large databases of credit card information. Such attacks usually result in the implementation of stronger security systems. Since there is no single widely-used definition of computer-related crime, computer network users and law enforcement officials must distinguish between illegal or deliberate network abuse and behavior that is merely annoying. Legal systems everywhere are busily studying ways of dealing with crimes and criminals on the internet.

Brief History Of Databases

In the 1960’s, the use of mainframe computers became widespread in many companies. To access the vast amounts of stored information, these companies started to use programs written in languages like COBOL and FORTRAN. Data accessibility and data sharing soon became important features because of the large amount of information required by different departments within certain companies. With this system, each application owned its own data files. The problems associated with this type of file processing were uncontrolled redundancy, inconsistent data, inflexibility, poor enforcement of standards, and excessive program maintenance.

In 1964, MIS (Management Information Systems) was introduced. This would prove to be very influential on future designs of computer systems and the methods they would use in manipulating data. In 1966, Philip Kotler offered the first description of how managers could benefit from the powerful capabilities of the electronic computer as a management tool. In 1969, Berson developed a marketing information system for marketing research. In 1970, the Montgomery urban model was developed, stressing the quantitative aspect of management by highlighting a data bank, a model bank, and a measurement statistics bank.

All of these factors would be influential on future models of storing data in a pool. According to Martine, in 1981, a database is a shared collection of interrelated data designed to meet the needs of multiple types of end users. The data are stored in one location so that they are independent of the programs that use them, keeping in mind data integrity with respect to the approaches to adding new data, modifying data, and retrieving existing data. A database is shared and perceived differently by multiple users. This led to the arrival of Database Management Systems.

These systems first appeared around the 1970s as solutions to problems associated with mainframe computers. Originally, pre-database programs accessed their own data files. Consequently, similar data had to be stored in other areas where that certain piece of information was relevant. Simple things like addresses were stored in customer information files, accounts receivable records, and so on. This created redundancy and inefficiency. Updating files, like storing files, was also a problem. When a customer’s address changed, all the fields where that customer’s address was stored had to be changed.

If a field happened to be missed, then an inconsistency was created. When requests to develop new ways to manipulate and summarize data arose, it only added to the problem of having files attached to specific applications. New system design had to be done, including new programs and new data file storage methods. The close connection between data files and programs sent the costs for storage and maintenance soaring. This, combined with an inflexible method of specifying the kinds of data that could be extracted, gave rise to the need to design an effective and efficient system.
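To make the update problem concrete, the sketch below contrasts the pre-database situation, where each application keeps its own copy of a customer’s address, with a single shared store. The customer data and application names are invented for illustration.

    # Pre-database: each application owns its own copy of the same fact.
    billing  = {"cust_17": {"address": "12 Elm St"}}
    shipping = {"cust_17": {"address": "12 Elm St"}}

    billing["cust_17"]["address"] = "9 Oak Ave"       # shipping was missed
    print(billing["cust_17"] == shipping["cust_17"])  # False -- inconsistent

    # Database approach: one shared record, referenced by every program.
    customers = {"cust_17": {"address": "12 Elm St"}}
    billing_view = shipping_view = customers          # same underlying store

    customers["cust_17"]["address"] = "9 Oak Ave"
    print(billing_view["cust_17"] == shipping_view["cust_17"])  # True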

Here is where Database Management Systems helped restore order to a system of inefficiency. Instead of having separate files for each program, one single collection of information was kept: a database. Now many programs could, through a database manager, access one database with the confidence of knowing that they were accessing up-to-date, consistent information. Some early DBMSs included Condor 3, dBaseIII, Knowledgeman, Omnifile, Please, Power-Base, and R-Base 4000. Condor 3, dBaseIII, and Omnifile will be examined more closely.

Condor 3

Condor 3 is a relational database management system that has evolved in the microcomputer environment since 1977. Condor provides multi-file, menu-driven relational capabilities and a flexible command language. By using a word processor, due to the absence of a text editor, frequently used commands can be automated. Condor 3 is an application development tool for multiple-file databases. Although it lacks some capabilities, like procedure repetition, it makes up for it with its ease of use and decent speed. Condor 3 utilizes the advantages of menu-driven design.

Its portability enables it to import and export data files in five different ASCII formats. Defining file structures is relatively straightforward: by typing the field names and their lengths, the main part of designing the structure is about complete. Condor uses six data types: alphabetic, alphanumeric, numeric, decimal numeric, Julian date, and dollar. Once the fields have been designed, data entry is as easy as pressing enter and inputting the respective values into the appropriate fields, and like the newer databases, Condor too can use the Update, Delete, Insert, and Backspace commands.

Accessing data is done by creating an index. The index can be used to perform sorts and arithmetic.

dBaseIII

dBaseIII is a relational DBMS which was partially built on dBaseII. Like Condor 3, dBaseIII is menu-driven and has its menus built in several levels. One of the problems discovered was that higher-level commands were not included in all menu levels. That is, the menus are limited to only basic commands, and anything above that is not supported. Many of the basic capabilities are easy to use, but like Condor, dBaseIII has inconsistencies and inefficiencies.

The keys used to move and select items in specific menus are not always consistent throughout. If you mark an item to be selected from a list, once it’s marked it cannot be unmarked. The only way to correct this is to start over and enter everything again. This is time consuming and obviously inefficient. Although the menus are helpful and guide you through the stages or levels, there is the option to turn off the menus and work at a little faster rate. dBaseIII’s commands are procedural (function oriented) and flexible.

It utilizes many of the common functions, letting you select records, select fields, include expressions (such as calculations), redirect output to the screen or to the printer, and store results separately from the application. Included in dBaseIII is a limited editor which will let you create commands using the editor or a word processor. Unfortunately, it is still limited to certain commands; for example, it cannot create move or copy commands. It also has a screen design package which enables you to design how you want your screen to look.

The minimum RAM requirement of 256K for this package really illustrates how old this application is. The most noticeable problem documented about dBaseIII is its inability to edit command lines. If, for example, an error was made entering the name and address of a customer, simply backing up and correcting the wrong character is impossible without deleting everything up to the correction and re-entering it again. dBaseIII is portable and straightforward to work with. It allows users to import and export files in two forms: fixed-length fields and delimited fields.

It can also perform dBaseII conversions. Creating file structures is simple using the menus or the create command. It has field types that are still being used today by applications such as Microsoft Access, for example numeric fields and memo fields, which let you enter sentences or pieces of information, like a customer’s address, which might vary in length from record to record. Unlike Condor 3, dBaseIII is able to edit fields without having to start over. Inserting new fields or deleting old fields can be done quite easily.

Data manipulation and query is very accessible through a number of built-in functions. The list and display commands enable you to see the entire file, selected records, and selected files. The browse command allows you to scroll through all the fields inserting or editing records at the same time. Calculation functions like sum, average, count, and total allow you to perform arithmetic operations on data in a file. There are other functions available like date and time functions, rounding, and formatting.
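The list, sum, average, count, and total commands described above behave like ordinary aggregate operations over a file of records. The sketch below mimics them in Python over an invented file; it is an analogy for how such commands work, not dBaseIII syntax.

    # An invented data file: one dictionary per record.
    records = [
        {"name": "Smith", "balance": 120.0},
        {"name": "Jones", "balance": 75.5},
        {"name": "Chan",  "balance": 210.0},
    ]

    # "list"/"display" of selected records: balances over 100
    over_100 = [r for r in records if r["balance"] > 100]

    # aggregate commands: count, sum (total), average
    count = len(records)
    total = sum(r["balance"] for r in records)
    average = total / count

    print(over_100)                         # Smith and Chan
    print(count, total, round(average, 2))  # 3 405.5 135.17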

Computers: How They Affect Our Lives

Only once in a lifetime will a new invention come about to touch every aspect of our lives. Such a device that changes the way we work, live, and play is a special one, indeed. A machine that has done all this and more now exists in nearly every business in the US and one out of every two households. This incredible invention is the computer. The electronic computer has been around for over a half-century, but its ancestors have been around for 2000 years. However, only in the last 40 years has it changed the American society.

From the first wooden abacus to the latest high-speed microprocessor, the computer has changed nearly every aspect of people’s lives for the better. The very earliest existence of the modern day computer’s ancestor is the abacus. These date back to almost 2000 years ago. It is simply a wooden rack holding parallel wires on which beads are strung. When these beads are moved along the wire according to “programming” rules that the user must memorize, all ordinary arithmetic operations can be performed.

The next innovation in computers took place in 1642 when Blaise Pascal invented the first “digital calculating machine”. It could only add numbers, and they had to be entered by turning dials. It was designed to help Pascal’s father, who was a tax collector. In the early 1800’s, a mathematics professor named Charles Babbage designed an automatic calculation machine. It was steam powered and could store up to 1000 50-digit numbers. Built into his machine were operations that included everything a modern general-purpose computer would need.

It was programmed by–and stored data on–cards with holes punched in them, appropriately called “punchcards”. His inventions were failures for the most part because of the lack of precision machining techniques used at the time and the lack of demand for such a device. After Babbage, people began to lose interest in computers. However, between 1850 and 1900 there were great advances in mathematics and physics that began to rekindle the interest. Many of these new advances involved complex calculations and formulas that were very time consuming for human calculation.

The first major use for a computer in the US was during the 1890 census. Two men, Herman Hollerith and James Powers, developed a new punched-card system that could automatically read information on cards without human intervention. Since the population of the US was increasing so fast, the computer was an essential tool in tabulating the totals. These advantages were noted by commercial industries and soon led to the development of improved punch-card business-machine systems by International Business Machines (IBM), Remington-Rand, Burroughs, and other corporations.

By modern standards the punched-card machines were slow, typically processing from 50 to 250 cards per minute, with each card holding up to 80 digits. At the time, however, punched cards were an enormous step forward; they provided a means of input, output, and memory storage on a massive scale. For more than 50 years following their first use, punched-card machines did the bulk of the world’s business computing and a good portion of the computing work in science.

By the late 1930s punched-card machine techniques had become so well established and reliable that Howard Hathaway Aiken, in collaboration with engineers at IBM, undertook construction of a large automatic digital computer based on standard IBM electromechanical parts. Aiken’s machine, called the Harvard Mark I, handled 23-digit numbers and could perform all four arithmetic operations. Also, it had special built-in programs to handle logarithms and trigonometric functions. The Mark I was controlled from prepunched paper tape. Output was by card punch and electric typewriter.

It was slow, requiring 3 to 5 seconds for a multiplication, but it was fully automatic and could complete long computations without human intervention. The outbreak of World War II produced a desperate need for computing capability, especially for the military. New weapons systems were produced which needed trajectory tables and other essential data. In 1942, J. Presper Eckert, John W. Mauchly, and their associates at the University of Pennsylvania decided to build a high-speed electronic computer to do the job. This machine became known as ENIAC, for “Electronic Numerical Integrator And Calculator”.

It could multiply two numbers at the rate of 300 products per second, by finding the value of each product from a multiplication table stored in its memory. ENIAC was thus about 1,000 times faster than the previous generation of computers. ENIAC used 18,000 standard vacuum tubes, occupied 1800 square feet of floor space, and used about 180,000 watts of electricity. It used punched-card input and output. The ENIAC was very difficult to program because one had to essentially re-wire it to perform whatever task he wanted the computer to do. It was, however, efficient in handling the particular programs for which it had been designed.
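The multiplication method attributed to ENIAC above, looking single-digit products up in a stored table and combining them, can be sketched in a few lines. This toy version only illustrates the table-lookup idea; it is not a model of ENIAC’s actual circuitry.

    # Stored table of all single-digit products, built once.
    TABLE = {(a, b): a * b for a in range(10) for b in range(10)}

    def multiply(x, y):
        # Assemble the product from table lookups, shifted by place value.
        total = 0
        for i, xd in enumerate(reversed(str(x))):
            for j, yd in enumerate(reversed(str(y))):
                total += TABLE[(int(xd), int(yd))] * 10 ** (i + j)
        return total

    print(multiply(127, 46))  # 5842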

ENIAC is generally accepted as the first successful high-speed electronic digital computer and was used in many applications from 1946 to 1955. Mathematician John von Neumann was very interested in the ENIAC. In 1945 he undertook a theoretical study of computation that demonstrated that a computer could have a very simple, fixed structure and yet be able to execute any kind of computation effectively by means of proper programmed control, without the need for any changes in hardware. Von Neumann came up with incredible ideas for methods of building and organizing practical, fast computers.

These ideas, which came to be referred to as the stored-program technique, became fundamental for future generations of high-speed digital computers and were universally adopted. The first wave of modern programmed electronic computers to take advantage of these improvements appeared in 1947. This group included computers using random access memory (RAM), which is a memory designed to give almost constant access to any particular piece of information. These machines had punched-card or punched-tape input and output devices and RAMs of 1000-word capacity.

Physically, they were much more compact than ENIAC: some were about the size of a grand piano and required 2500 small electron tubes. This was quite an improvement over the earlier machines. The first-generation stored-program computers required considerable maintenance, usually attained 70% to 80% reliable operations, and were used for 8 to 12 years. Typically, they were programmed directly in machine language, although by the mid-1950s progress had been made in several aspects of advanced programming. This group of machines included EDVAC and UNIVAC, the first commercially available computers.

John W. Mauchly and J. Presper Eckert developed the UNIVAC in the 1950’s. Together they had formed the Eckert-Mauchly Computer Corporation, America’s first computer company, in the 1940’s. During the development of the UNIVAC, they began to run short on funds and sold their company to the larger Remington-Rand Corporation. Eventually they built a working UNIVAC computer. It was delivered to the US Census Bureau in 1951, where it was used to help tabulate the US population. Early in the 1950s two important engineering discoveries changed the electronic computer field.

The first computers were made with vacuum tubes, but by the late 1950’s computers were being made out of transistors, which were smaller, less expensive, more reliable, and more efficient. In 1959, Robert Noyce, a physicist at the Fairchild Semiconductor Corporation, invented the integrated circuit, a tiny chip of silicon that contained an entire electronic circuit. Gone was the bulky, unreliable, but fast machine; now computers began to become more compact, more reliable and have more capacity. These new technical discoveries rapidly found their way into new models of digital computers.

Memory storage capacities increased 800% in commercially available machines by the early 1960s, and speeds increased by an equally large margin. These machines were very expensive to purchase or to rent and were especially expensive to operate because of the cost of hiring programmers to perform the complex operations the computers ran. Such computers were typically found in large computer centers, operated by industry, government, and private laboratories and staffed with many programmers and support personnel. By 1956, 76 of IBM’s large computer mainframes were in use, compared with only 46 UNIVACs.

In the 1960s efforts to design and develop the fastest possible computers with the greatest capacity reached a turning point with the completion of the LARC machine for Livermore Radiation Laboratories by the Sperry-Rand Corporation, and the Stretch computer by IBM. The LARC had a core memory of 98,000 words and multiplied in 10 microseconds. Stretch was provided with several ranks of memory having slower access for the ranks of greater capacity, the fastest access time being less than 1 microsecond and the total capacity in the vicinity of 100 million words.

During this time the major computer manufacturers began to offer a range of computer capabilities, as well as various computer-related equipment. These included input means such as consoles and card feeders; output means such as page printers, cathode-ray-tube displays, and graphing devices; and optional magnetic-tape and magnetic-disk file storage. These found wide use in business for such applications as accounting, payroll, inventory control, ordering supplies, and billing. Central processing units (CPUs) for such purposes did not need to be very fast arithmetically and were primarily used to access large amounts of records on file.

The greatest number of computer systems were delivered for the larger applications, such as in hospitals for keeping track of patient records, medications, and treatments given. They were also used in automated library systems and in database systems such as the Chemical Abstracts system, where computer records now on file cover nearly all known chemical compounds. The trend during the 1970s was, to some extent, away from extremely powerful, centralized computational centers and toward a broader range of applications for less-costly computer systems.

Most continuous-process manufacturing, such as petroleum refining and electrical-power distribution systems, began using computers of relatively modest capability for controlling and regulating their activities. In the 1960s the programming of applications problems was an obstacle to the self-sufficiency of moderate-sized on-site computer installations, but great advances in applications programming languages removed these obstacles. Applications languages became available for controlling a great range of manufacturing processes, for computer operation of machine tools, and for many other tasks.

In 1971 Marcian E. Hoff, Jr., an engineer at the Intel Corporation, invented the microprocessor and another stage in the development of the computer began. A new revolution in computer hardware was now well under way, involving miniaturization of computer-logic circuitry and of component manufacture by what are called large-scale integration techniques. In the 1950s it was realized that “scaling down” the size of electronic digital computer circuits and parts would increase speed and efficiency and improve performance. However, at that time the manufacturing methods were not good enough to accomplish such a task.

About 1960 photoprinting of conductive circuit boards to eliminate wiring became highly developed. Then it became possible to build resistors and capacitors into the circuitry by photographic means (Rogers, 142). In the 1970s entire assemblies, such as adders, shifting registers, and counters, became available on tiny chips of silicon. In the 1980s very large scale integration (VLSI), in which hundreds of thousands of transistors are placed on a single chip, became increasingly common. Many companies, some new to the computer field, introduced in the 1970s programmable minicomputers supplied with software packages.

The size-reduction trend continued with the introduction of personal computers, which are programmable machines small enough and inexpensive enough to be purchased and used by individuals. One of the first such machines was introduced in January 1975. Popular Electronics magazine provided plans that would allow any electronics wizard to build his own small, programmable computer for about $380 (Rose, 32). The computer was called the “Altair 8800”. Its programming involved pushing buttons and flipping switches on the front of the box. It didn’t include a monitor or keyboard, and its applications were very limited (Jacobs, 53).

Even so, many orders came in for it, and several famous owners of computer and software manufacturing companies got their start in computing through the Altair. For example, Steve Jobs and Steve Wozniak, founders of Apple Computer, built a much cheaper, yet more productive version of the Altair and turned their hobby into a business. After the introduction of the Altair 8800, the personal computer industry became a fierce battleground of competition. IBM had been the computer industry standard for well over a half-century. They held their position as the standard when they introduced their first personal computer, the IBM PC, in 1981.

However, the newly formed Apple Computer company was releasing its own personal computer, the Apple II (the Apple I, the first computer designed by Jobs and Wozniak in Wozniak’s garage, was not produced on a wide scale). Software was needed to run the computers as well. Microsoft developed a Disk Operating System (MS-DOS) for the IBM computer while Apple developed its own software system. Because Microsoft had now set the software standard for IBMs, every software manufacturer had to make their software compatible with Microsoft’s. This would lead to huge profits for Microsoft.

The main goal of the computer manufacturers was to make the computer as affordable as possible while increasing speed, reliability, and capacity. Nearly every computer manufacturer accomplished this and computers popped up everywhere. Computers were in businesses keeping track of inventories. Computers were in colleges aiding students in research. Computers were in laboratories making complex calculations at high speeds for scientists and physicists. The computer had made its mark everywhere in society and built up a huge industry. The future is promising for the computer industry and its technology.

The speed of processors is expected to double every year and a half in the coming years. As manufacturing techniques are further perfected, the prices of computer systems are expected to steadily fall. However, since microprocessor technology will keep advancing, its higher costs will offset the drop in price of older processors. In other words, the price of a new computer will stay about the same from year to year, but technology will steadily increase. Since the end of World War II, the computer industry has grown from a standing start into one of the biggest and most profitable industries in the United States.

It now comprises thousands of companies, making everything from multi-million dollar high-speed supercomputers to printout paper and floppy disks. It employs millions of people and generates tens of billions of dollars in sales each year. Surely, the computer has impacted every aspect of people’s lives. It has affected the way people work and play. It has made everyone’s life easier by doing difficult work for people. The computer truly is one of the most incredible inventions in history.

History of Computers

Historically, the most important early computing instrument is the abacus, which has been known and widely used for more than 2,000 years. Another computing instrument, the astrolabe, was also in use about 2,000 years ago for navigation. Blaise Pascal is widely credited with building the first “digital calculating machine” in 1642. It performed only additions of numbers entered by means of dials and was intended to help Pascal’s father, who was a tax collector.

In 1671, Gottfried Wilhelm von Leibniz invented a computer that was built in 1694; it could add and, by successive adding and shifting, multiply. Leibniz invented a special “stepped gear” mechanism for introducing the addend digits, and this mechanism is still in use. The prototypes built by Leibniz and Pascal were not widely used but remained curiosities until more than a century later, when Tomas of Colmar (Charles Xavier Thomas) developed (1820) the first commercially successful mechanical calculator that could add, subtract, multiply, and divide.
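The phrase “by successive adding and shifting” describes multiplication built entirely from addition and decimal shifts, which a short sketch can make explicit. The digit-by-digit loop below is an illustrative reconstruction of the idea, not a model of Leibniz’s actual gearing.

    def shift_and_add(x, y):
        # Multiply x by y using only repeated addition and base-10 shifts.
        total = 0
        shifted = x                 # x shifted left to the current place
        while y > 0:
            digit = y % 10
            for _ in range(digit):
                total += shifted    # add the shifted addend 'digit' times
            y //= 10
            shifted = shifted * 10  # shift one decimal place left
        return total

    print(shift_and_add(37, 24))  # 888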

A succession of improved “desk-top” mechanical calculators by various inventors followed, so that by about 1890 the available built-in operations included accumulation of partial results, storage and reintroduction of past results, and printing of results, each requiring manual initiation. These improvements were made primarily to suit commercial users, with little attention given to the needs of science. While Tomas of Colmar was developing the desktop calculator Charles Babbage initiated a series of very remarkable developments in computers in Cambridge, England.

Babbage realized (1812) that many long computations, especially those needed to prepare mathematical tables, consisted of routine operations that were regularly repeated; from this he surmised that it ought to be possible to do these operations automatically. He began to design an automatic mechanical calculating machine, which he called a “difference engine,” and by 1822 he had built a small working model for demonstration. With financial help from the British government, Babbage started construction of a full-scale difference engine in 1823.

It was intended to be steam-powered; fully automatic, even to the printing of the resulting tables; and commanded by a fixed instruction program. The difference engine, although of limited flexibility and applicability, was conceptually a great advance. Babbage continued work on it for 10 years, but in 1833 he lost interest because he had a “better idea”: the construction of what today would be described as a general-purpose, fully program-controlled, automatic mechanical digital computer.

Babbage called his machine an “analytical engine”; the characteristics aimed at by this design show true prescience, although this could not be fully appreciated until more than a century later. The plans for the analytical engine specified a parallel decimal computer operating on numbers (words) of 50 decimal digits and provided with a storage capacity (memory) of 1,000 such numbers. Built-in operations were to include everything that a modern general-purpose computer would need, even the all-important “conditional control transfer” capability, which would allow instructions to be executed in any order, not just in numerical sequence.

The analytical engine was to use punched cards (similar to those used on a Jacquard loom), which were to be read into the machine from any of several reading stations. It was designed to operate automatically, by steam power, with only one attendant. Babbage’s computers were never completed. Various reasons are advanced for his failure, most frequently the lack of precision machining techniques at the time. Another conjecture is that Babbage was working on the solution of a problem that few people in 1840 urgently needed to solve.

After Babbage there was a temporary loss of interest in automatic digital computers. Between 1850 and 1900 great advances were made in mathematical physics, and it came to be understood that most observable dynamic phenomena could be characterized by differential equations, so that ready means for their solution and for the solution of other problems of calculus would be helpful. Moreover, from a practical standpoint, the availability of steam power caused manufacturing, transportation, and commerce to thrive and led to a period of great engineering achievement.

The designing of railroads and the construction of steamships, textile mills, and bridges required differential calculus to determine such quantities as centers of gravity, centers of buoyancy, moments of inertia, and stress distributions; even the evaluation of the power output of a steam engine required practical mathematical integration. A strong need thus developed for a machine that could rapidly perform many repetitive calculations. A step toward automated computation was the introduction of punched cards, which were first successfully used in connection with computing in 1890 by Herman Hollerith and James Powers, working for the U.S. Census Bureau.

They developed devices that could automatically read the information that had been punched into cards, without human intermediation. Reading errors were consequently greatly reduced, workflow was increased, and, more important, stacks of punched cards could be used as an accessible memory store of almost unlimited capacity; furthermore, different problems could be stored on different batches of cards and worked on as needed.

These advantages were noted by commercial interests and soon led to the development of improved punch-card business-machine systems by International Business Machines (IBM), Remington-Rand, Burroughs, and other corporations. These systems used electromechanical devices, in which electrical power provided mechanical motion such as for turning the wheels of an adding machine. Such systems soon included features to feed in automatically a specified number of cards from a “read-in” station; perform such operations as addition, multiplication, and sorting; and feed out cards punched with results.

The machines were slow, typically processing from 50 to 250 cards per minute, with each card holding up to 80 decimal numbers. At the time, however, punched cards were an enormous step forward. By the late 1930s punched-card machine techniques had become well established and reliable, and several research groups strove to build automatic digital computers. An IBM team led by Howard Hathaway Aiken built one promising machine, constructed of standard electromechanical parts. Aiken’s machine, called the Harvard Mark I, handled 23-decimal-place numbers (words) and could perform all four arithmetic operations.

Moreover, it had special built-in programs, or subroutines, to handle logarithms and trigonometric functions. The Mark I was originally controlled from prepunched paper tape without provision for reversal, so that automatic “transfer of control” instructions could not be programmed. Output was by cardpunch and electric typewriter. Although the Mark I used IBM rotating counter wheels as key components in addition to electromagnetic relays, the machine was classified as a relay computer. It was slow, requiring 3 to 5 seconds for a multiplication, but it was fully automatic and could complete long computations.

Mark I was the first of a series of computers designed and built under Aiken’s direction. The outbreak of World War II produced a desperate need for computing capability, especially for the military. New weapons systems were produced for which trajectory tables and other essential data were lacking. In 1942, J. Presper Eckert, John W. Mauchly, and their associates at the Moore School of Electrical Engineering of the University of Pennsylvania decided to build a high-speed electronic computer to do the job.

This machine became known as ENIAC, for Electronic Numerical Integrator and Computer (or Calculator). The size of its numerical word was 10 decimal digits, and it could multiply two such numbers at the rate of 300 products per second, by finding the value of each product from a multiplication table stored in its memory. Although difficult to operate, ENIAC was still many times faster than the previous generation of relay computers. ENIAC used 18,000 standard vacuum tubes, occupied 167.3 m² (1,800 ft²) of floor space, and consumed about 180,000 watts of electrical power.

It had punched-card input and output and arithmetically had 1 multiplier, 1 combined divider and square rooter, and 20 adders employing decimal “ring counters,” which served as adders and also as quick-access (0.0002 second) read-write register storage. The executable instructions composing a program were embodied in the separate units of ENIAC, which were plugged together to form a route through the machine for the flow of computations. These connections had to be redone for each different problem, together with presetting function tables and switches.

This “wire-your-own” instruction technique was inconvenient, and only with some license could ENIAC be considered programmable; it was, however, efficient in handling the particular programs for which it had been designed. ENIAC is generally acknowledged to be the first successful high-speed electronic digital computer (EDC) and was productively used from 1946 to 1955. A controversy developed in 1971, however, over the patentability of ENIAC’s basic digital concepts, the claim being made that another U.S. physicist, John V. Atanasoff, had already used the same ideas in a simpler vacuum-tube device he built in the 1930s at Iowa State College. In 1973 the court found in favor of the company using the Atanasoff claim.

Intrigued by the success of ENIAC, the mathematician John von Neumann undertook (1945) a theoretical study of computation that demonstrated that a computer could have a very simple, fixed physical structure and yet be able to execute any kind of computation effectively by means of proper programmed control without the need for any changes in hardware.

Von Neumann contributed a new understanding of how practical fast computers should be organized and built; these ideas, often referred to as the stored-program technique, became fundamental for future generations of high-speed digital computers. The stored-program technique involves many features of computer design and function besides the one named; in combination, these features make very-high-speed operation feasible. Details cannot be given here, but considering what 1,000 arithmetic operations per second imply may provide a glimpse.

If each instruction in a job program were used only once in consecutive order, no human programmer could generate enough instructions to keep the computer busy. Arrangements must be made, therefore, for parts of the job program called subroutines to be used repeatedly in a manner that depends on how the computation progresses. Also, it would clearly be helpful if instructions could be altered as needed during a computation to make them behave differently.

Von Neumann met these two needs by providing a special type of machine instruction called the conditional control transfer, which permitted the program sequence to be interrupted and reinitiated at any point, and by storing all instruction programs together with data in the same memory unit, so that, when desired, instructions could be arithmetically modified in the same way as data. Computing and programming became faster, more flexible, and more efficient, with the instructions in subroutines performing far more computational work.

Frequently used subroutines did not have to be reprogrammed for each new problem but could be kept intact in “libraries” and read into memory when needed. Thus, much of a given program could be assembled from the subroutine library. The all-purpose computer memory became the assembly place in which parts of a long computation were stored, worked on piecewise, and assembled to form the final results. The computer control served as an errand runner for the overall process. As soon as the advantages of these techniques became clear, the techniques became standard practice.
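To see why these two needs matter, consider the arithmetic: at 1,000 operations per second, a single hour of computing consumes 3.6 million instructions, far more than any programmer could write out one by one. The toy interpreter below (a minimal sketch in Python, not the instruction set of any historical machine) shows the stored-program idea in miniature: instructions and data sit in one memory, and a conditional control transfer (“JNZ”) lets six stored instructions perform an arbitrary amount of looping work:

    # Toy stored-program machine: code and data share one memory, and a
    # conditional transfer ("JNZ") re-runs the loop until the counter hits 0.
    memory = [
        ("LOAD", 10),    # 0: acc = mem[10]   (running total)
        ("ADD", 11),     # 1: acc += mem[11]  (counter)
        ("STORE", 10),   # 2: mem[10] = acc
        ("LOAD", 11),    # 3: acc = mem[11]
        ("SUB", 12),     # 4: acc -= mem[12]  (decrement by one)
        ("STORE", 11),   # 5: mem[11] = acc
        ("JNZ", 0),      # 6: conditional control transfer: loop while acc != 0
        ("HALT", None),  # 7
        None, None,      # 8-9: unused
        0,               # 10: total
        5,               # 11: counter
        1,               # 12: the constant one
    ]

    acc, pc = 0, 0
    while True:
        op, addr = memory[pc]
        pc += 1
        if op == "LOAD":    acc = memory[addr]
        elif op == "ADD":   acc += memory[addr]
        elif op == "SUB":   acc -= memory[addr]
        elif op == "STORE": memory[addr] = acc
        elif op == "JNZ":   pc = addr if acc != 0 else pc
        elif op == "HALT":  break

    print(memory[10])  # 5 + 4 + 3 + 2 + 1 = 15

In a real stored-program machine the instructions are themselves numbers in that same memory, so a STORE aimed at an instruction could alter its address field: exactly the “instructions modified in the same way as data” capability described above (the toy keeps instructions as tuples purely for readability).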

The first generation of modern programmed electronic computers to take advantage of these improvements appeared in 1947. This group included computers using random access memory (RAM), which is a memory designed to give almost constant access to any particular piece of information. These machines had punched-card or punched-tape input and output devices and RAMs of 1,000-word capacity with an access time of 0.5 microsecond (0.5 × 10⁻⁶ sec); some of them could perform multiplications in 2 to 4 microseconds.

Physically, they were much more compact than ENIAC: some were about the size of a grand piano and required 2,500 small electron tubes, far fewer than required by the earlier machines. The first-generation stored-program computers required considerable maintenance, attained perhaps 70% to 80% reliable operation, and were used for 8 to 12 years. Typically, they were programmed directly in machine language, although by the mid-1950s progress had been made in aspects of advanced programming. These machines included EDVAC and UNIVAC, the first commercially available computers.

Early in the 1950s two important engineering discoveries changed the image of the field, from one of fast but often unreliable hardware to an image of relatively high reliability and even greater capability. These discoveries were the magnetic-core memory and the transistor-circuit element. These new technical discoveries rapidly found their way into new models of digital computers; RAM capacities increased from 8,000 to 64,000 words in commercially available machines by the early 1960s, with access times of 2 or 3 microseconds.

These machines were very expensive to purchase or to rent and were especially expensive to operate because of the cost of expanding programming. Such computers were typically found in large computer centers operated by industry, government, and private laboratories staffed with many programmers and support personnel. This situation led to modes of operation enabling the sharing of the high capability available; one such mode is batch processing, in which problems are prepared and then held ready for computation on a relatively inexpensive storage medium, such as magnetic drums, magnetic-disk packs, or magnetic tapes.

When the computer finishes with a problem, it typically “dumps” the whole problem program and results on one of these peripheral storage units and takes in a new problem. Another mode of use for fast, powerful machines is called time-sharing. In time-sharing the computer processes many waiting jobs in such rapid succession that each job progresses as quickly as if the other jobs did not exist, thus keeping each customer satisfied. Such operating modes require elaborate “executive” programs to attend to the administration of the various tasks.
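A minimal sketch of the time-sharing idea just described (the job names and the size of the time slice are invented for illustration): each waiting job receives a short slice of processor time in rotation, so every job makes steady progress as if it had the machine to itself.

    from collections import deque

    # Toy round-robin time-sharing: each job gets a short slice ("quantum")
    # in rapid rotation; unfinished jobs rejoin the back of the queue.
    def time_share(jobs, quantum=2):
        queue = deque(jobs.items())  # (job name, remaining work units)
        while queue:
            name, remaining = queue.popleft()
            done = min(quantum, remaining)
            print(f"run {name} for {done} unit(s)")
            if remaining > done:
                queue.append((name, remaining - done))

    time_share({"payroll": 5, "inventory": 3, "report": 4})

The “executive” program of a real time-sharing system does essentially this scheduling, plus the bookkeeping of saving and restoring each job’s state between slices.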

In the 1960s efforts to design and develop the fastest possible computers with the greatest capacity reached a turning point with the completion of the LARC machine for Livermore Radiation Laboratories of the University of California by the Sperry-Rand Corporation, and the Stretch computer by IBM. The LARC had a core memory of 98,000 words and multiplied in 10 microseconds. Stretch was provided with several ranks of memory having slower access for the ranks of greater capacity, the fastest access time being less than 1 microsecond and the total capacity about 100 million words.

During this period the major computer manufacturers began to offer a range of computer capabilities and costs, as well as various peripheral equipment: such input means as consoles and card feeders; such output means as page printers, cathode-ray-tube displays, and graphing devices; and optional magnetic-tape and magnetic-disk file storage. These found wide use in business for such applications as accounting, payroll, inventory control, ordering supplies, and billing.

Central processing units (CPUs) for such purposes did not need to be very fast arithmetically and were primarily used to access large numbers of records on file, keeping these up to date. Most computer systems were delivered for the more modest applications, such as in hospitals for keeping track of patient records, medications, and treatments given. They are also used in automated library systems, such as MEDLARS, the National Medical Library retrieval system, and in the Chemical Abstracts system, where computer records now on file cover nearly all known chemical compounds.

The trend during the 1970s was, to some extent, away from extremely powerful, centralized computational centers and toward a broader range of applications for less-costly computer systems. Most continuous-process manufacturing, such as petroleum refining and electrical-power distribution systems, now use computers of relatively modest capability for controlling and regulating their activities. In the 1960s the programming of applications problems was an obstacle to the self-sufficiency of moderate-size on-site computer installations, but great advances in applications programming languages are removing these obstacles.

Applications languages are now available for controlling a great range of manufacturing processes, for computer operation of machine tools, and for many other tasks. Moreover, a revolution in computer hardware came about that involved miniaturization of computer-logic circuitry and of component manufacture by large-scale integration, or LSI, techniques. In the 1950s it was realized that “scaling down” the size of electronic digital computer circuits and parts would increase speed and efficiency and thereby improve performance if only manufacturing methods were available to do this.

About 1960, photo printing of conductive circuit boards to eliminate wiring became highly developed. It then became possible to build resistors and capacitors into the circuitry by photographic means (see printed circuit). In the 1970s vacuum deposition of transistors became common, and entire assemblies, such as adders, shifting registers, and counters, became available on tiny “chips.” During this decade many companies, some new to the computer field, introduced programmable minicomputers supplied with software packages.

The size-reduction trend continued with the introduction of personal computers, which are programmable machines small enough and inexpensive enough to be purchased and used by individuals (see computer, personal). Many companies, such as Apple Computer and Radio Shack, introduced very successful personal computers. Augmented in part by a fad in computer, or video, games, development of these small computers expanded rapidly. In the 1980s, very large-scale integration (VLSI), in which hundreds of thousands of transistors are placed on a single chip, became increasingly common.

During that decade the Japanese government announced a massive plan to design and build a new generation, the so-called fifth generation, of supercomputers that would employ new technologies in very large-scale integration. This project, however, was abandoned by the early 1990s (see artificial intelligence). The enormous success of the personal computer and resultant advances in microprocessor technology initiated a process of attrition among giants of the computer industry.

That is, as a result of advances continually being made in the manufacture of chips, rapidly increasing amounts of computing power could be purchased for the same basic costs. Microprocessors equipped with ROM, or read-only memory (which stores constantly used, unchanging programs), now were also performing an increasing number of process-control, testing, monitoring, and diagnostic functions, as in automobile ignition-system, engine, and production line inspection tasks. In the 1990s these changes were forcing the computer industry as a whole to make striking adjustments.

Long-established and more recent giants of the field were reducing their work staffs, shutting down factories, and dropping subsidiaries. At the same time competition in the hardware field intensified and producers of personal computers continued to proliferate, as did specialty companies, each company devoting itself to some special area of manufacture, distribution, or customer service. Computers continue to dwindle to increasingly convenient sizes for use in offices, schools, and homes. Programming productivity has not increased as rapidly, and as a result software has become the major cost of many systems.

However, programming techniques such as object-oriented programming have been developed to help alleviate this problem. The computer field as a whole continues to experience tremendous growth. As computer and telecommunications technologies continue to integrate, computer networking, electronic mail, and electronic publishing are just a few of the applications that have matured in recent years. The most phenomenal growth has been in the development of the Internet, with attendant ramifications in software manufacture and other areas.

IT Helps Keep Frito-Lay in the Chips

This paper’s intent is to answer the questions found at the end of the case study “IT Helps Keep Frito-Lay in the Chips.” We plan to identify the key input and output devices used in Frito-Lay’s information system. We will also discuss the steps that the IT professionals at Frito-Lay took to create a system that would be easy to use, as well as the steps we would take as managers to introduce employees to the information system. The question of how Frito-Lay’s information system will help it achieve its goals will also be explored.

At Frito-Lay they use a variety of input devices, among them keyboards, mice, terminals, trackballs, and bar-code scanners. To understand fully the extent they have gone to at Frito-Lay, the types of input devices need to be examined. One of their key input devices is the “brick.” The “brick” is a handheld computer, which will be discussed at greater length in the next paragraph. The next important piece of input hardware is the receiving end of the “uplink.” The “uplink” transfers data from the truck to the mainframe, where the data is entered. Once the mainframe has the data, it can be analyzed.

Analyzing the data includes determining what replacement stock to order. The “user-friendly graphical interface” is another important input tool that Frito-Lay uses. It allows employees with very little computer experience to work with computers. The bar-code scanners are optical code readers. These devices read the universal product code (UPC) from the package (the standard check-digit rule behind the UPC is sketched below). Output devices include visual displays (monitors), printers, and transmission devices linked to satellites.
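The UPC that such scanners read carries a self-checking digit, which is what lets a scanner detect a misread. The computation below is the standard, public UPC-A check-digit rule; nothing in it is specific to Frito-Lay’s system:

    # Standard UPC-A check digit: triple the digits in the odd positions,
    # add the even-position digits, and choose the check digit that brings
    # the total up to a multiple of 10.
    def upc_check_digit(first11: str) -> int:
        digits = [int(c) for c in first11]
        total = 3 * sum(digits[0::2]) + sum(digits[1::2])
        return (10 - total % 10) % 10

    print(upc_check_digit("03600029145"))  # 2, so the full code is 036000291452

A scanner that computes a check digit different from the one printed in the twelfth position knows the scan failed and can prompt for a rescan.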

The monitors are found on various computers, from the handheld to the typical PC that most of us are familiar with. Monitors are probably the most visible output device at Frito-Lay. They undoubtedly come in a wide range of sizes, colors, graphics standards, resolutions, and bit-mapping capabilities. Like the monitors, the printers are found in various roles and places. In the truck there is a printer that is used for a localized effort, producing an itemized sales ticket. This specially designed printout is geared toward spotting problems and targeting sales, two very important business activities where success is calculated “bag by bag.”

Throughout the company there are printers of a more conventional nature. It would be expected to find impact printers as well as nonimpact printers. The nonimpact variety is more common today; however, you might find the impact variety in the truck, where the need for multiple copies might make it preferred. Impact printers you might encounter include dot-matrix, character, and line printers. The nonimpact devices include laser, ink-jet, and thermal printers. The company may also use plotters, which are handy for charts and graphs, line drawing, and blueprints. Another device that the company uses is the uplink.

The uplink allows the truck to transmit real-time information back to the mainframe for evaluation. The IT professionals at Frito-Lay created a system that would be easy to use. First they created a color-coded chart for every region of the country. When a region showed red, it meant a loss of sales. This helped them track down their problems when sales were eroded in specific areas. They also made it easy to input the raw information. The information came from two sources. The primary source was the handheld computer, the “brick.” This device is carried by 12,000 employees who sell and deliver Frito-Lay products to the stores.

Once inside the store they log inventory, determine replacement stock, and determine promotional discounts (a toy sketch of this bookkeeping follows below). At the truck the computer is plugged into a printer that produces an itemized sales ticket. All the sales information is transmitted at the same time via satellite to the mainframe. The second way that the raw data is collected is by the bar-code scanners that they have in 400,000 stores. Within a week they can break down sales of corn chips by brand in a region or a specific store. They can also judge other products or review promotional events.
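As a purely hypothetical sketch of the bookkeeping just described (the field names and the restock rule here are invented for illustration, not taken from Frito-Lay’s actual software), the replacement-stock step might look like this:

    from dataclasses import dataclass

    # Hypothetical shelf-count record a handheld unit might log in a store.
    @dataclass
    class ShelfCount:
        upc: str
        on_shelf: int
        target_stock: int

    # Order enough of each product to bring the shelf back up to target.
    def replacement_order(counts):
        return {c.upc: max(c.target_stock - c.on_shelf, 0) for c in counts}

    counts = [ShelfCount("036000291452", 4, 12),
              ShelfCount("036000123456", 9, 8)]
    print(replacement_order(counts))  # {'036000291452': 8, '036000123456': 0}

Records like these, accumulated over a route and transmitted by satellite, are what give the mainframe its store-by-store picture of sales.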

Frito-Lay has teamed up with Lotus and designed a graphical interface that is easy to use, even for employees who are not computer literate. If we were managers in charge of training at Frito-Lay, we would introduce new employees to the handheld computers on site, in order for them to see firsthand how the system works. After this demonstration, we would show the new employees how the system reports the information to headquarters so they would understand the importance of the data they will be gathering. For employees using the graphical interface, we would show them the ease of the system and let them practice at a real workstation.

It would be necessary for them to understand how to prepare reports using the system and how to interpret that information. If an employee has limited computer skills, then a class introducing basic computer skills would be implemented, covering how the computer works, the importance of having the right hardware and software, and a basic overview of how the computer uses random access memory (RAM), the hard drive, external devices, modems, printers, and so on. Frito-Lay should have no trouble securing a position in the chip arena. They have developed a system that reacts quickly to an ever-changing marketplace.

Frito-Lay will achieve and maintain its goal of creating an entrepreneurial atmosphere that allows employees to act swiftly and react to changing market conditions. From the color-coded maps to the user-friendly graphical interfaces and handheld computers, Frito-Lay has created such an atmosphere. The managers are able to react quickly to the analyzed data. The salespeople have the latitude to apply price cuts when necessary. All this quick assimilation of data and the appropriate response will assure that Frito-Lay remains a tough competitor.

Like many corporations, Frito-Lay wants to create an entrepreneurial atmosphere that allows employees to take charge and react swiftly to changing market conditions. How will Frito-Lay’s information systems help it achieve this goal? Frito-Lay’s system is perfect for an entrepreneurial atmosphere. For example, Frito-Lay will be able to offer price cuts if the reported information shows something cutting into Frito-Lay’s profits. Also, Frito-Lay’s sales managers have access to the system’s “Briefing Book” and use maps and charts to show grocers why their brands offer more profit potential.

This up-to-date information will let grocery managers see that Frito-Lay is “doing their homework” on the competition and is willing to give the grocers price breaks if necessary. The new system has helped Frito-Lay reduce the amount of returned products by preventing overstock, which can be frustrating for grocers and causes greater costs to Frito-Lay. Frito-Lay has developed a system, through today’s technology, that allows it to be successful and profitable. While the handheld “bricks” cost the company $40 million to develop, the company claims that it saves that much or more every year in “stales.”

Computer Science at the University of Arizona

The University of Arizona’s Computer Science Department is a quality research program. The most recent National Research Council rankings place the department 33rd out of 108 PhD-granting institutions nationwide, despite the fact that we are a comparatively small department. In addition, we are the best Computer Science department of our size among publicly funded universities, with the highest number of citations (references) per faculty member, and 17th overall in the number of publications per faculty member.

Another measure of our research productivity is awards of external research funding in excess of $2.5 million from such prestigious sources as DARPA, INTEL, and NSF, including our fourth 5-year Research Infrastructure award, granted in 2000. Our faculty serve on the editorial boards of a variety of journals, serve on program committees, publish books, and serve as fellows and chairs of organizations within the ACM and IEEE.

In terms of teaching, our undergraduate and graduate curriculum provides a timely and well-rounded view of the field, with special emphasis on the practical aspects of building useful software. Our strengths lie in the traditional mainstream areas of computer science: algorithms, programming languages, operating systems, distributed computing, networks, databases, and theory of computing. We also offer courses in some subfields: graphics, artificial intelligence, and the software aspects of computer architecture.

The department’s programs prepare students for positions in the design and development of computer systems and applications, in business and industry, and for scientific positions in industrial or academic computing research. The Computer Science department was established in 1973 as a graduate department offering masters and doctoral degrees. An undergraduate program was initiated in 1989. We currently have 15 faculty members, 3 lecturers, 5 technical support staff, and 4 research programmers affiliated with specific funding.

The graduate program contains 61 MS students and 22 PhD candidates; the undergraduate program has 205 bachelors students and 400+ pre-majors. There are currently three computing laboratories available: Harvill 332b (houses a 31-station Pentium III based Windows 2000 instructional lab), Gould-Simpson 228 (contains a 50-station Xterm & Pentium III based Windows 2000 instructional lab), and the Research Lab in Gould-Simpson 748/756.

Students receive accounts both on the main instructional machine, Lectura (a multiprocessor Sun SparcServer running the Solaris operating system), and on the Windows 2000 network. All systems have access to 100Mb switched Ethernet connections and direct Internet connectivity. The Gould-Simpson Research Lab contains numerous Pentium III Windows 2000/Linux systems, specialized printers, graphics devices, and PC clusters. Our largest computing cluster is a 64-node Pentium cluster; our newest is a 16-node Pentium cluster supporting nonblocking, switched Gigabit Ethernet.

Two Network Appliance file servers with over 200 GB of available file storage provide shared data across systems. The Research Lab is used by graduate students and faculty for research projects. The Computer Science department is located on the 7th floor of Gould-Simpson, with offices on the 8th floor and on the 3rd floor of Bio Sciences East, and labs on the 2nd floor of Gould-Simpson and the 3rd floor of the Harvill building. Our Academic Services Office is located in room 725 of Gould-Simpson.

How to be Dumb

Now that Alan Cooper’s personas have become famous, one of the most prominent and well-known goals for user interface designers is not to make the user look stupid. This goal isn’t really new because we all know of situations where we or someone else looked horribly stupid when trying to do something on a computer. Even the smartest women and men can look stupid at a computer if they don’t know which button to click, menu command to call, or key to press – defenseless and exposed to the laughter and ridicule of other, less knowledgeable people.

I have come across so many people who did not dare touch a computer in my presence, either because they feared destroying something on the computer or because they were afraid they would look stupid. As this problem is a really big issue for computer users, one of the most prominent and noble research areas for usability people should be to investigate how computers can avoid making people look stupid.

Figure 1: Like so many other personas, Gerhard – my personal persona – does not want to look stupid when working at the computer

Computers are Intransparent

In the early days, computers were totally intransparent: there were just some switches and light bulbs on the computer’s front panel that served for communication with the “knowledgeable.” From time to time, the computer spit out a punched tape, which again required some machine to decode it. (The “experts,” however, could even decode the tape just by looking at it.) Later, computers printed out some more or less cryptic characters, and even later, the user communicated with the computer via keyboard, monitor and mouse – that’s the state we have today. But however sophisticated these devices are, we still look into the computers’ inner workings through a “peephole” called a monitor.

Do we really understand what state the computer is in, which commands it expects and what its cryptic error and system “messages” mean? No – computers often still leave us in the dark about what they expect from us, what they want us to do and what they can do for us. So, it’s no wonder that even the smartest people can look stupid in front of a computer, but even ordinary people like you and me can too.

Computers are Rigid Machines

As we all know, computers are mindless, rule-following, symbol-manipulation machines. They are just machines, though ruled not by the laws of mechanics but by the rules of logic and by the commands of their programs. Nevertheless, there is no inbuilt flexibility in computers; they just react according to the commands that have been programmed into them.

There have been long debates in the past over whether artificial intelligence based on symbol manipulation is possible. Some people have proven that it is, others have proven that it is not – in the end, this issue seems to be a matter of personal belief. So, let’s return to “real life.” We have all had the experience that computers are rigid in so many ways: they issue error messages, they do not find a file or search item if you misspell a name, they crash if fed a wrong command. This stubbornness drives many users crazy: they feel stupid because they can’t remember even the simplest cryptic command. And they feel inferior to those “logical” machines because they are “fuzzy” human beings who commit so many errors.

Computers Can Cheat – But Not so Well…

But even if computers exhibit some flexibility, it is because farsighted programmers have programmed this flexibility into them. Often these programmers are not farsighted enough, or do not take human characteristics into account, such as the desire for a certain stability of the work environment. For example, there is a current trend to make computer systems adaptive in order to make them easier to use. The adaptive menus in the recent Microsoft applications are an example of this approach: the menus adapt their appearance according to their usage – with the result that people like me are puzzled each time they open a menu because it always looks different. So, today’s computers are even narrow-minded when they try to be flexible. They still make people look stupid, for example because the system changes its look and behavior in unpredictable ways.

Computers Are too Complex and Complicated for their Users

One of the arguments, often put forward by developers of complex software, is that it’s not the computers that are stupid but the users. Well, I let this stand as it is, but of course there are many occasions where average users are overwhelmed by the complexity of their computer hardware and software. There are so many things you have to remember and think of, far more than in a car or household. So, if you forget to bear in mind one important detail, all your efforts in trying to impress your friends or colleagues with how well you can master computer technology may be ruined within a second.

Let me illustrate this point with an example. Lately I took some photos of my friends with my awesome digital camera, actually a computer in itself. My friends were enthusiastic about the photos. OK, I said, and now I will print these images in the blink of an eye. My friends’ enthusiasm increased and I received many “oh’s” and “ah’s” because of how fast the process of taking and printing a photo could go. But then there was a problem – I had forgotten to reconnect the printer to the USB port because I had used my scanner on that port. However, when I connected the cable, the computer did not recognize the printer, despite all the hype and promises surrounding USB. Finally, I had to reboot the computer. So the blink of an eye turned into a quarter of an hour, and my friends had plenty of time for a couple of jokes about computers in general and about my “well prepared” equipment in particular.

Computers Can Be Mean and Wicked

Many of us can tell stories of evil computer experiences: the program that crashes shortly before you want to save an important mail or document that you worked on for several hours, or the printer that jams in the middle of printing the slides for an urgent presentation. Computers can even have allies in their wickedness. Again and again, someone prints a 100-page document with lots of graphics when you are in a hurry and need to print a paper shortly before a meeting. I could continue with such stories for hours. So, how do computers know when the “right” moment has come to break down? This question still remains a mystery to me and requires further investigation. From Reeves and Nass’s book The Media Equation, we know that many people attribute human characteristics to computers and often treat them like humans – especially in those breakdown situations. On the other hand, we know that computers are rigid machines and typically do not care about human reactions and emotions. How can we resolve this contradiction? Some people believe that computers just follow the “law of maximum meanness,” similar to entropy in thermodynamics; others still believe there are demons inside computers.

I do not know who is right, but at least I have some examples to offer of how “wicked” computers make people appear stupid. Yes, computers can make your life hard at times, and they know well when the time is right – and you become a laughing stock. Presentations are a good time to make people look stupid because the presenters are in a hurry and nervous, since the presentation is supposed to make them look good. There is also an audience that is often grateful for the mishaps. Think of the presentations where the server for the demo is not available although it was available just a few minutes before; the hurried activities that follow are well suited for entertaining the audience. Or you want to go to the next slide in a presentation but it does not appear. You click a second time, and – oops – now you are beyond the slide, and the audience has some fun.

Conclusion

What can we do so that computers cannot make us look stupid? For some people, the simple solution is not to use computers at all. However, there are many people who have to use computers in their daily work. Not everyone is old enough for early retirement. So, my advice is to create computer applications that take human characteristics, human strengths, and human weaknesses into account. As long as we require humans to adapt to the logic of machines, we will still stumble into situations where people look stupid while working with computers. But if we strive hard we will one day arrive at computer programs that accept users as human beings with all their human limitations.

Computer Crime: Prevention and Innovation

Since the introduction of computers to our society, and in the early 80s the Internet, the world has never been the same. Suddenly our physical world got smaller and the electronic world set its foundations for an endless electronic reality. As we approach the year 2000, the turn of the millennium, humanity has already well established itself in the Information Age. So much so, in fact, that as a nation we find ourselves out of a service economy and into an information-based economy. In a matter of only a few years almost all systems are run by computers in some way, shape, or form.

We depend on them for everything. Even the smallest malfunction or glitch in a system could now cause unfathomable amounts of trouble in everything from riding the bus, to having access to your money, to getting your prescription at the pharmacist’s. Furthermore, Icove suggested that, with the price of home computers that work faster and store more memory going down every year due to competition in the market, it is estimated that by the year 2011 nearly every American home will have a PC with instant access to the Internet.

With an increase in users every day and new businesses taking advantage of the perks of an alternate electronic world, this information dimension will only get bigger and more elaborate, provide more services, and leave society as a whole more and more dependent on it. However, even in an artificial environment such as cyberspace, it appears mankind cannot escape its somewhat overwhelming natural attraction to wrongful behavior or criminal tendencies. In turn this alternative dimension has been infected with the same criminal behavior that plagues our physical reality.

The information age has opened the doors for antisocial, smart, and opportunistic people to find new and innovative ways to commit old crimes. These people are called hackers. What is the social problem? Computer crime is the official name given to this criminal phenomenon driven by hackers. Although a solid definition of computer crime has yet to be agreed upon by scholars, it is described in a functional manner encompassing old crimes such as forgery, theft, mischief, fraud, and manipulation or altering of documents, all of which are usually subject to criminal sanctions everywhere.

Also included in the description of computer crime is the unauthorized invasion or trespass of the database systems of private companies or government agencies. According to Schmalleger, computer crime is also described as any violation of a federal or state computer crime statute. It has been suggested that the history of computer crime and hacking started in the late 1950s when AT&T first implemented its interstate phone system and direct distance dialing (DDD). Manipulating the way the DDD system worked, in order to make free long-distance calls, was the objective of this crime.

What the early hackers did was duplicate the particularly distinguishable tones that the phone companies used in their early switching devices enabling them to fool the system into thinking that the customer has paid. A person who used any number of devices available for manipulating switch systems in order to explore or exploit is called a phone phreak. Who are the perpetrators? Computer crime, like any other criminal activity, is a product of opportunity, desire, and ability. Anyone who owns a computer or has access to a computer has the opportunity to misuse its systems to engage in criminal activity.

Researchers found that the average age of most computer crime defendants is between 21 and 23. The second highest age group of computer crime defendants is between 18 and 20. They suggested that hackers could be classified into the following groups based on their psychological characteristics: pioneers, scamps, explorers, game players, vandals, and addicts. Among these six classifications of hackers, hard-core computer criminals are mostly found among the explorers, game players, vandals, and addicts.

Pioneers are simply captivated by technology and explore without necessarily knowing what they are getting into or without specific criminal intent. Scamps are curious and playful hackers who intend no real harm. Explorers, on the other hand, specifically look for highly sophisticated and secured systems that are relatively far from their physical location. The further away and the more sophisticated the systems being broken into, the more exciting it is for the explorer. For the game player hacker, hacking is a sport that involves the identification and defeat of software, system, and copyright protections.

Vandals are hateful, angry hackers who strike harshly, randomly, and without any apparent personal gain other than knowledge, responsibility, and credit for such acts. Finally, addicts are the stereotypical nerdy individuals who are literally addicted to computer technology and hacking itself, as an addict is to drugs. Given these obsessive behavioral patterns, it is not surprising that many of these types of hackers are coincidentally hooked on illicit drugs too. Who does the problem affect? Since the 1950s, technology has skyrocketed, raising the capability of computers along with it.

In turn computer crime has also evolved at the same rate, from mere phone phreaking to a complex category of crime all to itself. So much so, in fact, that computer crime is now a global concern due to the financial burden it places on its victims. The primary victims of computer crime are the companies and government agencies whose database systems are invaded. By breaking in, the hacker hopes to cause some sort of loss or inconvenience and either directly or indirectly benefit themselves or a group they belong to or have a strong affiliation with.

Secondary victimization is what happens to the public when they begin to fear that no information or computer system that we rely on is ever truly safe from hackers. In short, computer crime affects the entire world. Computer use, misuse, and abuse: computer use is grouped into three categories, namely proper computer use, improper computer use, and criminal misuse. Computer abuse, although not illegal, points to moral issues of concern for society as a whole. This refers to legal but restricted web sites and activities belonging to subcultures that do not share the traditional value structures that the general public adheres to.

Due to the existence of these groups, the information dimension has also been flooded with misuses such as sexually explicit material and sensitive information that, in my opinion, the general public should not have such easy access to. A few examples of sensitive information are bomb-making blueprints, where to get materials for such a device, and military tactical procedures that outline the placement and positioning of such devices to create the most damage. In the category of computer misuse are the creation and deployment of viruses and software piracy.

A virus can lock a system into malfunction mode and can cause permanent damage to a computer’s hard drive and/or memory, if the computer can be rid of the virus at all. More than $13 billion is lost worldwide to software piracy. Software piracy refers to the bootlegging of software programs available for computers. Using software of this nature not only takes business away from the makers of the software; it also, first, takes away from the original intentions and capabilities of the software and, second, carries a greater risk of passing a virus on to the computer it is installed on.

How, when, and where does the problem occur? In traditional criminal events, when one is looking for the roots or causes of any particular physical crime, the physical environment, social environment, psychological and mental capacity, and economics all play major roles. Likewise, physical crime happens in real time and in a physically tangible space. However, computer crime is more complicated in the sense that although real-time measurements are used to record the transactions between computers, the location of the criminal offence is in cyberspace.

Cyberspace is the computer-produced matrix of virtual possibilities, including online services, wherein human beings interact with each other and with technology itself. Although physical environment, social environment, psychological and mental capacity, and economics are important factors in computer crimes, the specific causes of computer crime have to be dealt with on a case-by-case basis. What makes one person engage in computer crime can be different from what makes another person commit the very same criminal act.

However, what remains the same in all computer crimes is the dependence on modems, telephones, and some sort of unauthorized entry into a government, commercial, or private database system. For example, on September 16, 1999 in Dallas, Texas, United States Attorney Paul E. Coggins reported that, following an aggressive investigation by the Federal Bureau of Investigation and a prosecution by Assistant United States Attorney Matthew Yarbrough, Corey Lindsly and Calvin Cantrell were sentenced in federal court by Chief District Judge Jerry Buchmeyer.

The two were convicted of selling stolen long-distance calling card numbers that had been illegally obtained by hacking into the computer systems of Sprint Corporation, Southwestern Bell, and GTE. Calvin Cantrell, age 30, of Grand Prairie, Texas, was sentenced to two years’ imprisonment and ordered to pay $10,000 to the victimized corporations, and Corey Lindsly, age 32, of Portland, Oregon, was sentenced to forty-one months’ imprisonment and ordered to pay $10,000 to the victimized corporations.

Through the investigation, Corey Lindsly and Calvin Cantrell were identified as two major ringleaders in a computer hacker organization known as the Phone Masters. This group of hackers very professionally structured their attacks on the computers through teleconferencing and made use of the encryption program PGP to hide the data that they exchanged with each other.

In addition to the crimes that Corey Lindsly and Calvin Cantrell were convicted of, further evidence showed that the group had also infiltrated computer systems owned by utility providers, credit reporting agencies, and state and federal governmental agencies, including the National Crime Information Center (NCIC) computer. The definitive mission of the Phone Masters organization was to one day take control of all telecommunications infrastructure in the nation.

In the Phone Masters example, an online social environment, psychological and mental capacity, and economic gain were underlying factors in the causes of the criminal acts that Corey Lindsly and Calvin Cantrell ended up pleading guilty to. In this example economic gain was the centerpiece of motivation. However, economic gain is not always a reason for computer crime. Sometimes people, particularly younger people between the ages of 12 and 35, partake in computer crime due to a lack of power or control in their real lives.

These people then turn to the electronic realm as an alternative to the real world so they can overcome feelings of personal inferiority by challenging machines. These people develop self-worth and acknowledgment for their criminal actions from the peers of their subculture. Research found that these people refer to themselves as cyberpunks, and they roam independently or in groups through cyberspace in search of vulnerable systems, much like street gangs in search of likely victims or an opportunity to partake in any criminal action just for kicks.

For example, in Boston, Massachusetts, on October 18, 1998, federal criminal charges were brought against a juvenile computer hacker. According to United States Attorney Donald K. Stern and Acting Special Agent in Charge Michael T. Johnston of the U.S. Secret Service, in March of 1997, on two separate occasions, the defendant deployed a series of commands from his personal computer at home. These virus-like commands shut down an essential telephone company computer and in turn killed telephone communications to the Worcester airport.

This caused vital services to the FAA control tower to be disabled for a total of twelve hours. He deployed the first series of commands at 9:00 AM. Emergency teams scrambled frantically for six hours until they were able to temporarily fix the malfunction, only to have it knocked down again by the defendant just as soon as it was fixed. He deployed his second series of commands at 3:00 PM. It was 9:00 PM before the Worcester airport and its surrounding communities were able to set up telephone communications again.

In the course of the investigation done by the U.S. Secret Service, additional evidence showed that the defendant had also electronically broken into a pharmacist’s computer in a Worcester-area pharmacy store and copied patient records. This information was accessible by modem after hours, when the pharmacy was closed, so that headquarters offices could update patient records at any time. Even though the juvenile was in a view-only program and could not alter the prescriptions, and no evidence was found that he dispersed the information, this was still viewed as a serious invasion of privacy and the nature of his actions was found to be criminal.

As a result of a plea-bargained decision, the juvenile received two years’ probation, was ordered to complete 250 hours of community service, and must pay restitution to the telephone company; in addition, he was made to surrender all of the computer equipment used during his criminal activity. Furthermore, during his sentence he may not possess, use, or buy a modem or any other means of remotely accessing computers or computer networks, directly or indirectly.

According to the Department of Justice, the charges in this particular case are vitally important because they are the first ever to have been brought against a juvenile by the federal government for commission of a computer crime. The juvenile was not publicly named due to federal laws concerning juvenile cases. A Realistic Theory for the Future Prevention of Computer Crime: Hollinger suggested that computer crime is relatively new compared to part-one index crimes. It is often seen as a method or medium by which to commit particular types of crimes rather than a crime all to itself.

Furthermore, because most computer crimes are executed in a non-violent manner, the public unwittingly views them as less serious than crimes that involve violence. However, even if they are committed in a non-violent way, on average hackers cost private companies and government agencies worldwide billions of dollars every year. The consumers and the taxpayers are the ones left to make up the losses in the long run. Whether violence was used in these offences or not, it is surely evident why this phenomenon concerns us all.

The problem with dealing with computer crime is that most of the response is reactive instead of proactive. Also affecting the way we combat computer crime is the way that society views it and the way society punishes offenders. To me, it appears that the efforts against computer crime follow a historical view of policing and crime fighting in which a lot of emphasis was put on law enforcement, dealing with the offender after the crime has been committed, instead of on community cohesion, setting up community-based organizations to raise awareness and influence criminal behavior before it turns into criminal activity.

The evolution that policing has gone through over the years, from enforcement to prevention, should serve as a good model to follow in preventing and dealing with computer crime instead of just reacting to it. This prevention approach would be most successful if done in three parts. Part one would involve the media, the government, and the businesses or groups that have fallen victim to hackers in the past. Even though the media is often accused of damaging public perception on issues, it can also be used to help shift or steer public perception in the right direction.

The media’s responsibility would be to bring the public-interest message to the masses through the use of TV and radio commercials, newspaper writing, pamphlets, highway billboards, Internet postings, mailing lists, and series of public lectures or conferences. Primarily, the victimized companies and groups, along with the government agencies involved in the pursuit and conviction of the offenders, would compose the message that the media would be delivering.

The message would entail an appropriate perspective on computer crime, examples of what attitudes society should or should not have toward computer crime, a view of the many government agencies that are on the front line of the computer crime battlefield, and finally a heightened, uniform guideline for the punishment of computer crime. Part two of my computer crime prevention approach would involve taking away much if not all of the discretion involved in the sentencing of people convicted of computer crime.

I feel that because the outcomes of computer crime can be so financially detrimental, harsh mandatory uniform sentencing guidelines should be enacted. This would let offenders know that the public will not tolerate computer criminal activity and that they should not expect leniency if convicted of computer crimes. This method of sentencing is not liked by many judges because it takes away from their judicial powers and does not allow the judge to decide what he or she thinks is best for the community, district, city, or state they preside over. Hackers like it even less because it seals their fate if they are caught and convicted of computer crime.

Part three would simply reinforce part two and provide that the punishment for computer criminals be harsher and involve more time in a federal prison than the average sentence being handed down today for computer crime offenses. A perfect example of the way this three-part method works is the similar approach used in Project Exile, where the media helps push the idea that the government will automatically give people five years in a federal institution, added to the sentence of any other crime they may be found guilty of, if they are found in possession of an illegal handgun.

Project Exile has helped drastically cut the number of people carrying and using illegal handguns, and a similar project for computer crime would, in my opinion, have similar results in impacting existing computer criminals and deterring would-be hackers from partaking in computer crime.

Programming Under The Wizard's Spell

The computer is a tool that has become indispensable to the modern family and company. In flourishing so successfully, the computer has passed from being incredibly complex and unusable to anyone who was not well versed in its intricacies, to being consumer-oriented and user-friendly. In Ellen Ullman's essay, Programming Under The Wizard's Spell, she attempts to convince the reader that the computer has been oversimplified to the point of no return.

The simplification of the computer made it more user-friendly and therefore more appealing to customers, but this only blinded people into using the computer the way corporate America wanted them to: using without understanding. First, this essay is a hybrid, a mix of the expository essay and the comparison-and-contrast essay. In the first part she attempts to examine the differences between various Microsoft operating systems and the Unix operating system.

Then the author tries to answer the questions "What is it?" and "What is it not?" In paragraph 3, Ullman states: "Unix always presumes that you know what you're doing," and in referring to Microsoft she describes it as "Consumer-oriented, idiot-proofed, covered by its pretty skin of icons and dialog boxes […]".

She has tactfully drawn the boundaries between the two products, which start to take on the appearance of the good and the corporate-induced bad. Ullman has now implied her goal: she wishes to convince the reader of her convictions about the new computerised corporate America. She also uses simple wording, narration, and a somewhat comic anecdote of her experiences, effectively leading the reader into drawing negative conclusions about the new consumer-oriented computer.

She does not truly attempt to be objective but gives that illusion by briefly stating in the first paragraph: "a reasonable, professional choice in a world where Microsoft platforms are everywhere". This statement leads the reader to believe that Ellen Ullman is weighing the good and the bad. Furthermore, once finished, the reader can only conclude that there were so many more bad things than good about Microsoft that it is most likely a bad product hinged on reducing our computing freedom.

This conclusion is of course the only one possible for anyone who reads the essay. She made it this way without actually expressing the opinion herself; she is merely telling a story littered with an unfavourable tone that seeps out of the text through her choice of wording: "My computer. I've always hated this icon". Ullman infantilizes Windows in order to ridicule it and further convince the reader of the negativity of these sorts of programs.

Ullman's purpose in writing her essay was to warn the reader of the dangers that may ensue from the oversimplification of such a complex machine, and the title she chose conveys her convictions well. But as she explains her misfortunes with Windows, she uses certain terms and expressions that not just anyone can understand; she wrote this essay for an audience of fellow computer enthusiasts, whom she tries to convince of the perils of forgetting how a computer really works, not just how the operating system works.

In conclusion, Ellen Ullman's ultimate point was that corporate America saw the complex computer as a wild beast inaccessible to most, so they tinkered with it until they finally made it the new user-friendly computer system, man's new best friend. But in doing so they destroyed its instincts. Her vision of the industry is obviously a personal one, and through her essay she ultimately succeeds in persuading the reader that her convictions are almost fact. This is a good example of how one's opinions can be successfully diffused to others.

Computer Crime Report

Advances in telecommunications and in computer technology have brought us to the information revolution. The rapid advancement of telephone, cable, satellite and computer networks, combined with technological breakthroughs in computer processing speed and information storage, has led us to the latest revolution, and also the newest style of crime, "computer crime". The following information will provide you with evidence that, without reasonable doubt, computer crime is on the increase in the following areas: hackers, hardware theft, software piracy and the information highway.

This information is gathered from expert sources such as researchers, journalists, and others involved in the field. Computer crimes are often in the news. When bank robbers are asked why they rob banks, they reply, "Because that's where the money is." Today's criminals have learned where the money is. Instead of settling for a few thousand dollars in a bank robbery, those with enough computer knowledge can walk away from a computer crime with many millions. The National Computer Crimes Squad estimates that between 85 and 97 percent of computer crimes are not even detected.

Fewer than 10 percent of all computer crimes are reported, mainly because organizations fear that their employees, clients, and stockholders will lose faith in them if they admit that their computers have been attacked. And few of the crimes that are reported are ever solved. Hacking was once a term used to describe someone with a great deal of knowledge about computers. Since then the definition has seriously changed. In every neighborhood there are criminals, so you could say that hackers are the criminals of the computers around us.

There has been a great increase in the number of computer break-ins since the Internet became popular. How serious is hacking? In 1989, the Computer Emergency Response Team, an organization that monitors computer security issues in North America, said that it had 132 cases involving computer break-ins. In 1994 alone it had some 2,341 cases, nearly an eighteen-fold increase in just five years. An example is 31-year-old computer expert Kevin Mitnick, who was arrested by the FBI for stealing more than $1 million worth of data and about 20,000 credit card numbers through the Internet.

In Vancouver, the RCMP have charged a teenager with breaking into a university computer network. There have been many cases of computer hacking; another one took place here in Toronto, when Adam Shiffman was charged with nine counts of fraudulent use of computers and eleven counts of mischief to data, which carries a maximum sentence of 10 years in jail. We see from the above information that hacking has been on the increase. With hundreds of cases every year dealing with hacking, this is surely a problem, and a problem that is increasing very quickly.

Ten years ago hardware theft was almost impossible because of the size and weight of the computer components. Computer components were also expensive, so many companies would have security guards to protect them from theft. Today this is no longer the case; computer hardware theft is on the increase. Since the invention of the microchip, computers have become much smaller and easier to steal, and now, with portable and laptop computers that fit in your briefcase, it's even easier.

While illegal high-tech information hacking gets all the attention, it's computer hardware theft that has become the latest in corporate crime. Access to valuable equipment skyrockets and black-market demand for parts increases. In factories, components are stolen from assembly lines for underground resale to distributors. In offices, entire systems are snatched from desktops by individuals seeking to install a home PC. In 1994, Santa Clara, Calif., recorded 51 burglaries. That number doubled in just the first six months of 1995.

Gunmen robbed workers at an Irvine, Calif., computer parts company, stealing $12 million worth of computer chips. At a large advertising agency in London, thieves came in over a weekend and took 96 workstations, leaving the company to recover from an $800,000 loss. A Chicago manufacturer had computer parts stolen from the back of a delivery van as he was waiting to enter the loading dock. It took less than two minutes for the doors to open, but that was enough time for thieves to get away with thousands of computer components.

Hardware theft has certainly become a problem in the last few years; with cases popping up each day, we see that hardware theft is on the increase. As the network of computers gets bigger, so will the number of software thieves. Electronic software theft over the Internet and other online services costs US software companies about $2.2 billion a year. The Business Software Alliance surveyed a number of countries in 1994, producing piracy estimates for 77 countries and totaling more than $15.2 billion in losses.

Dollar-loss estimates due to software piracy in the 54 countries surveyed last year show an increase of $2.1 billion, from $12.8 billion in 1993 to $14.9 billion in 1994. An additional 23 countries surveyed this year brings the 1994 worldwide total to $15.2 billion. With such big numbers, we can see that software piracy is on the increase. Many say that the Internet is great, and that is true, but there is also a bad side of the Internet that is hardly ever noticed.

Crime on the Internet is increasing dramatically. Many act as if copyright law, privacy law, broadcasting law and laws against spreading hatred mean nothing. There are many different kinds of crime on the Internet, such as child pornography, credit card fraud, software piracy, invasion of privacy and spreading hatred. There have been many cases of child pornography on the Internet, mainly because people find it very easy to transfer images over the Internet without getting caught.

Child pornography on the Internet has more than doubled since 1990; an example of this is Alan Norton of Calgary, who was charged with being part of an international porn ring. Credit card fraud has caused many problems for people and for corporations that have credit information in their databases. With banks going on-line in the last few years, criminals have found ways of breaking into databases and stealing thousands of credit cards and information on their clients.

In the past few years thousands of clients have reported millions of transactions made on credit cards that they do not know of. Invasion of privacy is a real problem with the Internet, and it is one of the things that turns many away from it. Now, with hacking sites on the Internet, it is easy to download electronic mail (e-mail) readers that allow you to hack servers and read incoming mail from others. Many sites now have these e-mail readers, and since then invasion of privacy has increased. Spreading hatred has also become a problem on the Internet.

This information can be easily accessed by going to any search engine, for example http://www.webcrawler.com, and searching for "KKK"; this will bring up thousands of sites that contain information on the "KKK". As we can see, with the freedom on the Internet, people can easily incite hatred. After reading that information we see that the Internet has crime going on of all kinds. The above information provides you with enough proof that without doubt computer crime is on the increase in many areas such as hacking, hardware theft, software piracy and the Internet.

Hacking can be seen in the everyday news, where big corporations are often victims of hackers. Hardware theft has become more popular because of the value of computer components. Software piracy is a huge problem; as you can see, about $15 billion is lost each year. Finally, the Internet is both good and bad, but there's a lot more bad than good, with credit card fraud and child pornography going on. We see that computer crime is on the increase and something must be done to stop it.

Hacking to Peaces

The "Information Superhighway" possesses common traits with a regular highway. People travel on it daily and attempt to get to a predetermined destination. There are evil criminals who want to violate citizens in any way possible. A reckless driver who runs another off the road is like a skilled hacker. Hacking is a way to torment people on the Internet. Most of the mainstream hacking community feel that it is their right to confuse others for their entertainment. Simply stated, hacking is the intrusion into a computer for personal benefit.

The motives do not have to be focused on profit, because many do it out of curiosity. Hackers seek to fulfill an emptiness left by an inadequate education. Do hackers have the right to explore wherever they want on the Internet (with or without permission), or is it the right of the general population to be safe from their trespasses? To tackle this question, people have to know what a hacker is. The connotation of the word 'hacker' is a person that does mischief to computer systems, like computer viruses and cybercrimes.

"There is no single widely-used definition of computer-related crime, [so] computer network users and law enforcement officials must distinguish between illegal or deliberate network abuse versus behavior that is merely annoying. Legal systems everywhere are busily studying ways of dealing with crimes and criminals on the Internet" (Voss, 1996, p. 2). There are ultimately three different views on the hacker controversy. The first is that hacking or any intrusion on a computer is just like trespassing. Any electronic medium should be treated just as if it were tangible, and all laws should be followed as such.

On the other extreme are the people that see hacking as a privilege that falls under the right of free speech. The limits of the law should be pushed to their farthest extent. They believe that hacking is a right that belongs to the individual. The third group falls in the middle of the other two. These people feel that stealing information is a crime, and that privacy is something that hackers should not invade, but they are not as right-wing as the people who feel that hackers should be eliminated.

Hackers have their own ideas about how the Internet should operate. The fewer laws there are to impede a hacker's right to say and do what they want, the better they feel. Most people that do hack follow a certain profile. Most of them are disappointed with school, feeling "I'm smarter than most of the other kids, this crap they teach us bores me" (Mentor, 1986, p. 70). Computers are these hackers' only refuge, and the Internet gives them a way to express themselves. The hacker environment hinges on people's First Amendment right to freedom of speech.

Some justify their hacking by saying that what they do is legitimate. Some hackers feel their pastime is legitimate and only do it for the information; others do it for the challenge. Still other hackers feel it is their right to correct offenses done to people by large corporations or the government. Hackers have brought it to the public's attention that the government has information on people, gathered without the consent of the individual. Was it a crime for hackers to show that the government was intruding on the privacy of the public?

The government hit the panic stage when reports stated that over 65% of the government's computers could be hacked into 95% of the time (Anthes, 1996, p. 21). Other hackers expose dubious business practices that large corporations try to get away with. People find this information helpful and disturbing. However, the public may not feel that the benefits outweigh the problems that hackers can cause. When companies find intruders in their computer systems, they strengthen their security, which costs money.

Reports indicate that hackers cost companies a total of $150 to $300 billion a year (Steffora & Cheek, 1994, p. 3). Security system implementation is necessary to prevent losses. The money that companies invest in security goes into the cost of the products that they sell. This, in turn, raises the prices of the products, which is not popular with the public. The government feels that it should step in and make the choices when it comes to the control of cyberspace. However, the government has a tremendous amount of trouble handling the laws dealing with hacking. What most of the law enforcement agencies follow is the "Computer Fraud and Abuse Act" of 1986.

"Violations of the Computer Fraud and Abuse Act include intrusions into government, financial, most medical, and Federal interest computers. Federal interest computers are defined by law as two or more computers involved in the criminal offense, which are located in different states. Therefore, a commercial computer which is the victim of an intrusion coming from another state is a 'Federal interest' computer" (Federal, 1996, p. 1). Most of the time, the laws have to be extremely specific, and hackers find loopholes in these laws, ultimately getting around them.

Another problem with the laws is the people that make them. Legislators have to be familiar with the high-tech materials that hackers are using, but most of them know very little about computer systems. The current law system is unfair; it tramples over the rights of the individual and is not productive, as illustrated in the following case. David LaMacchia used his computers as "distribution centers for illegally copied software." In this case, the law was not prepared to handle whatever crimes may have been committed.

The judge ruled that there was no conspiracy and dismissed the case. "If statutes were in place to address the liability taken on by a BBS operator for the materials contained on the system, situations like this might be handled very differently" (Voss, 1996, p. 2). The government is not ready to handle the continually expanding reaches of the Internet. If the government cannot handle the hackers, then who should judge the limits of hacking? This decision has to be placed in the hands of the public, but in all probability, the stopping of hackers will never happen.

The hacker's mentality stems from boredom and a need for adventure, and no laws or public beliefs that try to suppress it can succeed. Every institution that hackers have encountered has oppressed them, and hacking is the hacker's only means of release; the government or public cannot take that away from them. That is not necessarily a bad thing. Hacking can bring some good results, especially bringing oppressive bodies (like the government and large corporations) to their knees by releasing information that shows how suppressive they have been.

However, people that hack to annoy or to destroy are not valid in their reasoning. Nothing is accomplished by mindless destruction, and other than being a phallic display, it serves no purpose. Laws and regulations should limit these people's capability to cause havoc. Hacking is something that will continue to be debated in and out of the computer field, but maybe someday the public will accept hackers. Conversely, maybe the extreme hackers will calm down and follow the accepted behaviors.

Marketing Plan: Microsoft Corporation Hostile Takeover of Red Hat Linux

Microsoft, the leading manufacturer of personal computer software with its Windows-based operating systems and application software, has decided to expand its influence beyond Windows into the Linux freeware operating system world. The means for entry into this rapidly growing segment of the server operating system market is a takeover of the Red Hat Linux Company. Microsoft Corporation currently owns 51% of the stock of Red Hat Linux.

This expansion directly into the Linux arena will provide Microsoft with the ability to attack competitors in the network server market with the Windows NT and Windows 2000 operating systems on one flank and with the extremely stable Linux operating system on the other. Microsoft expects to use this one-two punch to significantly gain market share in the server market and to shape the future of business LANs, WANs and the Internet. Additionally, Microsoft expects to gain a controlling share of the Linux office application market with a Linux version of its Microsoft Office application suite.

Red Hat Linux is a recognized pioneer and the leading distributor of the freeware operating system called Linux. Although Red Hat Linux does not have wide name recognition outside of the computer industry, the name is synonymous with Linux among computer professionals. Red Hat Linux is the preferred operating system for the network server program Apache, an extremely stable server program that currently runs on 49% of Internet servers. The current Red Hat marketing strategy has been to stress three basic points to consumers.

First, Linux is the most stable operating system available. Second, Linux is easy to use and has user-friendly graphical user interfaces that facilitate the functionality of the operating system. Third, Red Hat is the 'David' competing against the 'Goliath' Microsoft. This approach has huge appeal for the 'free thinkers' of the computer world. Microsoft's strategy for maintaining market dominance in the computer software industry clearly rests in its ability to develop or control the operating software for the majority of the computer market.

This strategy has propelled Microsoft to the dominant position in the personal computer operating system market and, with the acquisition of Red Hat Linux, it now has the opportunity to decisively control the network server market. Ultimately, Microsoft will use both the Windows and Linux operating systems to force its major competitors, such as Sun Microsystems and Hewlett-Packard, to either consolidate or exit the server operating system market. A key element of the two-pronged attack on competitors is Microsoft's decision to provide a Linux version of the Microsoft Office applications for distribution to Red Hat Linux customers.

In addition to creating additional revenue for Microsoft, this move will ensure application compatibility between businesses using the Windows operating system and those using the Linux operating system. Red Hat Linux, which already controls a significant share of the Linux distribution and services market, is in a position to leverage Microsoft's distribution system, financial resources and suites of application software to gain market share and achieve dominance in the Linux distribution and services segments.

The takeover of Red Hat Linux by Microsoft Corporation has necessitated that a marketing plan be created to capitalize on this opportunity. This marketing plan will outline the current situation analysis, financial and marketing objectives, marketing strategy, action programs to execute the strategy, and a projected profit-and-loss statement. The system-level software industry comprises operating systems, operating system enhancements, and data center management software. Its worldwide market increased 15% in 1998, to $41 billion.

This figure should exceed $68 billion by 2002, implying a compound annual growth rate of roughly 13.5%. The competitive intensity of the computer operating system industry is extremely high. Although there are considerable incentives for competitors to enter the market, the industry is characterized by significant expenses for up-front development, marketing, and technical support infrastructure for initial versions of software products. Subsequent products based on the first version become much less expensive to develop. The software products within this industry often enjoy gross margins of 70 to 80 percent.
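
As a quick check on that arithmetic, the implied rate is simply the four-year compound growth between the two market sizes quoted above; a minimal sketch in Python:

```python
# Compound annual growth rate implied by the market figures above:
# $41 billion in 1998 growing to $68 billion by 2002 (four years).
start, end, years = 41.0, 68.0, 4
cagr = (end / start) ** (1 / years) - 1
print(f"CAGR: {cagr:.1%}")  # -> CAGR: 13.5%
```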

The capital costs to support a software company are relatively small, with labor for software development comprising the largest expense item. Development often involves working in multiple teams of 6, 12, or even 100 persons, with long lead times between different versions of software. The most significant entry barrier, however, is the entrenchment of currently used operating systems. A computer user's selection of previous versions of a specific operating system has a tremendous influence on the buyer. Competing operating systems are not compatible with each other and run mutually exclusive application programs.

The choice of operating system determines the type of supplemental application programs that are chosen and installed. The expenditure of individual capital on operating-system-specific application programs creates a monetary incentive for the customer to remain loyal to the selected operating system. This 'financial loyalty', plus software compatibility issues and high entry costs, combines to create a very large entry barrier for companies attempting to penetrate the market with alternative operating systems. A unique aspect of the computer operating system industry is the lack of influence by suppliers.

The product produced is software, intellectual property, written and published by an individual or company. Supplier influence is limited to the producers of the media that the software is published on and the availability of trained personnel to physically write the computer code. Currently there is no shortage of either, although exceptionally high-quality computer programmers are sought after by software companies. The cost of a premium programmer, however, is minuscule in comparison to the profit margin on mass-produced magnetic media. Buyer influence in the market is likewise tempered.

Once the investment for an operating system and applications has been expended, there has to be a huge difference in perceived value for the buyer to switch to another operating system. In the software industry a typical buyer response is simply choosing not to upgrade to a new product. This response, however, is constrained by the requirement to remain compatible with the rest of the computing/business world. This pressure is becoming more critical as technology and e-commerce evolve. Eventually the buyer is forced to upgrade or fall behind competitively.

This is not to say that software companies can fix prices with impunity; the challenge for the software vendor is to develop pricing for its product that does not cause the buyer to delay upgrading and is not so exorbitant that the perceived value of a competitor's software becomes attractive. Industry-wide, software is distributed through streamlined distribution networks to software wholesalers and retailers. Microsoft has refined its distribution into a new integrated distribution model combining the best of both indirect and direct sales.

The resulting hybrid model allies the supply channel and vendors into a paradigm that leads straight to the customer. Product/Product Line and Market Position. Microsoft is the industry giant, completely dominating the personal computer operating system market and the office productivity suite segment. Microsoft Windows 98 and Windows 95 are installed on 79.6% of the world's personal computers; the Microsoft Office suite is used by 79% of the market. Prior to the acquisition of Red Hat Linux, Microsoft held a slim advantage in the network operating system market, with 37% of servers running Windows NT.

The addition of Red Hat Linux to the Microsoft stable of operating systems brought the additional 17% of network operating systems that currently use Linux into the fold. Currently the Microsoft family of server operating systems controls 54% of the server market. Major Customers and Market Segments Served. Microsoft has an extremely wide customer base, as one would expect with the market dominance it enjoys. Major customers for the operating systems market include Compaq, Dell, Gateway, IBM, Hewlett-Packard, Micron Enterprises, and CompUSA.

Within the network operating system segment Microsoft has aggressively sought out the full market spectrum of small, medium and large companies, advertising the scalability of their Windows NT software. To complement the scalability of the operating system Microsoft utilizes a Value-Added Provider (VAP) program to provide the customer with a total package of network management, network security, legacy system integration, web design and integration, and project management and e-mail servers. This strategy seems to have paid off, especially among network servers in manufacturing plants.

Among operating systems in plants, Microsoft NT currently operates on 50% of the platforms. Additionally, Microsoft has made significant inroads into the governmental and educational server sectors by stressing the cost effectiveness of Windows-based systems, legacy program integration and application software compatibility. Historically, Microsoft has had a difficult time penetrating the Internet web server software segment, controlling 22% of the market share among Internet servers. The acquisition of Red Hat changed the playing field.

Red Hat is the leading Linux provider of open source software and services, controlling 60% of Linux distribution. Most significant, however, is that Linux gave a Microsoft operating system the ability to run Apache Server, the dominant Internet server software. Currently Apache Server is operating on 49% of Internet servers. In addition to running Apache Server, the growth potential for Linux is huge. Growing by 212% in 1998, Linux now controls nearly 20% of the commercial server market. This growth is expected to continue at a 25% annual rate until the year 2003.

Development Of Computers Over The Decades

A computer is an electronic device that can receive a set of instructions, or program, and then carry out this program by performing calculations on numerical data or by compiling and correlating other forms of information. Thesis statement: The modern world of high technology could not have come about except for the development of the computer. Different types and sizes of computers find uses throughout society in the storage and handling of data, from secret governmental files to banking transactions to private household accounts.

Computers have opened up a new era in manufacturing through the techniques of automation, and they have enhanced modern communication systems. They are essential tools in almost every field of research and applied technology, from constructing models of the universe to producing tomorrow’s weather reports, and their use has in itself opened up new areas of conjecture. Database services and computer networks make available a great variety of information sources.

The same advanced techniques also make possible invasions of privacy and of restricted information sources, but computer crime has become one of the many risks that society must face if it is to enjoy the benefits of modern technology. Imagine a world without computers. That would mean no proper means of communicating, no Internet, no video games. Life would be extremely difficult. Adults would have to store all their office work on paper, which would take up entire rooms. Teenagers would have to submit coursework and projects hand-written.

All graphs and diagrams would have to be drawn neatly and carefully. Youngsters would never have heard of 'video games' and would have to spend their free time either reading or playing outside with friends. But thanks to the British mathematicians Augusta Ada Byron and Charles Babbage, our lives are made a lot easier. Later, in my investigation of the growth of computers over the decades, I will talk about types of computers, how and when computers were first developed, the progress they have made, computers at present and plans for the future.

Under types of computers, I will talk about analogue and digital computers and how they function. Under the development of computers, I will mention the very first electronic calculator and computer. Under progress made, I will only discuss circuits. For computers of the present, I will talk about networking, telecommunications and games.

And finally, as for plans for the future, I will mention new and recent ideas, research and development of new computers heard and talked about in newspapers and on television. There are two main types of computers in use today, analog and digital, although the term computer is often used to mean only the digital type. Analog computers exploit the mathematical similarity between physical interrelationships in certain problems, and employ electronic or hydraulic circuits to simulate the physical problem.

Digital computers solve problems by performing sums and by dealing with each number digit by digit. Hybrid computers are those which contain elements of both analog and digital computers. They are usually used for problems in which large numbers of complex equations, known as time integrals, are to be computed. Data in analog form can also be fed into a digital computer by means of an analog-to-digital converter, and the same is true of the reverse situation.
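
As a rough illustration of what an analog-to-digital converter does, the sketch below quantizes a continuous signal into the discrete integer steps a digital computer can store; the signal, sample rate and 3-bit resolution are invented for the example:

```python
import math

# Quantize a continuous (analog) signal into discrete integer levels,
# the basic job of an analog-to-digital converter. For illustration:
# a sine wave sampled 8 times per cycle by an ideal 3-bit converter.
BITS = 3
LEVELS = 2 ** BITS  # 8 distinct output codes

def adc(voltage, v_min=-1.0, v_max=1.0):
    """Map a voltage in [v_min, v_max] to an integer code 0..LEVELS-1."""
    fraction = (voltage - v_min) / (v_max - v_min)
    return min(LEVELS - 1, int(fraction * LEVELS))

samples = [math.sin(2 * math.pi * t / 8) for t in range(8)]
print([adc(v) for v in samples])  # digital codes the computer stores
```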

A Comparison of Two Network Operating Systems

The decision to utilize Microsoft Windows NT Server or one of the many Unix operating systems is the concern of many IS managers around the world today. Unix is not a single operating system; it refers to a family of operating systems which includes AIX, BSDI, Digital UNIX, FreeBSD, HP-UX, IRIX, Linux, NetBSD, OpenBSD, Pyramid, SCO, Solaris, SunOS, just to name a few. Microsoft Windows NT has a well-known reputation. But these managers have to consider whether or not choosing a Microsoft product can increase the company’s profits.

The cost of the network operating system (NOS) will be the ultimate factor in their decision. It is not just the initial cost of the hardware, however; many other factors need to be considered to ensure that further maintenance costs are not overwhelming. For instance, software licenses will need to be procured. Technical support agreements will need to be assessed. The costs of upgrades, service packs and hardware upgrades will need to be weighed for both types of systems. Determining which system has a greater occurrence of glitches can be a factor in estimating lost profits for every hour of downtime.

If the company should experience a glitch, how substantial will personnel costs for recovering and recreating data be? Knowledgeable systems administrators will need to be employed to maintain the system. This task is not to be taken lightly, as these are only some of the situations to be considered prior to making a decision on which NOS to purchase. Since cost is a primary concern for managers, the conditions previously discussed indicate that the right combination of server hardware and operating system is the most cost-effective option for long-term use.

Unix is a fully developed group of operating systems known for performance, reliability, and security in a server environment. On the other hand, Windows NT Server has the advantage of Windows 95's popularity. This desktop operating system is already being used in homes and offices everywhere. Before making the operating system decision, a manager should consider visiting the local library to research the subject. It will be difficult to find current unbiased literature, but a determined manager or QM student should be able to separate the important information from personal preferences.

Most of the older books are concerned with theory, using Unix as a guide. For current information, periodicals are the best source, but as stated earlier, much of it is very biased one way or the other. The preferences are split down the middle, with half of the professionals supporting Unix or a Unix variant and the other half supporting Microsoft products. Operating Systems. Operating systems (OS) were originally developed as a large set of instructions for mainframe computers in order to control the hardware resources of the mainframe.

Thereafter, they were developed to run on smaller and smaller computers, first minicomputers and then the new personal computers (PCs). But the main job of the OS stayed the same: to be a layer between the hardware and the user. The main reason for having an OS is so that application programmers have a common base upon which to run their applications, no matter what hardware is being used. One important function of the OS is to perform file management. This allows applications to read from or write to disk, regardless of the hardware being used or how the data is stored.
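
As a simple illustration of that abstraction, the same few lines of Python read and write a file identically whatever disk hardware sits underneath, because the OS mediates every call; the file name here is arbitrary:

```python
# The application requests file operations through one common
# interface; the OS translates these calls for whatever hardware
# and on-disk layout is actually present.
with open("report.txt", "w") as f:
    f.write("quarterly figures\n")

with open("report.txt") as f:
    print(f.read())  # same code on any disk, controller, or filesystem
```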

Without this feature programmers would have to write new programs for every different type of hardware and every different hardware configuration. However, Microsoft Windows is the dominant PC OS, so most of the applications written today are written for the Windows environment. Network Operating Systems. When businesses initially began to use desktop PCs in the 1980s, there was no connection between PCs and mainframes or between the PCs themselves. The PC was normally used for word processing, spreadsheets, etc. Soon users wanted to share resources more efficiently than disk swapping allowed.

A solution emerged: networking. To control these shared resources, network operating systems (NOS) were developed. At first NOSs allowed only the most basic of functions, like sharing printers and files. Soon the NOS's role expanded greatly to management of the resources in the local network and to linking up with other local area networks (LANs), thereby creating wide area networks (WANs). NOSs controlled the network through a server. The server controlled only the resources directly linked to it, and the PCs used a second OS that controlled their specific hardware. Peer-to-peer networks developed later.

While using a peer-to-peer LAN there was no need for a dedicated server, which was great for small businesses with few users. But with many users and large amounts of data, a greater need surfaced for a dedicated server. Methodology. Managers without knowledge or experience of systems and network administration find it difficult to choose a server platform. This report will attempt to compare and contrast Microsoft Windows NT Server and Unix, the latter being a mixture of commercial and non-commercial operating systems that originate from the same source and so share many similarities.

The main focus of the paper is to assist managers in choosing a network operating system using quantitative methods. The issues of comparison discussed are in the areas of product costs and licensing, functionality, reliability, and performance. These are presented to provide a more complete view of these products. Product Costs and Licensing Issues. Most managers will agree that the mere cost of an operating system is trivial when evaluating the big picture.

Although Windows NT Server 4.0 can be expensive, a Unix variant can be bought for a minor dollar amount. In order to match the functionality of a BSDI (a variant of Unix) installation, however, additional Microsoft products and third-party solutions would have to be added to the final price of a comparable NT solution. Functionality. What can you expect from Windows NT Server and from Unix immediately after acquiring the systems? NT and Unix can both communicate with many different types of computers. Both Unix and NT can secure sensitive data and keep unauthorized users off the network.

Essentially, both operating systems meet the minimum requirements for operating systems functioning in a networked environment. Reliability. As computers become more and more utilized in our world today, reliability is the most significant feature, even more important than speed. Although performance is largely a function of the hardware platform, it is in the area of reliability that the choice of operating system has the most influence. An operating system may offer more functionality. Also, it may be more scalable. It may even offer greater ease of system management.

But if you are constantly being challenged with glitches in the system and are unable to get any use out of it because it is always down, what good are these advantages? Performance. Processing power is largely a function of computer hardware rather than of the operating system. Since most commercial Unix operating systems run only on high-end workstations or servers, Unix has historically been considered an operating system for high-end hardware. To say that Unix outperforms NT based on the results of differing hardware would be unfair to Microsoft.

One should compare NT Server's performance to that of Linux or FreeBSD, since all three operating systems run on the same Intel hardware, the hardware most often used with NT. A truly unbiased comparison of performance would have to be based on benchmarks, but these are few and usually focus only on specific areas like web performance. There are some specific issues that affect performance. Unix does not require a graphical user interface to function, while NT does. Graphics require incredible amounts of disk space and memory, and the same holds true for sound files.

Computers and Finance

Computers have made financial bookkeeping much easier, and people no longer have to spend hours tracking investments or pay someone else to do their taxes. Moreover, the advancement in technology has allowed governments to cut back on the number of big companies and employees hired to process tax returns, resulting in the saving of millions of dollars. Although these advancements are extraordinary, they are not without their shortcomings. The IRS has had increased trouble in tracking fraudulent tax returns, and has had to revamp its detection system.

The most surprising part of Microsoft's recent purchase of Intuit, the maker of the Quicken line of personal finance software, was not the $1.5 billion price, which was fifty percent over the market value (Schlender 14). It was not even the fact that Bill Gates, America's richest entrepreneur, is in a position to become America's richest banker (14). The most surprising thing was that it did not happen earlier (14). For years Gates has had a dream of putting "electronic commerce at the core of personal computing," and now he finally has the software to accompany that dream (14).

His idea includes a "Wallet PC" that can be carried around with people at all times (14). Microsoft believes that it can provide what executive VP Mike Maples refers to as a "whole new value chain" that will allow customers to interact by modem with banks, insurance companies, pension funds, etc. (14). Quicken is already being used by six million people to pay bills, manage credit, write checks, and handle taxes (14). For those of you scoring at home, that is 5.2 million more users than Microsoft's Money software has (14).

That is a prime reason that Gates was basically willing to give up the Money product and donate it to his competitor Novell (14). Programs such as Quicken are excellent for keeping track of what is spent at home, but can be a big hassle for keeping track of the money spent on business trips (Baig 20). One way to solve the problem would be to carry a notebook computer with Quicken on it, but as Edward Baig states, "It's just not practical to boot up a laptop each time I step out of a taxi" (20). Intuit has released Pocket Quicken, a "Quicken Lite" for those who carry around digital assistants, to help alleviate that problem (20).

Pocket Quicken is built into the new Hewlett-Packard 200LX palmtop, the Tandy/Casio Zoomer PDAs, and the AST Gridpad 2390, but is not sold as a separate product just yet (20). Eventually it will also be available on the Motorola Envoy (20). Pocket Quicken allows users to categorically follow expenditures such as food, gas, and rent by creating checking, credit card, and cash accounts (20). Pocket Quicken also lets travelers separate what is spent into areas such as trip, client, project, or class (20).

People can share data with regular Quicken via the HP 200LX's $119 optional cable-and-software package, or with the $30 addition to the Zoomer (20). However, Pocket Quicken does have its shortcomings, because it does not allow for the setting up of budgets or the tracking of investments (20). It also does not compute net worth or tax summaries, and does not have all the graphs with which Quicken comes equipped (20). If Pocket Quicken is not of interest, another option would be to record expenditures at the end of each day (20). Yet another possibility would be QuickXpense for Windows from Portable Software (20).

This program allows users to work with the exact expense form they would like to use, because many of the forms from large corporations have been pre-loaded into the program (20). Another of its qualities is that if a specific company's form was not included, Portable Software will put it on a disk for customers who send in a blank copy of the form (20). All entries are entered the same as in Quicken, but this program will also figure mileage cost for driving, convert foreign currency, and catalog each type of expense included on hotel bills (20).

QuickXpense will also let the user know when it is time to file an expense report (20). Computer and technology outsourcing companies typically handle the dirty work behind tax collecting; however, with the recent advances in technology, the state of New York was able to hire a financial group to handle the responsibility (Halper 63). The New York State Department of Taxation gave Fleet Financial Group a $197 million contract to handle computerized tax collections and refunds for the next ten years (63).

This contract does not totally ignore the traditional outsourcing company, because Computer Sciences Corp. (CSC) of El Segundo, California is responsible for developing the software (63). Arthur Gross, deputy commissioner for revenue and management, said that because the process involved "so many banking procedures it simply made more sense to hire a bank" (63). The inefficiencies of the current system would not have been done away with if a traditional technology company had been hired (63).

The state will receive more than ten million personal income tax returns accompanied by two million checks (63). Those checks are sent to one of nearly forty banks that will deposit the checks, capture data, and then return that information to the state (63). Since the bank will end up with much of the information anyway, Gross asked, "Why shouldn't they get it directly instead of it banging around all over the place?" (63). Experts have given New York credit for a fresh idea, but say that it is too early to know if a bank can handle all the challenges presented by the technology (63).

This contract will cut the cost of processing by an estimated $80 million during the next ten years (63). Gross is impressed with the new technology, but added that “the focal point was not necessarily the technology” (63). Fleet Financial will carry out the same tasks for the state that banks already do for their customers (63). Because of this, the bank is able to do a better and more efficient job than the state ever could (63). In fact, the new imaging and scanning equipment will allow Fleet Financial to do the job forty percent faster (63).

CSC will play a large part in the process of cutting costs because the department will still depend on its IBM and Unisys Corp. mainframes (63). The new equipment for improving the process comes from the new imaging and scanning technology being installed by CSC and Fleet Financial (63). The first item installed will be the AT&T Global Information Solutions 7780 scanning system (63). This will be used to read the data from the 2.3 million coupons sent in by self-employed filers, who are required to file their estimated earnings, and any checks that might accompany them (63).

New York state law states that residents of New York City or Yonkers must use special filing procedures, so the bank will check addresses to be sure that the filer claimed residence in the proper place (63). Currently the state must check ten million forms manually to sort out approximately two million false forms (63). The cost savings will allow the state to move about 300 employees from processing jobs to jobs that create revenue, such as disproving taxpayer complaints of inaccuracies (63). The new technology will allow the state to cut back from 1,400 tax-season workers to about 300 (63).

According to the IRS, filing taxes from home with a PC and modem may be possible within five years (Kantra 47). The IRS also expects to process roughly eighty million electronic tax returns (47). In 1993, almost ten percent of everyone who filed individually used some form of electronic or computerized filing (47). Approximately twelve million people filed through a professional tax preparer for refunds or some kind of third party that uses personal tax preparation software, a process that must include a verification form (47).

Furthermore, 1040PC forms, printouts from tax software packages, were sent in by another 4.8 million people (47). The most widely used software packages are Chipsoft's TurboTax and Meca Software's TaxCut; however, it is now possible to obtain a competitor's package for the cost of shipping (47). Several of these programs can pull data in automatically from other programs like Quicken (47). With the invention of electronic filing, more fraudulent attempts have been made to cheat the IRS. The U.S.

General Accounting Office (GAO) noted a major increase in the amount of fraud attempts in the IRS' electronic filing program. "The growth of [fraudulent] returns is very high, but it is unclear how much of the growth is due to an increase in fraudulent activity rather than improvement in fraud detection," said GAO special assistant James F. Hinchman (Anthes 28). The worst part of everything is the uncertainty about the number of false returns that go undetected (28). An experimental system was instituted in 1990 without certain "edit-and-validation" rules (28).

An illustration of this is that taxpayers' names and Social Security numbers on the returns were not checked against internal IRS records (28). When that omission was corrected, 200,000 returns were rejected; however, Hinchman says the system is still in need of "stronger validity checks" (28). Aggressive steps have been taken by the IRS in order to automate fraud detection (28). With the help of computer scientists at Los Alamos National Laboratory, the IRS has developed artificial intelligence techniques to notice suspicious returns (28).
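
To make the "edit-and-validation" idea concrete, here is a minimal sketch of such a rule in Python; the record layout, names and numbers are invented for illustration and are not taken from any IRS system:

```python
# Hypothetical edit-and-validation rule of the kind described above:
# reject a return when its (name, SSN) pair does not match the
# records already on file. All data here is invented.
irs_records = {
    "123-45-6789": "JANE DOE",
    "987-65-4321": "JOHN ROE",
}

def validate_return(name, ssn):
    """Return True if the name/SSN pair matches the internal records."""
    return irs_records.get(ssn) == name.upper()

print(validate_return("Jane Doe", "123-45-6789"))  # True  -> accepted
print(validate_return("Jane Doe", "987-65-4321"))  # False -> rejected
```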

A new "electronic fraud detection system" will be implemented next year (28), a system that will make more taxpayer and tax-return data available on-line (28). Finally, more documents will be compared with each other for the purpose of making it easier to find fraud attempts before the IRS issues refunds (28). The world of computers is always expanding, and computers are a part of almost everything that people do. This is evidenced by the way in which companies such as Microsoft have expanded into the world of finance.

Computer programs like Quicken have made financial bookkeeping much easier because of their great versatility, and new computer technology has allowed the state of New York to hire a bank to work the state tax system. Furthermore, the invention of electronic tax filing has allowed people to file and get returns more quickly, but it has also caused an increase in tax fraud. This has forced the IRS to reexamine and totally redesign its process of detecting fraud. All this is evidence that the world is becoming more computer oriented.

Computer Fraud and Crimes

In the world of computers, computer fraud and computer crime are very prevalent issues facing every computer user. This ranges from system administrators to personal computer users who do work in the office or at home. Computers without any means of security are vulnerable to attacks from viruses, worms, and illegal computer hackers. If the proper steps are not taken, safe computing may become a thing of the past. Many security measures are being implemented to protect against illegalities. Companies are becoming more aware and threatened by the fact that their computers are prone to attack.

Virus scanners are becoming necessities on all machines. Installing and monitoring these virus scanners takes many man-hours and a lot of money for site licenses. Many server programs are coming equipped with a program called "netlog." This is a program that monitors the computer use of the employees of a company on the network. The program monitors memory and file usage. A qualified system administrator should be able to tell, by the amounts of memory being used and the file usage, if something is going on that should not be.
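
As a rough sketch of that kind of monitoring (not the actual "netlog" program), the following Python snippet takes a snapshot of per-process memory use with the third-party psutil package, so an administrator could flag unusually hungry processes:

```python
# Hypothetical "netlog"-style monitor: snapshot each process's
# resident memory so an administrator can spot anomalies.
# Requires the third-party psutil package (pip install psutil).
import psutil

def snapshot_memory(threshold_mb=500):
    """Print processes whose resident memory exceeds threshold_mb."""
    for proc in psutil.process_iter(['pid', 'name', 'memory_info']):
        mem = proc.info['memory_info']
        if mem is None:  # info may be inaccessible for some processes
            continue
        rss_mb = mem.rss / (1024 * 1024)
        name = proc.info['name'] or '?'
        if rss_mb > threshold_mb:
            print(f"{proc.info['pid']:>6}  {name:<25} {rss_mb:8.1f} MB")

snapshot_memory()
```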

If a virus is found, system administrators can pinpoint the user who put the virus onto the network and investigate whether or not there was any malice intended. One computer application that is becoming more widely used and, therefore, more widely abused is electronic mail, or email. In the present day, illegal hackers can read email going through a server fairly easily. Email consists of not only personal transactions but business and financial transactions as well. There are not many encryption procedures out for email yet.

As Gates describes, email encryption will soon become a regular addition to email, just as a hard disk drive has become a regular addition to a computer (Gates p. 97-98). Encrypting email can be done with two prime numbers used as keys. The public key will be listed on the Internet or in an email message. The second key will be private, and only the user will have it. The sender will encrypt the message with the public key and send it to the recipient, who will then decipher it with his or her private key. This method is not foolproof, but it is not easy to unlock either.
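
The scheme described here is essentially public-key encryption in the RSA style. A toy sketch in Python, with deliberately tiny primes so the numbers stay readable:

```python
# Toy RSA-style public-key example (illustration only; real email
# encryption uses vetted libraries and much larger primes).
p, q = 61, 53             # the two secret primes
n = p * q                 # public modulus
phi = (p - 1) * (q - 1)   # totient, used to derive the private key
e = 17                    # public exponent (coprime with phi)
d = pow(e, -1, phi)       # private exponent (Python 3.8+ modular inverse)

message = 42                       # a message encoded as a number < n
ciphertext = pow(message, e, n)    # anyone can encrypt with (e, n)
recovered = pow(ciphertext, d, n)  # only the holder of d can decrypt
assert recovered == message
```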

The numbers being used will probably be over 60 digits in length (Gates p. 98-99). The Internet also poses more problems for users, and this problem faces the home user more than the business user. When a person logs onto the Internet, he or she may download a file corrupted with a virus. When he or she executes that program, the virus is released into the system. When a person uses the World Wide Web (WWW), he or she is downloading files into his or her Internet browser without even knowing it. Whenever a web page is visited, an image of that page is downloaded and stored in the cache of the browser.

This image is used for faster retrieval of that specific web page. Instead of having to constantly download a page, the browser automatically reverts to the cache to open the stored image of that page. Most people do not know about this, but it is an example of how a virus can get onto a machine without the user even knowing it. Every time a person accesses the Internet, he or she is not only accessing the host computer but also the many computers that connect the host and the user. When a person transmits credit card information, it goes over many computers before it reaches its destination.
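
The caching behavior just described boils down to "check local storage first, download only on a miss". A minimal sketch, with a hypothetical download function passed in to keep it self-contained (real browsers also honor expiry headers and storage limits):

```python
# Minimal sketch of a browser-style page cache: pages already fetched
# are served from local storage instead of being downloaded again.
cache = {}  # maps a URL to page content fetched earlier

def fetch_page(url, download):
    """Return the page at url, downloading it only on a cache miss.

    `download` is any function that fetches the page over the network;
    it is passed in here to keep the sketch self-contained.
    """
    if url not in cache:
        cache[url] = download(url)  # first visit: fetch and remember
    return cache[url]               # repeat visits: served from cache
```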

An illegal hacker can set up one of the connecting computers to copy the credit card information as it passes through. This is how credit card fraud is committed with the help of the Internet. What companies such as Maxis and Sierra are doing is creating secure sites. These sites have the capability to receive credit card information securely. This means the consumer can purchase goods by credit card over the Internet without worrying that the credit card number will be seen by unauthorized people.

System administrators have three major weapons against computer crime. The first defense is system security, the many layers systems have against attacks. When data comes into a system, it is scanned for viruses and safety, and whenever it passes one of these security layers, it is scanned again. The second defense against viruses and corruption is computer law, which defines what is illegal in the computer world. In the early 1980s, prosecutors had problems trying suspects in computer crimes because there was no definition of illegal activity.

The third defense is the teaching of computer ethics, which will hopefully deter people from becoming illegal hackers in the first place (Bitter p. 433). There are also ways companies can protect against computer fraud beyond the computer and the system itself. One way to curtail computer fraud is in the interview process and training procedures. If it is made clear to the new employee that honesty is valued in the company, the employee might think twice about committing a crime against the company.

Background checks and fingerprinting are also good ways to protect against computer fraud. Computer crime prevention has become a major issue in the computer world. A lack of knowledge about these crimes and how they are committed is one reason computer crime is so prevalent. What must be realized is that the "weakest link in any system is the human" (Hafner and Markoff p. 61). With the knowledge and application of the preventative methods discussed, computer crime may actually become an issue of the past.

Ethics in the Age of Information

The information age is the age we live in today, and with it comes an age of ethics. When we deal with the new technologies introduced every day, we need to decide what we must consider ethical and unethical. We must consider all factors so that the information readily available to so many people is not abused. "Information technology will be the most fundamental area of ethical concern for business in the next decade" (Houston 2). The most widely used tool of the information age is the computer, whether it be a PC or a network of computer systems.

As we enter the information age, the newness and power of information technologies tests the ethics of the average person, not just the criminal, and causes thousands of computer crimes to be committed daily. The most common computer crime committed daily, some aware of it and many not, is the illegal sharing of computer software. Software is any of the programs used in operating a digital computer, such as input and output programs, as defined by Funk and Wagnalls Standard Desk Dictionary.

When you purchase computer software, you purchase it with the understanding that it will be used on a single computer; once installed on that system, it is not to be loaded on any other computer. However, many people are not aware of this understanding, and many load a program on a couple of computers or on a whole network of computer systems, not aware that they are committing a crime. Even though you probably will not be prosecuted for loading a program on a friend's computer, this is where your ethics come in.

Do you consider anything when you share a program with others? If not, then consider the programmers of the software, who are denied compensation for their developments every time you distribute a piece of software. "Why is it that people who wouldn't think of stealing a pack of gum will copy a $500 piece of software" (Houston 3)? A popular form of illegal software distribution is through the online world. Whether it be the Internet, America Online, CompuServe, Prodigy, or a BBS (Bulletin Board System), software "pirates" thrive freely online.

These so-called "pirates" operate by uploading pieces of software, commonly referred to as "warez", into an online service's database and then sending the rights to download them through e-mail. "The Information Superhighway has opened the door to a new kind of highway robbery – the home shoplifting network" (Mattia 43). When you access an online service, you are identified through an account, which most commonly consists of a user ID and password. The password ensures that only you can access the online service with your user ID.

Many people online use their own accounts to access their service, but many steal and use the accounts of others or make fake accounts. When online, these account "pirates" often trick other users into giving them their passwords by impersonating an employee of the online service. Others hack into the online service's mainframe computer and steal thousands of accounts. Probably the most common method of getting online without paying is the use of fake or fraudulent accounts, which are made by giving false information when attempting to gain access to an online service.

Name, address, phone number, and billing information, such as a checking account or credit card number, are all falsified in obtaining an online account. With these stolen and fake accounts, software "pirates" have virtually unlimited time to download their "warez" without any charge to them. Many people don't consider the people behind the creation of software when they illegally distribute it. The developers of software are not properly compensated for their work because of the extent of software piracy.

No one can argue with a software company's desire, and right, to make sure everyone using their products has paid for them (Furger 73). The numbers add up: it is estimated that in 1994 alone, software companies lost $15 billion from illegal software copying (Maremont 65). It is not only illegal but clearly unethical to distribute software knowing that the people behind it are experiencing the downfalls of piracy. Every time software companies cannot compensate their programmers for their work, more people are out of a job.

Consider this: you enter a store and purchase an item, and during the transaction you give your name and phone number. The person you have given this information to then enters it into a computerized database. After this person has collected a sufficient number of names, they sell the list to a telemarketing firm for a profit. This action is legal, but is it ethical? Do you want your name sold without your consent? Most people don't, because they don't want to be bothered by salespersons on the telephone. Also, your address could be sold and you could be put on a mailing list.

Then it's an issue of whether you want your mailbox filled with junk mail. This action is unethical for the simple reason of consent. If the person had just gained consent to enter the names into his or her database, then he or she would not have committed an unethical act. One conclusion from studies sponsored by the National Institute of Justice is that persons involved in computer crimes form their skills and interests at an early age. Usually they are introduced to computers at home or in school, and usually they start their "career path" by illegally copying software (McEwen 2).

As young people interact with hackers, they incorporate the beliefs of the hackers into their own. Many of these unconventional beliefs of young hackers about information and computers lead them to a career in computer crime. Many times it is the lack of education by parents and schools that helps to make these beliefs all the more true to a young person. Computer criminals have their own set of beliefs about information and computers, based on obviously unethical reasoning. For example, hackers believe that computerized data are free and should be accessible to anyone.

They also believe that passwords and other security features are simply obstacles to be overcome in obtaining data that should already be available, and that while data should never be destroyed, there is nothing wrong with viewing and transferring data for one's own use (McEwen 2). One member of the Legion of Doom, a nationwide group of hackers who exchange information about computer systems and techniques to break into them, has said, "Hackers will do just about anything to break into a computer except crashing a system, that's the only taboo" (McEwen 2).

The key to stopping computer criminals from forming is education. It is often the case that people commit computer crimes without even knowing they are doing so, and the reason for this is the lack of education. Few schools teach computer ethics, and parents of arrested hackers are usually unaware that their children have been illegally accessing computer systems (McEwen 2). Colleges and universities do not usually include computer use and abuse in their courses, arguing that it is the responsibility of the schools.

On the other hand, many secondary school educators are not sure what should be taught and are reluctant or unable to add ethical computer education to the many subjects in the curriculum. Textbooks on computer literacy rarely mention computer abuses and individual responsibilities. Educators and software developers have worked together to prevent software piracy in educational institutions. In 1987, the Software Copyright Committee of the International Council for Computers in Education (ICCE) developed a policy to guide educators.

The policy calls on school districts to teach staff the provisions of the copyright law, and both staff and students the ethical and practical implications of software piracy. This policy has been adopted by many school districts across the country (McEwen 3). In recognition of the problems arising from the illegal and unethical use of computers, criminal justice forces have begun to crack down on computer criminals. In 1989, three computer crime studies were sponsored by the National Institute of Justice.

One of these studies examined different organizational approaches for computer crime investigation and prosecution, another documented the experiences of several dedicated computer crime units, and the third developed a computer crime investigation handbook (McEwen 2). Computers are a permanent fact of life in workplaces and classrooms across the country. More businesses are likely to incorporate policies on information access and confidentiality in their employee orientation and training programs.

Many schools and universities, responding to pressure around them, are beginning to incorporate computer ethics into their courses. For the criminal justice community, computer crime, which poses special challenges in detection and prosecution, will require more and more attention. In order to prevent computer crimes in the future, criminal and juvenile justice agencies must look for ways to help parents, teachers, and employers educate the computer-using community about the importance of ethical computer behavior (McEwen 4).

Copyright Laws Essay

Current copyright and patent laws are inappropriate for computer software; their imposition slows down software development and reduces competition. From the first computer as we know it, the ENIAC, computer software has become more and more important. From thousands of bytes on miles of paper to millions of bytes on a thin piece of tin foil sandwiched between two pieces of plastic, software has played an important part in the world. Computers have most likely played an important role in all our lives, from making math easier with calculators to having money on the go with ATM machines.

However, with all the help that has been given to us, we haven't done anything for software and the people who write it. Software by nature is completely defenseless, as it is more or less simply intellectual property and not a physical thing, and thus very easily copied. Copied software does not make money for its creators, so they charge more for what's not copied, and the whole industry inflates. There are two categories of intellectual property. The first is composed of writing, music, and films, which are covered by copyright. Inventions and innovations are covered by patent.

These two categories have, for years, covered many kinds of work with little or no conflict. Unfortunately, it is not that easy when dealing with something as complex as computer software. When something is typed on a computer, it is considered writing, as it is all written words and numbers. However, when executed by the computer, it functions like an invention, performing a specific task as instructed by the user. Thus, software falls into both categories (Del Guercio 22-24). Copyright law generally covers most mass-market software today.

More advanced software or programming techniques, however, can be patented, as they are neither obvious nor old. This results in many problems, which I will go into later. Copyrights last the lifetime of the author plus 50 years, and can be renewed. Patents last only 17 years, but cannot be renewed. With technology advancing so quickly, it is not necessary to maintain the protection of the software for the length of the copyright; on the other hand, it is sometimes necessary to renew it (Del Guercio 22-24), say, for a 10th sequel in a video game series or version 47.1 of Bob's Graphic Program.

With copyrighted material, one is able to write software similar to someone else's, so long as the programming code is one's own and not borrowed from the other (Del Guercio 22-24). This keeps the industry competitive, and thus results in better software (because everyone is greedy, and they don't want to fall behind). With patents, no one is allowed to create software that performs a similar function. Take AutoCAD and True Space 2, two 3D modeling programs. True Space 2 would be a violation of patent laws, as it performs a task very close to AutoCAD's, which came first.

Luckily for us, CAD programs are not new; they have been around for more than 10 years, and no one thought to patent them. Thus, you can see the need for change in the system. The current laws regarding the protection of intellectual material cannot adequately protect software; they are either too weak or too strict. We need a new category of protection. The perfect protection law would most likely last for 10 years, renewable. This is long enough to protect a program for as long as it is still useful, and allows for sequels and new versions just in case.

It would also have to allow others to make similar software, keeping the industry competitive, but it would have to disallow copying portions of other software (because you can't 'quote' something from someone else's software like you can with a book). However, there are many who dispute this, and I can see their point. They argue that current copyright laws have protected and will continue to protect software effectively, and that it can be just as protected as other media (Cosgrove). This is sometimes true; however, to copy a book would take time.

You would have to type up each page to make a copy of it, or at least photocopy or scan each page, and it would most likely take up much more time than it's worth. To copy a computer program, however, takes seconds. Changing the law would take time and money, you might be saying. It would be a tremendous hassle in Congress to have a new law written just to cover that "Information Superhighway" thingy. Yes, that's true too, but not changing the laws will cost more. With the ability to patent new and non-obvious software functions come serious problems.

The latest new technology, whether it is ray-tracing 3D engines, anti-aliasing software, or a new Internet exploring fad, can be patented. This would mean that only one company and its software could use it. Any other companies that wanted to use the software would have to pay them a large sum of money for the rights. Also, since patent hearings are conducted over a period of 3 years, and in secrecy, company 'a' might create a software package and then apply for a patent, and company 'b' may create better software during that period and become quite successful, and then, bam, the patent is given to company 'a', who promptly sues the pants off company 'b'.

This stagnates the computer industry; it used to be that company 'a' would retaliate by making better software. Take Lotus, for example. They used to make data organization software. Up until I did this report, I thought they had gone out of business, because I hadn't heard about anything new being done by them. Well, while I was researching, I found the appalling truth. When patenting of software became acceptable in the early 90's, they closed up their R&D departments and called in a bunch of lawyers to get them patents on all their programming techniques (Del Guercio 22-24).

Ever since then, they've been selling out the rights as their primary (and, I'm willing to bet, only) business. This could even be taken to the extreme of patenting simple methods of handling data, such as, say, mouse support. Now, it can't happen to mouse support as it is today, but in the future, something undoubtedly will replace the mouse as the preferred method of input; in what may be a virtual reality future, for instance, the glove might be the input device. Anyway, say it did happen to mouse support. Every single program that uses mouse support would have to pay a fee for the rights to do so.

This would result in higher software prices (aren't they high enough?), and reduced quality in the programs, as developers would have to worry more about the legalities. Needless to say, the patenting of software is not a widely loved policy; it is mostly embraced by large corporations like Lotus and Microsoft (Tysver, "Software Patents"). Smaller companies, and most often consumers, are generally against it. Even with all the legal problems I've mentioned that arise with current laws, that's not all. The complexity of software protection laws brings up a large degree of confusion. I myself thought that copyrights lasted 7 years until I researched this.

I asked 15 people in a chat room on the Internet what they knew about software protection laws, and only one of them knew that software could be patented. 12 of them thought that a copyright cost lots of money, which it doesn't; it's $20 at most for a copyright, and $10,000 at most for a patent. 5 of them thought that software copyrights lasted 7 years (hey, it's a popular misconception; I thought so myself at one point). And last but not least, 10 of them believed that there were no laws regarding the copying of software (there are, but they're virtually ineffective).

Now that you know all about the legal and business aspects of software protection, let's take a look at how it can affect you. Say you've got a web page, and you've got a link on your web page to your friend Bob's web page, and he's got a link on his page to "JoEs LeeT PiRaCY aND WaReZ", and on that site there is a link to a pirated copy of AutoCAD. Then Joe gets busted. Joe will almost certainly be in trouble, Bob will likely be either questioned or considered responsible, depending on the blatancy of the link, and YOU will likely be questioned, and your page might be monitored for a time (Bilodeau). One such example is my web page.

I had a link from my page (the Weird Wide Web) to Archaic Ruins, which is a site with information on emulators of old video game systems. When the operator of Archaic Ruins got sued by a video game company (I think it was Konami), I too got questioned, and had my page had ANY questionable material on it, I would have been sued. Thankfully, I was too lazy to work on the page, as I had planned to put up a page with really old video games. Who said procrastination was bad? How can you prosecute someone for a crime that is undefined? That's a question many people are asking. What is a copy of software?

Is it a physical clone of the media it came on? Or is it the code duplicated someplace else? If so, where else? Currently, software copying is generally considered a copy of the code someplace else, but that's a problem. We all know that a backup of software is a copy, but did you know that even running the software creates a copy of it? Yes, it does. When you load a program, it goes into your computer's memory and is legally considered a copy. While the copy does not stay indefinitely, it does stay long enough to perform a certain task, and it can be, and has been, looked upon as a form of software piracy, as stupid as that sounds (Tysver, "Software Patents").

Tysver \”Software Patents\”) BBS (Bulletin Board Systems, small online services run by normal people) Sysops (system operators) are legally considered responsible for all the files that are available on their system (Elkin-Koren). While at first this seems like an obvious thing, after all, it is their computer, they should know whats on it. However, if you had ever run a BBS before, which I do, you’d know that its hard, if not impossible to know whats on your computer. Planet-X, my friend John Morse’s BBS, which I co-run, has 50 calls a day. Of those 50 calls, about 35 of them upload or download software.

Neither one of us is constantly monitoring the system, nor is there a way to make the computer automatically check what happens. Thus, we don't know about roughly half of the public files on the BBS. Let's take a look at an example of BBSs and copyright, and how they oh-so-beautifully coincide. Sega Ltd., maker of the Sega Genesis and Sega Game Gear, recently sued the Maphia BBS for making Sega Genesis ROMs publicly available in a download section. This section was a type of "digital rental", as it is commonly known in the BBS community.

Commercial software was publicly available for download on an on-your-honor system: you had to delete the files after a short period of time (24-48 hours). Unfortunately for the Maphia BBS, they did not have a disclaimer stating that the files must be deleted after a trial period, and thus Sega was able to sue them. Without the disclaimer, there was no proof that they had used the "digital rental" system, and thus it was not fair use, as it could be used for monetary gain by the downloader (not having to buy the game). Of course, it could be used for that purpose WITH the disclaimer, but the disclaimer does just that: it disclaims the BBS operator's responsibility for that copy of the software.

Another such case was between Playboy (I think we all know who that is) and the Frena BBS. The public file areas on the Frena BBS frequently contained image files, and more often than not, they were adult image files. Well, I don't know exactly how it happened, but Playboy somehow found out that this BBS had some scanned photos from a Playboy magazine, and because they have the copyright to all their photos, they were able to sue the operator of the Frena BBS.

The operator had no idea that there were any Playboy images on his system. Speaking of image files, they too can be a problem with software protection. Say you've got an image file that someone has copyrighted. You load it up in a photo-retouching program, add a big old goat in the background, and paint the sky red. Then you remove the artist's file name. Voila, the picture is now semi-legally copyrighted to you, as it has been significantly changed from the original, although I wouldn't recommend going to court over it.

All you have to do is change a very large portion of the image file's coding. Technically, darkening or blurring the image, changing the file format, or interlacing the file changes the file entirely, and thus it's yours. Sounds too easy? It is. Copyrights and patents are designed to help the media they protect, but in the case of technology, they are actually hindering it. CD-ROMs contain a lot of information and are the perfect medium for music.

A lesser-known medium, the Digital Video Disc, or DVD, is much more versatile, containing 26 times the storage capacity of a CD-ROM and 11,500 times more than a standard floppy disk, or about 17 gigabytes (the largest hard drives are 9 gigs). However, DVDs are not available to the public. Why? Because of the ease of copying them. We've all dubbed tapes; it's easy to do. However, we often opt for higher-quality originals, because there is always a bit of degradation in the copies (although it's very small now). With DVDs, a copy is exactly that: a copy.
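As a quick sanity check of the ratios quoted above, here is the arithmetic in Python, assuming a 650 MB CD-ROM and a 1.44 MB floppy (common figures for the period; the exact values are assumptions, not from the essay's sources):

    # Rough check of the capacity ratios quoted in the essay.
    dvd_mb = 17 * 1024        # 17 gigabytes expressed in megabytes
    cd_mb = 650               # typical CD-ROM capacity
    floppy_mb = 1.44          # standard 3.5" floppy capacity

    print(dvd_mb / cd_mb)     # about 27 -- matches the "26 times" figure
    print(dvd_mb / floppy_mb) # about 12,000 -- near the "11,500 times" figure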

No degradation, no reason to buy an original. All the big companies are really scared by this technology, because it will take another five bucks out of their pockets. DVDs would be one of the greatest advancements in the short history of computers, but because of the shadier uses they could be put to, we'll never see them. I like to compare it to the Internet: it's very useful, but it can be used for illegal purposes. You be the judge. Luckily, we may yet someday see DVDs, because several companies are developing copy protection schemes for them to stop the casual home hacker/copier.

Macrovision, for instance, is producing hardware for the DVD player that will make it incompatible with VCRs (the easiest platform to dub to, the equivalent of CD to audio tape). It will send output through the audio/video out ports that, when played on a TV, will appear normal, but when played through a VCR, will have color stripes running sideways across the screen. That is due to the difference between the ways the two work. So as you can see, current methods of protecting software are a hindrance on the software industry.

The problems outweigh the benefits, but with a new law, the industry would be able to keep the benefits and minimize the drawbacks. Instead of having to nitpick over who wrote something that did something similar, it would be back to who wrote something more powerful than the other guy, and that's what makes the industry great: competition. Oh, and I'd like to add that I broke copyright law a total of 13 times in the making of this report, when I made a copy of each reference with the school copying machine, although it was fair use, so I'm not in any trouble.

Cybercrime on Computerized Systems

This situation involves a large bank that has recently installed a new software system for handling all transactions and account storage. An employee at the company developing the software programmed a “back door” into the system, and got another employee to unknowingly install it. Some weeks later, millions were stolen from a number of accounts at the bank. This situation was chosen to highlight the amount of trust that large corporations place in programmers of critical systems.

Programmers are quite capable of abusing extremely large and important systems without leaving a trace, and it is surprising that this sort of situation does not happen more often in today's world. The paper provides an analysis of this type of cybercrime, possible ways in which such a crime could have been prevented, and the consequences of such crime in general. This paper shows that a complete reliance on a single computerized system makes it easier for such a cybercrime to occur. The focus of the Safebank investigation shifted back to the headquarters of Microsoft Corporation, reported the FBI.

The investigation had originally been conducted with the cooperation of international law agencies, in an attempt to track the location of the funds moving through accounts in Europe and the Caribbean. More recently the FBI reported, in a statement given Monday by case director Walter Navarre, that "evidence has been collected linking the crimes to an employee of the Microsoft Corporation." The Safebank incident began last Wednesday, October 17, 2001, when the management at a Safebank branch in Boston was contacted by a customer of the bank reporting that his account suddenly contained no more money.

There was no record of any transaction carried out on the account, but when backup records were checked, it was determined that the account had indeed contained the specified amount. Safebank spokeswoman Alicia Delrey said in an interview Monday that "Safebank had no indication that a transaction of any kind had taken place. The records showed a balance of approximately a half-million on one day, and the next day these funds were no longer present in the account." A comparison check conducted by the bank showed that similar actions had occurred on nearly two hundred other accounts.

All accounts affected in this way contained in the range of half a million to a million dollars. The problems were initially assumed to have been caused by a bug in the new transaction software installed by Safebank two weeks earlier. The developer of the software, Microsoft Corporation, was contacted in relation to the problem. At this point, one of the Geneva branches of the Swiss banking giant UBS contacted Safebank with reports of fifty-two major transfers to unidentified accounts. These transfers consisted of amounts that matched exactly the amounts missing from certain Safebank accounts. An international alert was dispatched to banks worldwide.

Within hours, a listing of accounts in foreign banks had been assembled that exactly matched the amounts missing from Safebank. The FBI was called in to investigate the incident, while all accounts indicated were frozen. Initial investigations indicated that the accounts had been opened under a variety of assumed names by a single individual. According to special investigator Shawn Murray, "although the accounts were not opened in person, we were able to determine, through reports given by bank employees and through bank terminal video recordings, that they were indeed opened by the same individual in all cases."

Investigations pointed to Wolfgang Schlitz, a former director of the Safebank transaction software project, as one suspect. According to FBI investigators, a current Microsoft employee, who is also a suspect, provided information pointing to Mr. Schlitz. Although Mr. Schlitz was unavailable for comment, the employee was identified as Bertrand Dupont, a senior programmer on the Safebank software project. Apparently, Mr. Dupont was, while programming, given a precompiled code object by Mr. Schlitz. The object was intended to be integrated into a specific part of the system handling transactions.

Mr. Dupont, in an interview yesterday, said, "He told me it was a set of more optimized transaction classes that the optimizations team had produced. He was the boss, and the explanation sounded perfectly reasonable, so I didn't suspect anything. The code worked fine, and I forgot all about it until now." The FBI investigation is currently centering on Mr. Dupont and Mr. Schlitz as possible suspects although, according to case director Walter Navarre, "We have not ruled out the possibility of other, as yet unidentified, collaborators."

"The scope of this crime is unprecedented; millions of dollars were taken without a trace. If it were not for the size of the transactions involved, we may never have noticed anything," commented industry analyst Lancolm Hayes. "We should take this as a strong argument for better security controls on safety-critical sectors of the development industry," he added. The current level of reliance on computerized systems has always elicited concern from those who see this dependence as a security risk. As the recent Safebank incident demonstrates, there is indeed cause for alarm.

The fact that the bank used a completely computerized system allowed a single individual with malicious intent to steal millions. The average amount stolen through computerized means is more than twenty times higher than the average taken through more conventional, "physical," crime [1]. Although it could be argued that banks implement safety measures, such as a marker or alert for large or suspicious transactions, all these transactions are computerized. The program actually carrying out the transfer can be modified not to issue such an alert, as happened in the Safebank case.
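To make the point concrete, here is a minimal sketch, under illustrative assumptions (the threshold, names, and structure are invented for this example, not taken from the case), of the kind of in-software alert the paragraph describes. The safeguard is itself just code, so whoever controls the code can silently remove it.

    # Illustrative sketch: a threshold alert that exists only in software.
    ALERT_THRESHOLD = 100_000   # flag transfers above this amount

    def transfer(amount, source, destination, audit_log):
        # An insider who controls the build could delete or bypass
        # this check, leaving no external record of the transfer.
        if amount > ALERT_THRESHOLD:
            audit_log.append((source, destination, amount))
        return {"from": source, "to": destination, "amount": amount}

    log = []
    transfer(500_000, "acct-001", "acct-xyz", log)
    print(log)   # the alert fired only because the check is still in place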

A complete reliance on computers has created more opportunities for cybercrime, reduced the ability to prevent this crime, and made the potential consequences of these crimes more serious. In order to evaluate this statement, I will be discussing different aspects of computer crime, relating specifically to the idea of malicious programming in the banking sector. Although there are many different types of cybercrime, focusing on this issue relates more strongly to the Safebank case. In addition to this, the paper will cover methods of halting or preventing this crime, and possible consequences, in relation to the Safebank incident.

The crime at Safebank was a cybercrime. Money was stolen through the system itself, without any physical aspect to the crime. The crime was rendered even more effective by the deliberate modification that prevented the system from recording the stolen money in its transaction records. As Mr. Hayes points out, "If it were not for the size of the transactions involved, we may never have noticed anything." If those committing this crime had decided to take very small amounts, say a few dollars, from a large number of accounts, there might never have been an investigation.

The fact that the bank relied entirely on computerized records to keep track of transactions resulted in a reduced ability to detect cybercrimes, thereby making them easier to commit. The crime is, in this case, an "inside job," since it was an employee or employees at Microsoft who were responsible. This type of crime is, in the present day, growing less common in comparison with other types of cybercrime such as external attacks. Statistics used to show that over 80% of all cybercrime was the result of inside operatives [2].

At the current time, however, this is no longer true. Polls by the Computer Security Institute show that the number of businesses citing the Internet as a frequent point of attack is "up from 59 per cent in 2000 to 70 per cent this year. The percentage of those reporting their internal systems as a frequent Achilles heel has dropped from 38 per cent to 31 per cent over the same period" [2]. The survey reported that, in 2001, 70% of all cybercrime was initiated from outside, rather than inside, the target [6].

External attacks are significant because they are conducted by people who usually do not have intimate knowledge of a system. The fact that these types of crimes are becoming more common indicates that it is becoming easier for common criminals without specific links to a company to commit cybercrimes. Although Safebank received wide publicity due to the size and global reach of the theft involved, many other similar cases of fraud go unreported. In the UK, at least four large internet banks have been the subject of cybercrime attacks.

These attacks involved losses of hundreds of thousands of pounds, but were mostly not reported due to the banks' worries that news coverage would damage their image [5]. These banks are, even more so than Safebank, completely dependent on computers for all aspects of their business. Whereas Safebank had employees and terminals, Internet banks operate almost entirely online. These banks are indeed more vulnerable than traditional banks, a vulnerability that comes from their reliance on computers as the way of both carrying out transactions and storing funds.

How can these types of computer crimes be prevented? In the case of Safebank, how could the modification to the system have been detected before it was released? There are no methods to effectively ensure that this happens. Safebank has no way of verifying that the software they receive is free of malicious code, because Safebank was probably unable to view the code itself; it received compiled executables. The issue here is one of trust; Safebank assumes that software from Microsoft is free of defects, but has no way of verifying that this is indeed the case.
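One partial mitigation, offered here as an illustration rather than anything from the paper, is to verify that a delivered binary matches a fingerprint the vendor published separately. Note what this does and does not buy: it detects tampering after delivery, but says nothing about malicious code already present in the vendor's original build, which is exactly the trust gap described above. The file name and digest below are hypothetical.

    # Fingerprinting a delivered binary with a cryptographic hash.
    import hashlib

    def fingerprint(path):
        with open(path, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()

    # Compare against a digest the vendor published out-of-band, e.g.:
    #   if fingerprint("transactions.bin") != published_digest: reject the build
    # This catches tampering in transit, not a malicious original build.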

Microsoft could perform a final evaluation of the code itself once the program is completed, but this would be time-intensive and costly, especially for a system like Safebank’s, which likely consists of millions of lines of code. Such an evaluation would give no complete assurances of security, because employees conducting the tests could themselves insert the malicious code. Other, stricter, version-control options are available, but with each layer of protection there is additional cost and time involved.

As with almost anything, there is a point at which it no longer becomes profitable to add additional security. Building a three-meter high wall around some property will cost more than a two-meter wall, but will provide almost exactly the same security, since a determined criminal can scale a wall of almost any height. This analogy relates well to software development. Adding additional security costs money, yet determined hackers can break almost any amount of security. The goal in most projects is, therefore, to create enough security to discourage the majority of hackers from attempting to break in.

The security approach is also only effective in the specific case of Safebank; most other types of attacks cannot be dealt with in this way. Prevention of cybercrime can also be assisted through education. Training can increase awareness of the potential for cybercrimes to occur, and of effective measures for eliminating or reducing the losses incurred from these crimes. Safebank had no way of knowing that the program was faulty, but if its employees had been more alert to the possibility of cybercrime threats, they might have caught and reacted to the transactions more quickly.

The main disadvantage of these training courses is that they are not complete solutions, and they are expensive, often costing several thousand dollars per week [4]. Although training could have reduced the losses, Safebank could not have prevented the crime through it. Another aspect of training is certification. At the current time, programmers are not required to have completed the certification courses present in many other industries [7]. Programmers could be required to take courses relating to the legal and ethical aspects of computers, in addition to certification for standard programming skills.

Although this would not deter a criminal set on a certain path of action, better knowledge of the potential consequences of cybercrime might make criminals think twice about committing this type of crime. Microsoft can operate with greater assurance of security if it knows that its employees are competent and informed in both the technical and ethical aspects of software creation. Insurance does not eliminate the threat of cybercrimes, but it does help cover damages. Cyberterrorism insurance is a relatively new concept.

Previously, insurance was designed to cover physical assets from damage in a fire or other similar event. Now, new forms of insurance protect specifically against cybercrime, and older insurance no longer covers digital damages [3]. Although the article does not indicate whether Safebank had cybercrime insurance, most large corporations vulnerable to cybercrimes have insurance policies that cover their losses. Again, although this method helps reduce losses for a corporation, it does nothing to prevent the attack itself. Cybercrime can mean huge losses in vital sectors such as banking and government.

The Safebank theft of several million dollars is nothing compared to the total cost of cybercrime. A survey conducted by the Computer Security Institute indicated total losses of $727 million, and this represented only one-third of those interviewed; the others did not wish to reveal their losses [6]. These figures are for the United States only; cybercrime is just as prevalent in other countries worldwide. According to US Attorney-General John Ashcroft, "Although there are no exact figures on the costs of cybercrime in America, estimates run into the billions of dollars each year" [8].

A second consequence of attacks on banking can be political instability. Groups with political motives may see banks as attractive targets for cyberterrorism. During the conflict between Israel and Palestine, "pro-Palestinian hackers have attacked the web sites of Israeli banking and financial institutions" [9]. As indicated previously, the ability to hack into a system is now much more widely available than it used to be. The disruption of a country's financial structures can be as devastating, if not more so, than a direct physical attack.

Cyberterrorism, with banks as targets, whether inside jobs like the Safebank case or external infiltrations, may become increasingly common. Other potential consequences of cybercrime are less quantifiable. Through the recent events, both Microsoft and Safebank have suffered disastrous consequences in terms of public relations. Customers will be less willing to use a bank that they know uses a faulty system. This is precisely the reason why the banks in the UK were reluctant to report their cybercrime losses. Customers of Microsoft will be less likely to purchase software that might contain such flaws.

This means a loss of revenue and potential losses of jobs at both Microsoft and Safebank. As the Safebank example shows, cybercrimes are now much easier to commit. The higher rate of outside attacks indicates that cybercrimes can now be performed by those in the general public, without any insider knowledge. At the same time, dependence on computers has reduced the ability to prevent cybercrimes, because crimes can no longer be detected as easily, and even when detected they are difficult to stop. Cybercrime causes billions of dollars in losses every year, a great cost to society.

This conclusion raises further questions about how much of this crime could be prevented. At what point do corporations decide that it is more profitable to invest in security than to suffer potential losses? Are the methods of combating cybercrime of this kind, as outlined in the body of this paper, sufficient? At the moment, the answer is no. As cybercrime becomes more prevalent, affects an increasingly large number of people, and causes increasingly larger amounts of damage, it is important to investigate ways of dealing with it, ways of reducing the risk associated with it, and ways of preventing it altogether.

History of the PC

Nothing epitomizes modern life better than the computer. For better or worse, computers have infiltrated every aspect of our society. Today computers do much more than simply compute: supermarket scanners calculate our grocery bill while keeping store inventory; computerized telephone switching centers play traffic cop to millions of calls and keep lines of communication untangled; and automatic teller machines (ATM) let us conduct banking transactions from virtually anywhere in the world. But where did all this technology come from and where is it heading?

To fully understand the impact computers have on our lives and the promise they hold for the future, it is important to understand the personal computer. The personal computer, also known as the PC, has evolved over the past several years into something that is now a part of everyday life. Personal computers were not always like this, though; they have undergone a great many significant changes that have allowed them to become as important as they are now. Before these improvements were made, the personal computer was not found in almost every single home across the world as it is today.

Not only has the personal computer positively affected the business world, it also serves as a means of entertainment. It's amazing that what used to be someone's dream is now a major tool that everyday life revolves around. There are many different companies that make personal computers today, but I would like to focus on Apple (including the Macintosh) and IBM, because it seems that it was these companies that launched the personal computer into its present-day use. Steven Wozniak and Steven Jobs, who were interested in electronics, were very good friends in high school.

Staying in contact after high school, both ended up dropping out of college and getting jobs for companies in Silicon Valley: Wozniak worked for Hewlett-Packard, while Jobs worked for Atari. Wozniak had been working in computer design for a while when, in 1976, he designed what would become the Apple I. This was Wozniak's first contribution to the personal computer. The Apple I was built in printed circuit board form when Jobs insisted it could be sold. In April of 1976, at the Homebrew Computer Club in Palo Alto, the Apple I made its debut, but few people took it very seriously.

The Apple I was based on the MOS Technology 6502 chip, whereas most other kit computers were built around the Intel 8080 (Apple 1). The Apple I was not very successful in 1976. It was sold through small retailers and included only the circuit board. A tape interface was sold separately, but you had to build your own case for the Apple I. The circuit board itself initially cost $667. It wasn't until 1977, when the Apple II debuted, that Apple began to take off. The Apple II was based on Wozniak's Apple I design, but had several additions.

The first improvement was a plastic case; this made the Apple II the first PC to come in any kind of casing. Second was the ability to display color graphics. It also included a larger ROM, more expandable RAM, and Integer BASIC hard-coded in the ROM for easier programming. With all this came two game paddles and a demo cassette for $1,298. The following year, Apple released a disk drive compatible with the Apple II. It was at this time that orders for Apple machines multiplied greatly.

Of course, after the Apple II came the Apple III. It had many improvements over the Apple II, such as a Synertek 8-bit 6502A processor running at speeds up to 2 MHz, 128K of RAM, 4K of ROM, a built-in 5.25" disk drive (a first for an Apple PC), and high-resolution graphics built into the motherboard. Still, this machine did not sell as well as the Apple II and ended up being discontinued in 1985 (Apple III). It was after Jobs' visit to Xerox PARC in 1979 that he and several other engineers began to work on the Lisa, which would redefine personal computing.

Jobs didn't prove to be a great project manager, so he was kicked off the project. Jobs used the 11% of Apple that he owned to take over someone else's project and began working on the Macintosh, which started as a $500 personal computer, but Jobs was determined to make it more (History 1981-1983). It wasn't until 1981 that IBM released its first PC. With the power of Big Blue, IBM's PC began to dominate the market. Jobs and his team would have to try very hard to outdo IBM, but he realized that until Apple made some changes, they would never be able to match IBM.

The PCs of Jobs' era are nothing in comparison to the ones today. Today we have computers that are faster and more effective. One example of the great improvements is the IBM PC 300PL. The 300PL has a Pentium III processor, which runs at 733 MHz, that is, 731 MHz faster than the Apple III. It also has a CD read/write drive that lets you store 650 MB of data, and SDRAM memory, which enhances the overall performance of the system (PC World 65). Another great example of the advancements in the PC is the Gateway Profile 2 CX.

It has an Intel Celeron processor, which runs at 500 MHz, but don't let that fool you; the rest of the package makes up for it. It has a 20 GB Ultra ATA hard drive, 64 MB of SDRAM, and a 6X DVD-ROM drive, which allows you to watch DVDs on the PC (PC World 73). And one last example of the advancements in the PC is the Gateway Performance 700. This computer is equipped with an Intel Pentium III processor at 700 MHz, 128 MB of SDRAM, a 27.3 GB Ultra ATA 66 7200 RPM hard drive, and an 8X DVD-ROM drive (PC World 73).

These improvements and many more have enabled people to enjoy and do more things that require computers at home, for an affordable price. Back in the late 70's and 80's, when the PC was first making its appearance, it was definitely not used for the same purposes as it is today. If you had told someone in the 70's or 80's that the PC would be used for entertainment purposes, they probably would not have believed it. The PC has more purposes today than it ever had. Now with the PC you are capable of watching DVD movies, playing games, chatting, or just surfing the Internet.

These are just the minor things that the PC is used for; it is capable of much more. For instance, it is now possible to keep track of your bank accounts through online banking. Another feature of the PC is online billing. This enables people to pay and receive bills in the comfort of their own home, without having to worry about whether a bill will make it to the company on time if it isn't mailed early enough. And for those people interested in stocks, there's no better or faster way to trade, sell, or buy stocks in the privacy and comfort of your own home than on your PC.

For all the students across the world, the PC also proves to be helpful in the educational field. For example, say you're doing research for your Computer Concepts paper and you can't seem to find enough information at the local library; all you have to do is get on your PC, get on the Internet, and search for whatever kind of information you want. All in all, the usage of the PC has grown greatly over the past several years, and it will most likely continue to grow. Thirty years ago you would have had trouble finding PCs in anyone's house or anywhere else, but that is not the case now.

Almost anywhere you go today, you are likely to come across a personal computer, directly or indirectly. Whether you are just going to McDonald's for a Big Mac and paying the cashier who rings you up on a personal computer, or looking for a car on the Web, you are dealing with personal computers. Personal computers are found all over the world today, and in almost every household. That in itself should be evidence enough that the personal computer is growing very rapidly and is of major importance to the world and way of life today.

Molecular Switches Essay

We live in the technology age. Nearly everyone in America has a computer or at least access to one. How big are the computers you are used to? Most are about 7″ by 17″ by 17″. That's a lot of space. These cumbersome units will soon be replaced by something smaller. Much smaller: we're talking about computers based on lone molecules. As far off as this sounds, scientists are already making significant inroads into researching the feasibility of this. Our present technology is composed of solid-state microelectronics based upon semiconductors. In the past few years, scientists have made momentous discoveries.

These advances were in molecular-scale electronics, which is based on the idea that molecules can be made into transistors, diodes, conductors, and other components of microcircuits (Scientific American). Last July, researchers from Hewlett-Packard and the University of California at Los Angeles announced that they had made an electronic switch from a layer of several million molecules of rotaxane. "Rotaxane is a pseudorotaxane. A pseudorotaxane is a compound consisting of cyclic molecules threaded by a linear molecule. It also has no covalent interaction.

In rotaxane, there are bulky blocking groups at each end of the threaded molecule" (Scientific American). The researchers linked many of these switches and came up with a rudimentary AND gate, a device which performs a basic logic function. As much of an achievement as this was, it was only a baby step. This million-molecule switch was too large to be useful and could only be used once. In 1999, researchers at Yale University created molecular memory out of just one molecule. This is thought to be the "last step down in size" of technology, because smaller units are not economical.
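For readers unfamiliar with logic gates, this minimal Python sketch shows the behavior an AND gate implements, whether it is built from silicon transistors or from molecular switches; the code itself is purely illustrative.

    # Truth table of an AND gate: output is 1 only when both inputs are 1.
    def and_gate(a, b):
        return 1 if (a == 1 and b == 1) else 0

    for a in (0, 1):
        for b in (0, 1):
            print(a, b, "->", and_gate(a, b))
    # 0 0 -> 0, 0 1 -> 0, 1 0 -> 0, 1 1 -> 1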

The memory was created through a process called "self-assembly", in which computer engineers "grow" parts and interconnections with chemicals (Physics News Update, 1999). This single-molecule memory is better than conventional silicon memory (DRAM) because it lives around one million times longer. "With the single molecule memory, all a general-purpose ultimate molecular computer needs now is a reversible single molecule switch," says Reed, the head researcher of the team. "I anticipate we will see a demonstration of one very soon" (Yale, 1999).

Reed was correct. Within a year, Cees Dekker and his colleagues at Delft University of Technology in the Netherlands had produced the first single-molecule transistor. Dekker won an innovation award from Discover magazine for the switch, which was also built from a lone molecule. The molecule they used was the carbon nanotube: a lattice of carbon atoms rolled up into a long, narrow tube, one billionth of a meter wide. These can conduct electricity or, depending on how the tube is twisted, they can act as semiconductors.

The semiconducting nanotube is the only active element in the transistor. The transistor works like its silicon relatives, but in much less space. Dekker did, however, emphasize that they had made only a prototype. Although it is "a technologically usable device," he says, there is still a long way to go. "The next steps include finding ways to place the nanotubes at the right locations in an electronic circuit, probably by attaching chemical guides that bind only to certain metals" (Discover). From there, we go back to Yale, where efforts were being put forth to make a better switch.

Mark Reed and his colleagues were at work on a different class of molecules. To make a switch, they inserted regions into the molecules that, when subjected to certain voltages, trapped electrons. If the voltage was varied, they could continuously change the state of the molecules from nonconducting to conducting, the requirements of a basic switch. Their device was composed of 1,000 nitroamine benzenethiol molecules between metal contacts. One interesting development was the finding that these microswitches indeed followed Moore’s Law.

Moore’s Law says that each new transistor chip contains approximately twice the capacity of its predecessor, and that chips come out 18 to 24 months apart. This describes a rising exponential curve in the development of transistors. Engineers can now put millions of transistors on a sliver of silicon just a few square centimeters in area. Moore’s Law also shows that even technology has its limits, as chips can get only so small and remain economically feasible (Physics News Update).
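
As a rough illustration of that exponential curve (our own arithmetic, with a hypothetical starting count rather than a figure from the sources), doubling every 18 to 24 months compounds quickly:

# Moore's Law sketch: transistor count doubling every 18-24 months.
# The starting count of 1,000,000 is purely hypothetical.
start = 1_000_000
for years in (2, 4, 6, 8, 10):
    slow = start * 2 ** (years * 12 / 24)  # doubling every 24 months
    fast = start * 2 ** (years * 12 / 18)  # doubling every 18 months
    print(f"{years} years: {slow:,.0f} to {fast:,.0f} transistors")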

Free electrons can take on energy levels from a continuous range of possibilities. But in atoms or molecules, electrons have energy levels that are quantized: they can take only one of a number of discrete values, like rungs on a ladder. This series of discrete energy values is a consequence of quantum theory and holds for any system in which the electrons are confined to an infinitesimal space. In molecules, electrons arrange themselves as bonds among atoms that resemble dispersed "clouds," called orbitals. The shape of an orbital is determined by the type and geometry of the constituent atoms. Each orbital is a single, discrete energy level for the electrons.
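
The textbook "particle in a box" model makes the ladder image concrete; this is a standard quantum-mechanics result, not something taken from the essay's sources. For an electron of mass m confined to a one-dimensional box of width L, the allowed energies are

E_n = n^2 h^2 / (8 m L^2),    n = 1, 2, 3, ...

where h is Planck's constant. The levels are discrete and scale as 1/L^2, which is why confinement to molecular dimensions quantizes the spectrum so strongly.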

Even the smallest conventional microtransistors in an integrated circuit are still far too large to quantize the electrons within them. In these devices the movement of electrons is governed by physical characteristics–known as band structures–of their constituent silicon atoms. What that means is that the electrons are moving in the material within a band of allowable energy levels that is quite large relative to the energy levels permitted in a single atom or molecule. This large range of allowable energy levels permits electrons to gain enough energy to leak from one device to the next.

And when these conventional devices approach the scale of a few hundred nanometers, it becomes extremely difficult to prevent the minute electric currents that represent information from leaking from one device to an adjacent one. In effect, the transistors leak the electrons that represent information, making it difficult for them to stay in the “off” state. The standard methods of chemical synthesis allow researchers to design and produce molecules with specific atoms, geometries and orbital arrangements.

Moreover, enormous quantities of these molecules are created at the same time, all of them absolutely identical and flawless. Such uniformity is extremely difficult and expensive to achieve in other batch-fabrication processes, such as the lithography-based process used to produce the millions of transistors on an integrated circuit. The methods used to produce molecular devices are the same as those of the pharmaceutical industry. Chemists start with a compound and then gradually transform it by adding prescribed reagents whose molecules are known to bond to others at specific sites.

The procedure may take many steps, but gradually the pieces come together to form a new potential molecular device with a desired orbital structure. After the molecules are made, researchers use analytical technologies such as infrared spectroscopy, nuclear magnetic resonance, and mass spectrometry to determine or confirm the structure of the molecules. The various technologies contribute different pieces of information about the molecule, including its molecular weight and the connection point or angle of a certain fragment (Physics News Update).

By combining the information, they determine the structure after each step as the new molecule is synthesized. Once the assembly process has been set in motion, it proceeds on its own to some desired end [see "Self-Assembling Materials," by George M. Whitesides; Scientific American, September 1995]. In his research, Reed uses self-assembly to attach extremely large numbers of molecules to a surface, typically a metal one.

When attached, the molecules, which are often elongated in shape, protrude up from the surface, like a vast forest with identical trees spaced out in a perfect array (Scientific American). Handy though it is, self-assembly alone will not suffice to produce useful molecular-computing systems, at least not initially. For some time, researchers will have to combine self-assembly with fabrication methods, such as photolithography, borrowed from conventional semiconductor manufacturing. In photolithography, light or some other form of electromagnetic radiation is projected through a stencil-like mask to create patterns of metal and semiconductor on the surface of a semiconducting wafer.

In their research they use photolithography to generate layers of metal interconnections and also holes in deposited insulating material. In the holes, they create the electrical contacts and selected spots where molecules are constrained to self-assemble. The final system consists of regions of self-assembled molecules attached by a mazelike network of metal interconnections. The molecular equivalent of a transistor that can both switch and amplify current is yet to be found. But researchers have taken the first steps by constructing switches, such as the twisting switch described earlier.

In fact, Jia Chen, a graduate student in Reed’s Yale group, observed impressive switching characteristics, such as an on/off ratio greater than 1,000, as measured by the current flow in the two different states. For comparison, the analogous device in the solid-state world, called a resonant tunneling diode, has an on/off ratio of around 100 (Yale Bulletin). Several challenges remain, however. "Foremost among them is the challenge of making a molecular device that operates analogously to a transistor. A transistor has three terminals, one of which controls the current flow between the other two.

Effective though it was, our twisting switch had only two terminals, with the current flow controlled by an electrical field. In a field-effect transistor, the type in an integrated circuit, the current is also controlled by an electrical field. But the field is set up when a voltage is applied to the third terminal" (Scientific American). Another problem with molecular switches is thermodynamics. A microprocessor with 10 million transistors and a clock cycle of half a gigahertz gives off about 100 watts, which is much hotter than a stovetop.

Finding the minimum amount of heat that a single molecular device emits would help establish the number of devices we could put on a chip or substrate of some kind. Operating at room temperature and at today’s speeds, this fundamental limit for a molecule is about 50 picowatts (50 millionths of a millionth of a watt). That suggests an upper limit to the number of molecular devices we can densely pack: about 100,000 times more than what is possible now with silicon microtransistors on a chip. Though that may seem like a vast improvement, it is far below the density that would be possible if we did not have to worry about heat.
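
A back-of-the-envelope check of these figures (our arithmetic only, using the numbers quoted above) lands on the order of 10^5 times more devices at the same heat budget, consistent with the rough factor in the text:

# Device counts at a fixed ~100 W heat budget, using figures from the text.
chip_power = 100.0            # watts dissipated by the 10-million-transistor chip
silicon_devices = 10e6        # transistors on that chip
molecular_power = 50e-12      # watts per molecular device (50 picowatts)

per_transistor = chip_power / silicon_devices     # about 10 microwatts each
molecular_devices = chip_power / molecular_power  # about 2e12 devices in 100 W
print(per_transistor, molecular_devices, molecular_devices / silicon_devices)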

The smaller you get, the more problems you come across. We are already encountering many problems in the fabrication of silicon-based chips. These problems will become worse with each step down in size until the chips are no longer useful (or until they no longer function). The bigger problem is that this is bound to occur before computer science is able to achieve its primary goal of creating a viable working "brain." This means that the possibility of creating artificial life forms, or "androids," is slim at this point, due to the expected impasse in technology.

History Of Computers

Only once in a lifetime will a new invention come about to touch every aspect of our lives. Such devices have changed the way we manage, work, and live. A machine that has done all this and more now exists in nearly every business in the United States. This incredible invention is the computer. The electronic computer has been around for over a half-century, but its ancestors have been around for 2000 years. However, only in the last 40 years has the computer changed American management to its greatest extent.

From the first wooden abacus to the latest high-speed microprocessor, the computer has changed nearly every aspect of management, and our lives, for the better. The earliest ancestor of the modern day computer is the abacus, which dates back almost 2000 years (Dolotta, 1985). It is simply a wooden rack holding parallel wires on which beads are strung. When these beads are moved along the wire according to programming rules that the user must memorize, all ordinary arithmetic operations can be performed on the abacus. This was one of the first management tools used.
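
As a loose illustration of the idea (ours, not from the essay's sources), each rod of an abacus holds one decimal digit, so a number is simply a list of bead counts:

# A number as abacus rods: one decimal digit per rod, least significant first.
def to_rods(n):
    rods = []
    while n > 0:
        rods.append(n % 10)
        n //= 10
    return rods or [0]

def from_rods(rods):
    return sum(digit * 10 ** place for place, digit in enumerate(rods))

print(to_rods(1994))             # [4, 9, 9, 1]
print(from_rods(to_rods(1994)))  # 1994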

The next innovation in computers took place in 1642, when Blaise Pascal invented the first digital calculating machine. It could only add numbers, and they had to be entered by turning dials. It was designed to help Pascal’s father, who was a tax collector, manage the town’s taxes (Beer, 1966). In the early 1800s, a mathematics professor named Charles Babbage designed an automatic calculating machine (Dolotta, 1985). It was steam powered and could store up to 1000 50-digit numbers. Built into his machine were operations that included everything a modern general-purpose computer would need.

It was programmed by, and stored data on, cards with holes punched in them, appropriately called punch cards. This machine would have been extremely useful to managers who dealt with large volumes of goods. With Babbage’s machine, managers could more easily calculate the large numbers accumulated by inventories. The only problem was that only one of these machines was ever built, thus making it difficult for all managers to use (Beer, 1966). After Babbage, people began to lose interest in computers. However, between 1850 and 1900 there were great advances in mathematics and physics that began to rekindle the interest.

Many of these new advances involved complex calculations and formulas that were very time consuming for human calculation. The first major use for a computer in the U. S. was during the 1890 census. Two men, Herman Hollerith and James Powers, developed a new punched-card system that could automatically read information on cards without human intervention (Dolotta, 1985). Since the population of the U. S. was increasing so fast, the computer was an essential tool for managers in tabulating the totals (Hazewindus, 1988).

These advantages were noted by commercial industries and soon led to the development of improved punch-card business-machine systems by International Business Machines, Remington-Rand, Burroughs, and other corporations (Chposky, 1988). By modern standards the punched-card machines were slow, typically processing from 50 to 250 cards per minute, with each card holding up to 80 digits. At the time, however, punched cards were an enormous step forward; they provided a means of input, output, and memory storage on a massive scale.

For more than 50 years following their first use, punched-card machines did the bulk of the world’s business computing (Jacobs, 1975). By the late 1930s punched-card machine techniques had become so well established and reliable that Howard Hathaway Aiken, in collaboration with engineers at IBM, undertook construction of a large automatic digital computer based on standard IBM electromechanical parts (Chposky, 1988). Aiken’s machine, called the Harvard Mark I, handled 23-digit numbers and could perform all four arithmetic operations (Dolotta, 1985). Also, it had special built-in programs to handle logarithms and trigonometric functions.

The Mark I was controlled from prepunched paper tape. Output was by card punch and electric typewriter. It was slow, requiring 3 to 5 seconds for a multiplication, but it was fully automatic and could complete long computations without human intervention. The outbreak of World War II produced a desperate need for computing capability, especially for the military (Dolotta, 1985). New weapons systems were produced which needed trajectory tables and other essential data. In 1942, John P. Eckert, John W. Mauchley, and their associates at the University of Pennsylvania decided to build a high-speed electronic computer to do the job.

This machine became known as ENIAC, for Electrical Numerical Integrator And Calculator (Chposky, 1988). It could multiply two numbers at the rate of 300 products per second, by finding the value of each product from a multiplication table stored in its memory. ENIAC was thus about 1,000 times faster than the previous generation of computers. ENIAC used 18,000 standard vacuum tubes, occupied 1800 square feet of floor space, and used about 180,000 watts of electricity. It used punched-card input and output. The ENIAC was very difficult to program because one had to essentially re-wire it to perform whatever task was wanted.
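
The table-lookup scheme can be sketched as follows; this is our simplified illustration of the idea, not a model of ENIAC's actual circuitry.

# Multiply via a stored one-digit multiplication table, digit by digit.
TABLE = {(a, b): a * b for a in range(10) for b in range(10)}

def multiply(x, y):
    total = 0
    for i, xd in enumerate(str(x)[::-1]):      # digits of x, least significant first
        for j, yd in enumerate(str(y)[::-1]):  # digits of y, least significant first
            total += TABLE[(int(xd), int(yd))] * 10 ** (i + j)
    return total

print(multiply(123, 456))  # 56088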

It was efficient in handling the particular programs for which it had been designed. ENIAC is generally accepted as the first successful high-speed electronic digital computer and was used in many applications from 1946 to 1955. However, the ENIAC was not accessible to managers of businesses (Beer, 1966). Mathematician John Von Neumann was very interested in the ENIAC. In 1945 he undertook a theoretical study of computation that demonstrated that a computer could have a very simple, fixed structure and yet be able to execute any kind of computation effectively by means of properly programmed control, without the need for any changes in hardware.

Von Neumann came up with incredible ideas for methods of building and organizing practical, fast computers. These ideas, which came to be referred to as the stored-program technique, became fundamental for future generations of high-speed digital computers and were universally adopted (Dolotta, 1985). The first wave of modern programmed electronic computers to take advantage of these improvements appeared in 1947. This group included computers using random access memory (RAM), which is a memory designed to give almost constant access to any particular piece of information (Dolotta, 1985).

These machines had punched-card or punched-tape input and output devices and RAMs of 1000-word capacity. Physically, they were much more compact than ENIAC: some were about the size of a grand piano and required 2500 small electron tubes. This was quite an improvement over the earlier machines. The first-generation stored-program computers required considerable maintenance, usually attained 70% to 80% reliable operation, and were used for 8 to 12 years (Hazewindus, 1988). Typically, they were programmed directly in machine language, although by the mid-1950s progress had been made in several aspects of advanced programming.

This group of machines included EDVAC and UNIVAC, the first commercially available computers. With this invention, managers had even more power to perform calculations for such things as statistical demographic data (Beer, 1966). Before this time, it was very rare for a manager of a larger business to have the means to process large numbers in so little time. The UNIVAC was developed by John W. Mauchley and John Eckert, Jr. in the 1950s. Together they had formed the Mauchley-Eckert Computer Corporation, America’s first computer company, in the 1940s.

During the development of the UNIVAC, they began to run short on funds and sold their company to the larger Remington-Rand Corporation. Eventually they built a working UNIVAC computer. It was delivered to the U. S. Census Bureau in 1951 where it was used to help tabulate the U. S. population (Hazewindus, 1988). Early in the 1950s two important engineering discoveries changed the electronic computer field. The first computers were made with vacuum tubes, but by the late 1950s computers were being made out of transistors, which were smaller, less expensive, more reliable, and more efficient (Dolotta, 1985).

In 1959, Robert Noyce, a physicist at the Fairchild Semiconductor Corporation, invented the integrated circuit, a tiny chip of silicon that contained an entire electronic circuit. Gone was the bulky, unreliable, but fast machine; now computers began to become more compact, more reliable and have more capacity. These new technical discoveries rapidly found their way into new models of digital computers. Memory storage capacities increased 800% in commercially available machines by the early 1960s and speeds increased by an equally large margin (Jacobs, 1975).

These machines were very expensive to purchase or to rent and were especially expensive to operate because of the cost of hiring programmers to perform the complex operations the computers ran. Such computers were typically found in large computer centers operated by industry, government, and private laboratories staffed with many programmers and support personnel. By 1956, 76 of IBM’s large computer mainframes were in use, compared with only 46 UNIVAC’s (Chposky, 1988).

In the 1960s efforts to design and develop the fastest possible computers with the greatest capacity reached a turning point with the completion of the LARC machine for Livermore Radiation Laboratories by the Sperry-Rand Corporation, and the Stretch computer by IBM. The LARC had a core memory of 98,000 words and multiplied in 10 microseconds. Stretch was provided with several ranks of memory having slower access for the ranks of greater capacity, the fastest access time being less than 1 microsecond and the total capacity in the vicinity of 100 million words.

During this time the major computer manufacturers began to offer a range of computer capabilities, as well as various computer-related equipment (Jacobs, 1975). These included input means such as consoles and card feeders; output means such as page printers, cathode-ray-tube displays, and graphing devices; and optional magnetic-tape and magnetic-disk file storage. These found wide use in management for such applications as accounting, payroll, inventory control, ordering supplies, and billing.

Central processing units for such purposes did not need to be very fast arithmetically and were primarily used to access large amounts of records on file. The greatest number of computer systems were delivered for the larger applications, such as in hospitals for keeping track of patient records, medications, and treatments given. They were also used in automated library systems and in database systems such as the Chemical Abstracts system, where computer records now on file cover nearly all known chemical compounds (Dolotta, 1985).

The trend during the 1970s was, to some extent, away from extremely powerful, centralized computational centers and toward a broader range of applications for less-costly computer systems (Jacobs, 1975). Most continuous-process manufacturing, such as petroleum refining and electrical-power distribution systems, began using computers of relatively modest capability for controlling and regulating their activities. In the 1960s the programming of applications problems was an obstacle to the self-sufficiency of moderate-sized on-site computer installations, but great advances in applications programming languages removed these obstacles.

Applications languages became available for controlling a great range of manufacturing processes, for computer operation of machine tools, and for many other tasks. In 1971 Marcian E. Hoff, Jr., an engineer at the Intel Corporation, invented the microprocessor, and another stage in the development of the computer began (Chposky, 1988). A new revolution in computer hardware was now well under way, involving miniaturization of computer-logic circuitry and of component manufacture by what are called large-scale integration techniques.

In the 1950s it was realized that scaling down the size of electronic digital computer circuits and parts would increase speed and efficiency and improve performance (Jacobs, 1975). However, at that time the manufacturing methods were not good enough to accomplish such a task. About 1960, photoprinting of conductive circuit boards to eliminate wiring became highly developed. Then it became possible to build resistors and capacitors into the circuitry by photographic means. In the 1970s entire assemblies, such as adders, shifting registers, and counters, became available on tiny chips of silicon.

In the 1980s very large scale integration (VLSI), in which hundreds of thousands of transistors are placed on a single chip, became increasingly common (Dolotta, 1985). Many companies, some new to the computer field, introduced in the 1970s programmable minicomputers supplied with software packages (Jacobs, 1975). The size-reduction trend continued with the introduction of personal computers, which are programmable machines small enough and inexpensive enough to be purchased and used by individuals (Beer, 1966). One of the first such machines was introduced in January 1975.

Popular Electronics magazine provided plans that would allow any electronics wizard to build his own small, programmable computer for about $380. The computer was called the Altair 8800. Its programming involved pushing buttons and flipping switches on the front of the box. It didn’t include a monitor or keyboard, and its applications were very limited. Even so, many orders came in for it, and several famous owners of computer and software manufacturing companies got their start in computing through the Altair (Jacobs, 1975).

For example, Steve Jobs and Steve Wozniak, founders of Apple Computer, built a much cheaper, yet more productive version of the Altair and turned their hobby into a business. After the introduction of the Altair 8800, the personal computer industry became a fierce battleground of competition. IBM had long been the computer industry standard. They held their position as the standard when they introduced their first personal computer, the IBM Model 60, in 1975 (Chposky, 1988). However, the newly formed Apple Computer company was releasing its own personal computer, the Apple II.

The Apple I was the first computer designed by Jobs and Wozniak in Wozniak’s garage, but it was not produced on a wide scale. Software was needed to run the computers as well. Microsoft developed a Disk Operating System (MS-DOS) for the IBM computer, while Apple developed its own software (Chposky, 1988). Because Microsoft had now set the software standard for IBMs, every software manufacturer had to make their software compatible with Microsoft’s. This would lead to huge profits for Microsoft. The main goal of the computer manufacturers was to make the computer as affordable as possible while increasing speed, reliability, and capacity.

Nearly every computer manufacturer accomplished this and computers popped up everywhere. Computers were in businesses keeping track of even more inventories for managers. Computers were in colleges aiding students in research. Computers were in laboratories making complex calculations at high speeds for scientists and physicists. The computer had made its mark everywhere in management and built up a huge industry (Beer, 1966). The future is promising for the computer industry and its technology. The speed of processors is expected to double every year and a half in the coming years (Jacobs, 1975).

As manufacturing techniques are further perfected, the prices of computer systems are expected to fall steadily. However, since microprocessor technology will keep improving, its higher costs will offset the drop in price of older processors. In other words, the price of a new computer will stay about the same from year to year, but the technology it buys will steadily increase. Since the end of World War II, the computer industry has grown from a standing start into one of the biggest and most profitable industries in the United States (Hazewindus, 1988).

It now comprises thousands of companies, making everything from multi-million dollar high-speed supercomputers to printout paper and floppy disks. It employs millions of people and generates tens of billions of dollars in sales each year. Surely, the computer has impacted every aspect of people’s lives (Jacobs, 1975). It has affected the way people work and play. It has made everyone’s life easier by doing difficult work for people. The computer truly is one of the most incredible inventions in history ever to influence management, and life.

The Computer – Incredible Invention

Only once in a lifetime will a new invention come about to touch every aspect of our lives. Such a device that changes the way we work, live, and play is a special one, indeed. A machine that has done all this and more now exists in nearly every business in the U. S. and one out of every two households (Hall, 156). This incredible invention is the computer. The electronic computer has been around for over a half-century, but its ancestors have been around for 2000 years. However, only in the last 40 years has it changed American society.

From the first wooden abacus to the latest high-speed microprocessor, the computer has changed nearly every aspect of people’s lives for the better. The very earliest existence of the modern day computer’s ancestor is the abacus. These date back to almost 2000 years ago. It is simply a wooden rack holding parallel wires on which beads are strung. When these beads are moved along the wire according to “programming” rules that the user must memorize, all ordinary arithmetic operations can be performed (Soma, 14).

The next innovation in computers took place in 1642 when Blaise Pascal invented the first "digital calculating machine". It could only add numbers and they had to be entered by turning dials. It was designed to help Pascal’s father who was a tax collector (Soma, 32). In the early 1800’s, a mathematics professor named Charles Babbage designed an automatic calculation machine. It was steam powered and could store up to 1000 50-digit numbers. Built into his machine were operations that included everything a modern general-purpose computer would need.

It was programmed by, and stored data on, cards with holes punched in them, appropriately called punchcards. His inventions were failures for the most part because of the lack of precision machining techniques used at the time and the lack of demand for such a device (Soma, 46). After Babbage, people began to lose interest in computers. However, between 1850 and 1900 there were great advances in mathematics and physics that began to rekindle the interest (Osborne, 45). Many of these new advances involved complex calculations and formulas that were very time consuming for human calculation.

The first major use for a computer in the U. S. was during the 1890 census. Two men, Herman Hollerith and James Powers, developed a new punched-card system that could automatically read information on cards without human intervention (Gulliver, 82). Since the population of the U. S. was increasing so fast, the computer was an essential tool in tabulating the totals. These advantages were noted by commercial industries and soon led to the development of improved punch-card business-machine systems by International Business Machines (IBM), Remington-Rand, Burroughs, and other corporations.

By modern standards the punched-card machines were slow, typically processing from 50 to 250 cards per minute, with each card holding up to 80 digits. At the time, however, punched cards were an enormous step forward; they provided a means of input, output, and memory storage on a massive scale. For more than 50 years following their first use, punched-card machines did the bulk of the world’s business computing and a good portion of the computing work in science (Chposky, 73).

By the late 1930s punched-card machine techniques had become so well established and reliable that Howard Hathaway Aiken, in collaboration with engineers at IBM, undertook construction of a large automatic digital computer based on standard IBM electromechanical parts. Aiken’s machine, called the Harvard Mark I, handled 23-digit numbers and could perform all four arithmetic operations. Also, it had special built-in programs to handle logarithms and trigonometric functions. The Mark I was controlled from prepunched paper tape. Output was by card punch and electric typewriter.

It was slow, requiring 3 to 5 seconds for a multiplication, but it was fully automatic and could complete long computations without human intervention (Chposky, 103). The outbreak of World War II produced a desperate need for computing capability, especially for the military. New weapons systems were produced which needed trajectory tables and other essential data. In 1942, John P. Eckert, John W. Mauchley, and their associates at the University of Pennsylvania decided to build a high-speed electronic computer to do the job. This machine became known as ENIAC, for “Electrical Numerical Integrator And Calculator”.

It could multiply two numbers at the rate of 300 products per second, by finding the value of each product from a multiplication table stored in its memory. ENIAC was thus about 1,000 times faster than the previous generation of computers (Dolotta, 47). ENIAC used 18,000 standard vacuum tubes, occupied 1800 square feet of floor space, and used about 180,000 watts of electricity. It used punched-card input and output. The ENIAC was very difficult to program because one had to essentially re-wire it to perform whatever task he wanted the computer to do.

It was, however, efficient in handling the particular programs for which it had been designed. ENIAC is generally accepted as the first successful high-speed electronic digital computer and was used in many applications from 1946 to 1955 (Dolotta, 50). Mathematician John von Neumann was very interested in the ENIAC. In 1945 he undertook a theoretical study of computation that demonstrated that a computer could have a very simple, fixed structure and yet be able to execute any kind of computation effectively by means of properly programmed control, without the need for any changes in hardware.

Von Neumann came up with incredible ideas for methods of building and organizing practical, fast computers. These ideas, which came to be referred to as the stored-program technique, became fundamental for future generations of high-speed digital computers and were universally adopted (Hall, 73). The first wave of modern programmed electronic computers to take advantage of these improvements appeared in 1947. This group included computers using random access memory (RAM), which is a memory designed to give almost constant access to any particular piece of information (Hall, 75).

These machines had punched-card or punched-tape input and output devices and RAMs of 1000-word capacity. Physically, they were much more compact than ENIAC: some were about the size of a grand piano and required 2500 small electron tubes. This was quite an improvement over the earlier machines. The first-generation stored-program computers required considerable maintenance, usually attained 70% to 80% reliable operation, and were used for 8 to 12 years. Typically, they were programmed directly in machine language, although by the mid-1950s progress had been made in several aspects of advanced programming.

This group of machines included EDVAC and UNIVAC, the first commercially available computers (Hazewindus, 102). The UNIVAC was developed by John W. Mauchley and John Eckert, Jr. in the 1950’s. Together they had formed the Mauchley-Eckert Computer Corporation, America’s first computer company in the 1940’s. During the development of the UNIVAC, they began to run short on funds and sold their company to the larger Remington-Rand Corporation. Eventually they built a working UNIVAC computer. It was delivered to the U. S.

Census Bureau in 1951 where it was used to help tabulate the U. S. population (Hazewindus, 124). Early in the 1950s two important engineering discoveries changed the electronic computer field. The first computers were made with vacuum tubes, but by the late 1950’s computers were being made out of transistors, which were smaller, less expensive, more reliable, and more efficient (Shallis, 40). In 1959, Robert Noyce, a physicist at the Fairchild Semiconductor Corporation, invented the integrated circuit, a tiny chip of silicon that contained an entire electronic circuit.

Gone was the bulky, unreliable, but fast machine; now computers began to become more compact, more reliable and have more capacity (Shallis, 49). These new technical discoveries rapidly found their way into new models of digital computers. Memory storage capacities increased 800% in commercially available machines by the early 1960s and speeds increased by an equally large margin. These machines were very expensive to purchase or to rent and were especially expensive to operate because of the cost of hiring programmers to perform the complex operations the computers ran.

Such computers were typically found in large computer centers-operated by industry, government, and private laboratories-staffed with many programmers and support personnel (Rogers, 77). By 1956, 76 of IBM’s large computer mainframes were in use, compared with only 46 UNIVAC’s (Chposky, 125). In the 1960s efforts to design and develop the fastest possible computers with the greatest capacity reached a turning point with the completion of the LARC machine for Livermore Radiation Laboratories by the Sperry-Rand Corporation, and the Stretch computer by IBM.

The LARC had a core memory of 98,000 words and multiplied in 10 microseconds. Stretch was provided with several ranks of memory having slower access for the ranks of greater capacity, the fastest access time being less than 1 microsecond and the total capacity in the vicinity of 100 million words (Chposky, 147). During this time the major computer manufacturers began to offer a range of computer capabilities, as well as various computer-related equipment.

These included input means such as consoles and card feeders; output means such as page printers, cathode-ray-tube displays, and graphing devices; and optional magnetic-tape and magnetic-disk file storage. These found wide use in business for such applications as accounting, payroll, inventory control, ordering supplies, and billing. Central processing units (CPUs) for such purposes did not need to be very fast arithmetically and were primarily used to access large amounts of records on file.

The greatest number of computer systems were delivered for the larger applications, such as in hospitals for keeping track of patient records, medications, and treatments given. They were also used in automated library systems and in database systems such as the Chemical Abstracts system, where computer records now on file cover nearly all known chemical compounds (Rogers, 98). The trend during the 1970s was, to some extent, away from extremely powerful, centralized computational centers and toward a broader range of applications for less-costly computer systems.

Most continuous-process manufacturing, such as petroleum refining and electrical-power distribution systems, began using computers of relatively modest capability for controlling and regulating their activities. In the 1960s the programming of applications problems was an obstacle to the self-sufficiency of moderate-sized on-site computer installations, but great advances in applications programming languages removed these obstacles. Applications languages became available for controlling a great range of manufacturing processes, for computer operation of machine tools, and for many other tasks (Osborne, 146).

In 1971 Marcian E. Hoff, Jr., an engineer at the Intel Corporation, invented the microprocessor and another stage in the development of the computer began (Shallis, 121). A new revolution in computer hardware was now well under way, involving miniaturization of computer-logic circuitry and of component manufacture by what are called large-scale integration techniques. In the 1950s it was realized that "scaling down" the size of electronic digital computer circuits and parts would increase speed and efficiency and improve performance.

However, at that time the manufacturing methods were not good enough to accomplish such a task. About 1960 photoprinting of conductive circuit boards to eliminate wiring became highly developed. Then it became possible to build resistors and capacitors into the circuitry by photographic means (Rogers, 142). In the 1970s entire assemblies, such as adders, shifting registers, and counters, became available on tiny chips of silicon. In the 1980s very large scale integration (VLSI), in which hundreds of thousands of transistors are placed on a single chip, became increasingly common.

Many companies, some new to the computer field, introduced in the 1970s programmable minicomputers supplied with software packages. The size-reduction trend continued with the introduction of personal computers, which are programmable machines small enough and inexpensive enough to be purchased and used by individuals (Rogers, 153). One of the first such machines was introduced in January 1975. Popular Electronics magazine provided plans that would allow any electronics wizard to build his own small, programmable computer for about $380 (Rose, 32).

The computer was called the Altair 8800. Its programming involved pushing buttons and flipping switches on the front of the box. It didn’t include a monitor or keyboard, and its applications were very limited (Jacobs, 53). Even so, many orders came in for it, and several famous owners of computer and software manufacturing companies got their start in computing through the Altair. For example, Steve Jobs and Steve Wozniak, founders of Apple Computer, built a much cheaper, yet more productive version of the Altair and turned their hobby into a business (Fluegelman, 16).

After the introduction of the Altair 8800, the personal computer industry became a fierce battleground of competition. IBM had long been the computer industry standard. They held their position as the standard when they introduced their first personal computer, the IBM Model 60, in 1975 (Chposky, 156). However, the newly formed Apple Computer company was releasing its own personal computer, the Apple II (the Apple I was the first computer designed by Jobs and Wozniak in Wozniak’s garage, and was not produced on a wide scale).

Software was needed to run the computers as well. Microsoft developed a Disk Operating System (MS-DOS) for the IBM computer while Apple developed its own software system (Rose, 37). Because Microsoft had now set the software standard for IBMs, every software manufacturer had to make their software compatible with Microsoft’s. This would lead to huge profits for Microsoft (Cringley, 163). The main goal of the computer manufacturers was to make the computer as affordable as possible while increasing speed, reliability, and capacity.

Nearly every computer manufacturer accomplished this and computers popped up everywhere. Computers were in businesses keeping track of inventories. Computers were in colleges aiding students in research. Computers were in laboratories making complex calculations at high speeds for scientists and physicists. The computer had made its mark everywhere in society and built up a huge industry (Cringley, 174). The future is promising for the computer industry and its technology. The speed of processors is expected to double every year and a half in the coming years.

As manufacturing techniques are further perfected, the prices of computer systems are expected to fall steadily. However, since microprocessor technology will keep improving, its higher costs will offset the drop in price of older processors. In other words, the price of a new computer will stay about the same from year to year, but the technology it buys will steadily increase (Zachary, 42). Since the end of World War II, the computer industry has grown from a standing start into one of the biggest and most profitable industries in the United States.

It now comprises thousands of companies, making everything from multi-million dollar high-speed supercomputers to printout paper and floppy disks. It employs millions of people and generates tens of billions of dollars in sales each year (Malone, 192). Surely, the computer has impacted every aspect of people’s lives. It has affected the way people work and play. It has made everyone’s life easier by doing difficult work for people. The computer truly is one of the most incredible inventions in history.

Computer Crime Essay

Advances in telecommunications and in computer technology have brought us to the information revolution. The rapid advancement of the telephone, cable, satellite and computer networks, combined with technological breakthroughs in computer processing speed and information storage, has led us to the latest revolution, and also the newest style of crime: "computer crime". The following information will provide you with evidence that, beyond reasonable doubt, computer crime is on the increase in the following areas: hackers, hardware theft, software piracy and the information highway.

This information is gathered from expert sources such as researchers, journalists, and others involved in the field. Computer crimes are often heard about in the news. When one famous bank robber was asked why he robbed banks, he replied, "Because that’s where the money is." Today’s criminals have learned where the money is. Instead of settling for a few thousand dollars in a bank robbery, those with enough computer knowledge can walk away from a computer crime with many millions. The National Computer Crimes Squad estimates that between 85 and 97 percent of computer crimes are not even detected.

Fewer than 10 percent of all computer crimes are reported, mainly because organizations fear that their employees, clients, and stockholders will lose faith in them if they admit that their computers have been attacked. And few of the crimes that are reported are ever solved. Hacking was once a term used to describe someone with a great deal of knowledge about computers. Since then the definition has seriously changed. In every neighborhood there are criminals, so you could say that hackers are the criminals of the computers around us.

There has been a great increase in the number of computer break-ins since the Internet became popular. How serious is hacking? In 1989, the Computer Emergency Response Team, an organization that monitors computer security issues in North America, said that it had handled 132 cases involving computer break-ins. In 1994 alone it had some 2,341 cases; as the quick check below shows, that is roughly a 1,700% increase in just 5 years. An example is 31-year-old computer expert Kevin Mitnick, who was arrested by the FBI for stealing more than $1 million worth of data and about 20,000 credit card numbers through the Internet.
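
The growth figure is easy to verify (our arithmetic):

cases_1989 = 132
cases_1994 = 2341
increase = (cases_1994 - cases_1989) / cases_1989 * 100
print(round(increase), "% increase")  # about 1673% in five years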

In Vancouver, the RCMP have charged a teenager with breaking into a university computer network. There have been many cases of computer hacking; another took place here in Toronto, when Adam Shiffman was charged with nine counts of fraudulent use of computers and eleven counts of mischief to data, charges that carry a maximum sentence of 10 years in jail. We see after reading the above information that hacking has been on the increase. With hundreds of cases every year dealing with hacking, this is surely a problem, and one that is increasing very quickly.

Ten years ago hardware theft was almost impossible because of the size and weight of computer components, and because components were so expensive that many companies had security guards to protect them from theft. Today this is no longer the case: computer hardware theft is on the increase. Since the invention of the microchip, computers have become much smaller and easier to steal, and now, with portable and laptop computers that fit in your briefcase, it is even easier.

While illegal high-tech information hacking gets all the attention, it is computer hardware theft that has become the latest trend in corporate crime. Access to valuable equipment has skyrocketed, and black-market demand for parts has increased. In factories, components are stolen from assembly lines for underground resale to distributors. In offices, entire systems are snatched from desktops by individuals seeking to install a home PC. In 1994, Santa Clara, Calif., recorded 51 such burglaries. That number doubled in just the first six months of 1995.

Gunmen robbed workers at an Irvine, Calif., computer parts company, stealing $12 million worth of computer chips. At a large advertising agency in London, thieves came in over a weekend and took 96 workstations, leaving the company to recover from an $800,000 loss. A Chicago manufacturer had computer parts stolen from the back of a delivery van as the driver was waiting to enter the loading dock. It took less than two minutes for the doors to open, but that was enough time for thieves to get away with thousands of computer components.

Hardware theft has certainly become a problem in the last few years; with cases popping up each day, we see that hardware theft is on the increase. As the network of computers gets bigger, so will the number of software thieves. Electronic software theft over the Internet and other online services costs U.S. software companies about $2.2 billion a year. The Business Software Alliance surveyed a number of countries in 1994, estimating piracy losses for 77 countries totaling more than $15.2 billion.

Dollar-loss estimates due to software piracy in the 54 countries surveyed last year show an increase of $2.1 billion, from $12.8 billion in 1993 to $14.9 billion in 1994. An additional 23 countries surveyed this year brings the 1994 worldwide total to $15.2 billion. With such large numbers, we can see that software piracy is on the increase. Many say that the Internet is great; that is true, but there is also a bad side of the Internet that is hardly ever noticed.

Crime on the Internet is increasing dramatically. Many act as if copyright law, privacy law, broadcasting law, and laws against spreading hatred mean nothing there. There are many different kinds of crime on the Internet, such as child pornography, credit card fraud, software piracy, invasion of privacy, and the spreading of hatred. There have been many cases of child pornography on the Internet, mainly because people find it very easy to transfer images over the Internet without getting caught.

Child pornography on the Internet has more than doubled since 1990; an example of this is Alan Norton of Calgary, who was charged with being part of an international porn ring. Credit card fraud has caused many problems for people and for corporations that have credit information in their databases. With banks going online in the last few years, criminals have found ways of breaking into databases and stealing thousands of credit cards and information on their clients.

In the past few years thousands of clients have reported millions of credit card transactions that they know nothing about. Invasion of privacy is a real problem with the Internet, and it is one of the things that turns many away from it. With hacking sites on the Internet, it is now easy to download electronic mail (e-mail) readers that allow you to break into servers and read other people’s incoming mail. Many sites now carry these e-mail readers, and invasions of privacy have increased since they appeared. Spreading hatred has also become a problem on the Internet.

This information can be easily accessed by going to any search engine, for example http://www.webcrawler.com, and searching for "KKK"; this will bring up thousands of sites that contain information on the "KKK". As we can see, with the freedom of the Internet, people can easily incite hatred online. After reading this information, we see that crime of all kinds is going on over the Internet. The above information provides enough proof that, without doubt, computer crime is on the increase in many areas, such as hacking, hardware theft, software piracy and the Internet.

Hacking can be seen in everyday news, where big corporations are often victims of hackers. Hardware theft has become more popular because of the value of computer components. Software piracy is a huge problem, with about $15 billion lost each year. Finally, the Internet is both good and bad, but with credit card fraud and child pornography going on, there is a lot more bad than good. We see that computer crime is on the increase, and something must be done to stop it.

The Most Basic Computer Systems

The ending of the millennium could bring many problems to our technological society; we have grown to rely on the most basic computer systems to make our lives convenient. In our attempts to streamline the programming of computer systems, we created a monster by using only the final two digits to represent a four-digit year. Programmers were pleased that they could save a couple of bytes of memory by cutting dates back to two-digit numbers. With the turning of the odometer on December 31st, 1999, we could be in for some major inconveniences in our daily lives.
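
A minimal sketch of the failure mode (illustrative code of ours, not drawn from any real system): with only two digits stored, simple date arithmetic breaks the moment the century turns.

# Two-digit year arithmetic, as in much legacy code.
def years_between(yy_start, yy_end):
    return yy_end - yy_start  # no century information available

print(years_between(85, 99))  # 14  -- correct within the 1900s
print(years_between(85, 0))   # -85 -- the Y2K bug: the year 2000 stored as 00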

Pat Robertson, religious broadcaster of the 700 Club television show, said, "we are looking at a man-made global crisis of such magnitude that nobody can assess it". The Christian Broadcasting Network has even issued a brochure about the millennium bug. Some religious leaders are saying that this could possibly be the end of the world: mankind would initiate Armageddon with its own technology (Szabo). Economist Ed Yardeni of Deutsche Bank in New York is at one extreme in predicting a 70 percent chance of a major recession (Samuelson).

The problem, dubbed the "Y2K bug", has been deemed so serious that a U.S. Senate committee has been appointed to investigate it. Experts anticipate that computers controlling everything from banks to elevators, power grids, and automobiles will go on the fritz due to the date confusion. Theoretically, this could mean an extended power loss in the middle of winter, and thousands of people could lose their lives, whether from hypothermia or starvation; just imagine the mass confusion that could ensue (Meeks).

The United States has been furiously updating its computer-related systems, but we house only a small part of the problem. People sometimes forget all the other countries in the world that use computer systems; whether or not they’ll be prepared remains to be seen. The total global cost of curing the Y2K bug is estimated to be in the neighborhood of 1.3 trillion dollars. Economies may be crippled across the globe, causing a tidal wave effect (MSNBC News). It’s not so much how to fix the date problem; it’s finding the incorrect two-digit dates contained in the millions of lines of computer program code.

Government agencies are going to be some of the hardest hit; for example, the estimated cost for the Internal Revenue Service to fix 88,000 programs with 60 million lines of computer code is in the neighborhood of a billion dollars. If the deadline is not met, late refunds, unprocessed returns, or faulty penalties for taxpayers could be the result, if forecasters are correct (Charbonneau). Pat Robertson interviewed Edward Yourdon, a leading software consultant and co-author of Time Bomb 2000: What the Year 2000 Computer Crisis Means to You, who said, "Well, it’s fairly simple to explain.

For the last 40 years, we’ve been deliberately programming computers to keep track of only the last two digits of the year, because everybody knows the first two digits are 19. This is 1998, the next year is 1999, and the year after that is 00. Unfortunately, the computers will generally think that it’s 1900, rather than 2000, and as a result will begin making a whole series of mistakes, ranging from fairly simple to possibly catastrophic. We’ve been seeing one example lately, with the credit cards that are coming out now with a "00" expiration date, which a few restaurants and stores think is a credit card that expired in 1900" (Robertson).
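
One common family of repairs, described here as a general technique rather than anything a particular vendor did, was "windowing": pick a pivot and interpret two-digit years on either side of it as 20xx or 19xx.

# Windowing: years below the pivot become 20xx, the rest 19xx.
PIVOT = 50  # the pivot year is an arbitrary, illustrative choice

def expand_year(yy):
    return 2000 + yy if yy < PIVOT else 1900 + yy

print(expand_year(0))   # 2000 -- the "00" credit card no longer reads as 1900
print(expand_year(98))  # 1998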

In recent news reports the cost of fixing the Y2K bug seems to escalate every day. In early December 1998, federal officials from the Office of Management and Budget and the Council on Year 2000 Conversion estimated that fixing the government’s part of the problem would cost U.S. citizens 6.4 billion dollars (Hamblen). In August 1998 the estimate was 5.4 billion, so it seems logical that the price tag could reach the 8 or 9 billion-dollar mark before the end of 1999.

In a quarterly report released on December 8th, 1998, the Office of Management and Budget and the Council on Year 2000 Conversion stated that of 6,696 federal mission-critical systems, 61% were year 2000 compliant, and that only six federal agencies (the Departments of Defense, Energy, Health and Human Services, Transportation, and State, and the Agency for International Development) were making adequate progress (Hamblen). The legitimacy of the year 2000 problem has never really been doubted by those who understand how computer systems operate; it is just a wonder that we procrastinated this long before taking any action on it.

Maybe it’s because people deny what they don’t like; the issue lacked political sex appeal; people assumed someone would fix the problem for us. If our current actions had started back in 1995, very few of us would be panicking, and we could all be preparing to enjoy the New Year’s celebration. Well, it’s a little late now. Here is an excerpt from Robert Samuelson’s article, "Y2K Denial," that really sums up what is happening with this issue: The year 2000 computer glitch may be the ultimate vindication of a 1959 essay titled The Two Cultures and the Scientific Revolution by the British scientist and novelist C. P. Snow.

The rise of science and technology, Snow said, was splintering society into two broad groups: those who did and those who didn’t understand science. Increasingly, these groups couldn’t relate or talk to each other. As they drift apart, he wrote, "no society is going to be able to think with wisdom." Well, the Year 2000 (or Y2K) problem is a splendid example of unwisdom, but Snow’s essay foretold the larger cause. Most people have no conception of science and technology, he wrote. The general public is "tone-deaf," he continued, "over an immense range of intellectual experience, [and as] with the tone-deaf, they don’t know what they miss."

So the technologists making the warnings could not really talk with their intended audience. The result is that whatever damage Y2K does, the wound will be mostly self-inflicted (Samuelson). Many people do not understand what a great impact this could have on our everyday lives. Imagine going to the ATM to get money, only to be told you have a zero-dollar balance. Something I have wanted to try is talking on the phone at about five minutes to midnight and then having the year change from 1999 to 1900; conceivably this could be billed as a 100-year phone call.

Even at AT&T’s rate of 10 cents a minute, that phone call, at roughly 52 million minutes, could cost somewhere in the neighborhood of five million dollars. A number of things could happen; for the moment these are just theories, and who knows, January 1st, 2000 could come without a problem. We could also spend much of the next year wondering whether the Y2K problem is a genuine menace or just a bad techno-joke. One thing is for sure: we have to wait and see what happens and be careful not to create our own disaster.
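As a rough illustration of the phone-call scenario, here is a hedged sketch of a hypothetical billing system that keeps only a two-digit year field; the rate is the one quoted above, and the duration logic is an assumption, not any carrier’s actual code:

```python
# Sketch of the "100-year phone call": a billing system that reconstructs
# full years by prefixing 19 to a two-digit year field.
MINUTES_PER_YEAR = 365.25 * 24 * 60

call_start_yy = 99   # call placed in 1999
call_end_yy = 0      # call ends in "00", read back as 1900

start_year = 1900 + call_start_yy   # 1999
end_year = 1900 + call_end_yy       # 1900, not 2000
billed_minutes = (end_year - start_year) * MINUTES_PER_YEAR  # -99 years!

print(f"Billed minutes: {billed_minutes:,.0f}")
print(f"At $0.10/min: ${abs(billed_minutes) * 0.10:,.2f}")
# A naive system billing the absolute value would charge about $5.2 million.
```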

Hackers, Good or Evil

Since the introduction of personal computers in the 1970s, the art of computer hacking has grown along with the changing role of computers in society. Computers have taken over our lives. People could hardly function without them: our electricity is run by computers, the government could not function without computers, and the list goes on. Hackers are people who illegally gain access to, and sometimes tamper with, information in a computer system. Due to recent media coverage and corporate interest, hackers’ activities are now looked down on by society as criminal.

Despite the growing trend of hacking, very little research has been done on the hacking world and its culture. The image of a computer hacker has grown from a harmless nerd into a vicious techno-criminal. In reality, most hackers are not out to destroy the world, and the hackers in today’s society are not bored teenagers. Imagine this: you are driving along the road and suddenly you see something spectacular. Now imagine that you are not allowed to deviate from your course to check it out. This is what a so-called hacker faces.

Just imagine that you saw an injured person on the side of the road; in this analogy, you are not allowed to help the injured person. A hacker is not allowed to explore like everyone else in the world. A hacker is not allowed to help fix potential security holes. The term hacker can have many meanings. The most visible to the public is the person pirating software and breaking into corporate networks to destroy information. This is the public misconception of a hacker. Back in the UNIX days, a hack was simply a quick and dirty way of doing something.

In fact, hackers are well-educated people. In “Hackers intensify fears of industrial espionage,” Mark Gembicki reports that “the typical hacker used to be 14 to 16 years of age, white male, somewhat of an introvert . . . However, this is no longer the case. . . Our hacker profile . . . [is that] the hackers are around 30-33, white male again, professional” (Drumheller). Many of the hackers of today are probably the grown-up fourteen- to sixteen-year-olds of the past, except that now they make enough money to purchase expensive computer equipment. They are well educated and have an interest in technology.

The majority of the hackers of today are around thirty years old and well educated, and they are not all out to destroy computer systems and break into national security. Although hacking is a growing trend in our society, it is not one that is accepted in the United States, or in any other country for that matter. Hacking is an international phenomenon that cuts across race, gender, ethnic background, and education level. Hackers have always been considered different and have never been accepted in society. In the early days, hackers were basically just computer experts.

Nowadays “hacker” is used to mean the same thing as “cracker”: a person who pirates software or breaks into systems maliciously. The media, of course, never prints the good things hackers do. Many hackers provide a service to companies by letting the company know about security holes before a rival exploits them. Most hackers want nothing more than to simply learn. A hacker has an extreme thirst for knowledge, but not in the traditional subjects; technology and anything new interest hackers. In fact, most security experts start out by learning and hacking. Still, the bad view of hackers is not completely false.

There are hackers out there who will do their best to harm any system they can; national security agencies document these bad hackers as dangerous because they may gain access to classified information. Patricia Irving, president of a small business which creates biological and chemical defense technology, says, “Our technologies are being used for national security type purposes, and the U.S. government has a concern about what might be happening in countries that might not be friendly toward the United States or with terrorist groups inside and outside of this country.”

Both governments and companies are forced to pay large amounts of money to try to make their sites safe and impossible for hackers to break into. However, most hackers are not going to harm a government or business. Genuine hackers hack only for the joy of knowledge; a rush like no other is felt after finally gaining access to a site or a computer. They feel most information should be free. They do not look at hacking as stealing; they see hacking as borrowing information. The good hackers, however, do understand the right to privacy and do not mess with people’s private matters. Hackers believe knowledge is power.

Therefore, they are in constant pursuit of power. Hackers are a growing trend, or problem, whichever way one sees it, and this underground culture will not disappear anytime soon. In fact, it is constantly growing as the number of users on the Internet keeps increasing; http://www.coainc.cjb.net/ is just one site where a person is offered an introduction to hacking and an invitation to join a hacker group. In short, I will briefly describe the basic methods and present a little of the FYI on hacking itself. Hackers may use a variety of ways to hack into a system.

First, if the hacker is experienced and smart, the hacker will use Telnet to access a shell on another machine so that the risk of getting caught is lower than doing it from their own system. (This is complicated to explain; the simplest way to put it is that the hacker sends a command through the Internet, and the machine the Telnet session is running on executes the command.) The ways in which the hacker will break into the system are:

1) Guessing/cracking passwords. This is where the hacker takes guesses at the password or runs a crack program against the password protecting the system (a minimal sketch of this idea appears after this list).

2) Finding back doors. This is another way in which the hacker may get access to the system: the hacker tries to find flaws in the system they are trying to enter.

3) Using a program called a WORM. This program is specially written to suit the need of the user. It continually tries to connect to a machine over 100 times a second until eventually the system lets it in and the worm executes its program. The payload could be anything from grabbing password files to deleting files, depending on what it has been programmed to do.
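By way of illustration of the first method, here is a minimal, hedged sketch of how a dictionary-style crack program works; the stored hash and the wordlist are made-up test values, not real credentials, and real tools of the era ran the same loop against stolen password files:

```python
# Illustrative-only sketch of a dictionary "crack program": hash each
# candidate word and compare it with a stored hash.
import hashlib

stored_hash = hashlib.sha256(b"letmein").hexdigest()  # pretend stolen hash
wordlist = ["password", "123456", "qwerty", "letmein", "dragon"]

for candidate in wordlist:
    if hashlib.sha256(candidate.encode()).hexdigest() == stored_hash:
        print("Password recovered:", candidate)
        break
else:
    print("No match in wordlist")
```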

Regarding protection, the only way that you or a company can stop a hacker is by not having your computer connected to the net. This is the only sure-fire way in which you can stop a hacker entering your system, mainly because hackers use a phone line to access the system. If it is possible for one person to access the system, then it is possible for a hacker to gain access to it. One of the main problems is that major companies need to be networked and accessible over the net so that employees can do overdue work or so that people can look up things about that company.

Also, major companies network their offices so that they can access data from different locations. One way companies try to prevent hackers from gaining access is a program called a firewall. A firewall is a program that stops connections from other servers to the firewall server. This is very effective in stopping hackers entering the system, though it is not a foolproof way of stopping them, as it can be broken and hackers can get in. Still, it is a very good way of protecting your system on the Internet.
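To make that simplified description concrete, here is a minimal sketch of the allow-list idea behind such a filter; the addresses and the function are hypothetical, and real firewalls match on ports, protocols, and much more than a source address:

```python
# Minimal sketch of the allow-list idea behind a packet filter: refuse
# connections from any source not explicitly trusted.
ALLOWED_SOURCES = {"10.0.0.5", "10.0.0.6"}  # trusted internal machines

def accept_connection(source_ip: str) -> bool:
    """Return True only if the connecting host is on the allow list."""
    return source_ip in ALLOWED_SOURCES

for ip in ("10.0.0.5", "203.0.113.99"):
    print(ip, "->", "accepted" if accept_connection(ip) else "dropped")
```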

A little on consequences and hacking cases. Some of the major hacks that have been committed have been done by young teens aged between 14 and 18. These computer geniuses, as they are known, have expert knowledge of what they are doing and also know the consequences, though the consequences do not really enter their minds while they are doing it. One such hack occurred on February 10, 1997, and again on February 14, 1997, when Portuguese hackers launched a political attack on the web page of the Indonesian government, focusing on that country’s continued oppression of East Timor.

The attack was online for about 3 hours, from 7:00 PM to 10:00 PM (Portuguese time), at the web site of the Department of Foreign Affairs, Republic of Indonesia. The hackers did not delete or change anything; they said, “We just hack pages.” Another major hack occurred on April 1, 1981, by a single user. This hacker, who was situated in an East Coast brokerage house, was interested in the stock market, so he purchased $100,000 worth of shares. Then he hacked into the stock market’s main computers and stole $80 million.

The hacker was eventually caught, although $53 million was never recovered. On Wednesday, March 5, 1997, the home page of the National Aeronautics and Space Administration was hacked and its contents changed by a group known as H4G1S. The group changed the webpage and left a little message for all. It said: “Gr33t1ngs fr0m th3 m3mb3rs 0f H4G1S. Our mission is to continue where our colleagues the ILF left off. During the next month, we the members of H4G1S will be launching an attack on corporate America.

All who profit from the misuse of the Internet will fall victim to our upcoming reign of digital terrorism. Our privileged and highly skilled members will stop at nothing until our presence is felt nationwide. Even your most sophisticated firewalls are useless. We will demonstrate this in the upcoming weeks.” The homepage of the United States Air Force was also hacked and its contents changed. The page had been altered completely: the hackers had inserted pornographic pictures captioned “this is what we are doing to you,” with “screwing you” under the image.

In changing the page, the hackers showed their views on the political system. One other major hack was committed by a 16-year-old boy in Europe, who broke into the British Air Force’s systems and downloaded confidential information on ballistic missiles. The boy did this because he was interested in the missiles and wanted to know more about them; he was fined a sum of money. In conclusion, it can be said that hackers are sophisticated and very talented when it comes to the use of a computer.

Hacking is a process of learning, not of following any manual; hackers learn as they go, using trial and error. Most people who say they are hackers are not. Real hackers do not delete or destroy any information on the systems they hack. Hackers hack because they love the thrill of getting into a system that is supposedly unable to be entered. Overall, hackers are smart and cause little damage to the systems they enter. So hackers are not really the terrorists they are so often portrayed as.