In recent years, there has been an abundance of new technologies in the information systems field, and these technologies have altered the development process itself. Information systems have evolved from flat, single-level databases into three-dimensional modeling, virtual reality, and multimedia systems. In the early days of information systems, the demand was simply for data storage and retrieval, with no real role for artificial intelligence. However, as the 21st century approaches, business has taken on an entirely different character, and the need for specialized information systems has grown immensely.
This demand for information technology reaches all areas of business: corporations, law, medicine, science, and even small business. In addition, the World Wide Web and the Internet have added a communications dimension, and most information systems in use today require, at the very least, a measure of Internet capability. In order to understand the changes in these development processes, the history of databases should be examined. Database management systems began in the 1950s with what is known as the first generation: file systems on tape.
The major task of any computer in those days was to process data under the control of a program, which primarily meant calculating, counting, and other simple tasks. Second-generation databases, file systems on disk, allowed computers to be used in dialogue mode as well as batch mode. The development of magnetic disks allowed for more sophisticated file systems, making multiple access possible. These first two generations of DBMS were characterized by the availability of file systems only; strictly speaking, they were the forerunners of database systems, the foundations on which later systems were built.
An important characteristic of these systems was the static association of particular data sets (files) with the individual programs that operated on them. This arrangement brought a number of problems: high redundancy between files; inconsistencies when one program made changes that were not carried over to the others; inflexibility in the face of changing application requirements; low programmer productivity, since program maintenance was expensive; and difficulty in adopting and maintaining standards for coding and data formats.
The third generation, pre-relational databases, started in the 1960s and continued into the 1970s. This generation is characterized by the introduction of a distinction between logical and physical information, along with a parallel need to manage large collections of data. Data models were used for the first time to describe physical structures from a logical point of view. With this distinction between logical and physical information, systems were developed that could integrate all the data of a given application into one collection.
The fourth generation, relational databases, began in the 1980s and produced database systems that could store data redundancy-free under central control, with a clear distinction between the physical and the logical data model. Systems based on relational modeling appeared during this period; they offer a high degree of physical data independence and powerful query languages. Less of the system is visible to the user, with changes to physical storage taking place in the background.
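To make physical data independence and set-oriented access concrete, the following is a minimal sketch using Python's built-in sqlite3 module as a stand-in for a relational DBMS; the table and column names are invented for illustration. The application states what data it wants, not how or where the rows are stored.

```python
import sqlite3

# An in-memory relational database; the physical storage details are
# hidden behind the SQL interface.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE employee (
        emp_id  INTEGER PRIMARY KEY,
        name    TEXT NOT NULL,
        dept    TEXT NOT NULL,
        salary  REAL NOT NULL
    )
""")
conn.executemany(
    "INSERT INTO employee (name, dept, salary) VALUES (?, ?, ?)",
    [("Ada", "Engineering", 72000.0),
     ("Grace", "Engineering", 81000.0),
     ("Edgar", "Sales", 54000.0)],
)

# A declarative, set-oriented query: a whole group of rows is requested
# at once, with no record-at-a-time navigation and no mention of files,
# indexes, or storage layout.
for dept, avg_salary in conn.execute(
    "SELECT dept, AVG(salary) FROM employee GROUP BY dept"
):
    print(dept, round(avg_salary, 2))
```

The same query would continue to work unchanged if the DBMS reorganized its files or added an index behind the scenes, which is precisely the physical data independence described above.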
A shift from record orientation to set orientation marks this fourth generation. As of 1991, a fifth generation, post-relational, was predicted, which we are currently experiencing and perhaps surpassing. This generation extends database technology to other kinds of applications; the development of extensible systems, logic-oriented systems, and object-oriented systems is part of it. R. G. Cattell speaks of the changes seen in the last fifteen years: The past decade has seen major changes in the computing industry. There has been a widespread move from centralized computing to networked workstations on every desk.
We have seen an entirely new generation of software aimed at exploiting workstation technology, particularly in engineering, scientific and office applications. In database systems, there have been major changes in products for business applications, including the widespread acceptance of relational DBMSs. However, existing commercial DBMSs, both small-scale and large-scale, have proven inadequate for applications such as computer-aided design, software engineering, and office automation; new research and development in database systems has been necessary.
The very nature of these new object-oriented databases has caused changes right down to the programming level. As we near the end of this century, designers are looking at databases that can predict the side effects of medicines, reducing the need for human trial subjects. Other programs are being designed to take in architectural data and check building integrity. Car manufacturers are able to input design data and experiment with three-dimensional models of stress factors and damage.
With so much new technology emerging every day, the need has grown for standardized protocols and for ways to store all of this data. Mark Hammond (PC Week) describes one new development in standardization: IBM's DRDA (Distributed Relational Database Architecture), a standard interoperability protocol for databases and applications. DRDA was developed in 1989 and is now publicly available and ready for use. Data warehousing is another new development on the information systems front, and it is in fact the culmination of several new developments in data technology. Gabrielle Gagnon identifies these developments.
They include entity-relationship modeling, heuristic searches, mass data storage, neural networks, multiprocessing, and natural-language interfaces. She goes on to say that the data warehouse is a centralized, integrated repository of information, one that can provide a vital competitive edge for product development. There are several types of data warehouses, including the operational data store (ODS); the data mart, which is of value in analyzing sales; and the enterprise warehouse, which can combine centralized and distributed approaches. There are some new packages being put on the market for entity-relationship modeling.
LogicWorks is marketing one such package, called ERwin. It uses the ER diagram notation introduced by Peter Chen and produces logical representations of data for relational databases. ERwin is a full-fledged relational database designer; it lets the user define and type attributes. The user defines the primary keys, but ERwin automatically assigns foreign keys based on the type of relationships the user establishes between entities. ERwin also has many of the features of high-end dictionary products.
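As a rough illustration of what this kind of tool produces, the sketch below (hand-written for this discussion, not ERwin output, with invented entity and column names) turns a simple two-entity ER diagram, Customer and Order linked by a one-to-many "places" relationship, into relational tables: each entity receives a primary key, and the relationship becomes a foreign key on the many side, the kind of assignment a modeling tool can derive from the cardinality the user specifies.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # enforce referential integrity

# Entity: Customer. The designer names the primary key.
conn.execute("""
    CREATE TABLE customer (
        customer_id INTEGER PRIMARY KEY,
        name        TEXT NOT NULL
    )
""")

# Entity: Order. The one-to-many "places" relationship between Customer
# and Order is realized as a foreign key on the "many" side.
conn.execute("""
    CREATE TABLE customer_order (
        order_id    INTEGER PRIMARY KEY,
        customer_id INTEGER NOT NULL REFERENCES customer(customer_id),
        order_date  TEXT NOT NULL
    )
""")

# The foreign key keeps the data consistent: an order that refers to a
# nonexistent customer is rejected by the database itself.
conn.execute("INSERT INTO customer (name) VALUES ('Acme Corp')")
conn.execute(
    "INSERT INTO customer_order (customer_id, order_date) "
    "VALUES (1, '1998-05-01')"
)
```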
Voice recognition is one technology that has been in use for many years, but only on a limited basis. As the end of the 20th century approaches, voice recognition is seen as a promising and useful tool for computer science. Voice technology is a valuable time saver for individuals, a necessary tool for the disabled, and has several practical uses in business. In Esther Schindler's book, The Computer Speech Book, she talks about the need for voice recognition systems: Right now, to communicate with any computer, you have to learn how to use essentially arbitrary hardware and software. You have to learn how to type. You must master the intricate details of an operating system.
You need to learn what a file is, to discover what object to click on, and to understand why in the world that should ever matter. People are most comfortable expressing their thoughts with words, speech and language. Any other method we use to communicate with a computer is thus second-rate, unnatural, and inefficient. Despite the success of computers, until we can talk directly to them and convince the computer to do what we say, we will always be one step behind. (Schindler 1996) With the advent of voice recognition technology, the information systems development process is again radically altered.
Voice recognition involves a whole new set of requirements and protocols. The goal of voice recognition technology is effective, hands-free speech communication, allowing the user to operate a computer without a keyboard. Perhaps there will be a time when the keyboard and mouse become obsolete. The major technical challenge in speech recognition is to provide a high degree of accuracy while supporting continuous speech; improving speaker independence and vocabulary size is of equal importance. According to Esther Schindler,
Speech will become more and more a part of computing, and as it does so, the lines between getting work done and conscious computing will blur. The speed at which this change occurs will be based on the rate at which the technology becomes cheaper, faster, smaller, and more efficient, and solves people's problems. As the various schools of computer speech technology improve in what they can do within their own fields (faster and more accurate speech recognition, or more understandable speech synthesis, for example), they will have to, and will, converge their technologies into more products and ever more useful ones.
(Schindler 1996) Tracey Mayor believes there is a strong future for voice technology in areas that require hands-free operation. Material handlers in factories will be able to issue voice commands, giving them mobility; the unskilled computer operator will be able to run a voice-enabled stand-alone workstation. Industrial inspectors will be able to use the technology as well, using voice instead of pen and keyboard. Speech recognition will be valuable in the airline industry, both in operations and in flight, and it may also have applications on the trading floors of the stock exchange.
The long-term outlook for voice recognition technology is also promising. It is anticipated that speech recognition will merge with natural language processing, using both statistical models and natural-language grammar structures to produce high-quality recognition and synthesis. Speech recognition may also become incorporated into virtual reality, and there will be a gradual evolution from text-to-speech to concept-to-speech. At some point, speech recognition systems may have enough artificial intelligence to distinguish a question from a statement, or to request more information if the user does not state something clearly.
One area of voice recognition technology that has experienced tremendous growth is telephony, which lets a person speak to a computer over the telephone. Charles Schwab & Co. was a pioneer in this area: the company has a program that allows clients to phone in and check stock prices or buy and sell mutual funds. Instead of punching in numbers, the client says the name or trading symbol of a company or mutual fund. The Voice Broker system handles over 50,000 calls every day, and the cost to the company per call is about one tenth of what it would be if one of its operators or agents had to handle the call.
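A service of this kind ultimately comes down to mapping a recognized utterance onto an action. The fragment below is a simplified, hypothetical sketch of that final step (the company names, symbols, and prices are invented, and the transcript is assumed to come from some speech recognizer, not from Schwab's actual system): once the caller's words have been converted to text, the service looks up the company by name or trading symbol and composes a spoken reply.

```python
# Hypothetical lookup table: spoken company names and trading symbols
# map to the same quote record. All values are invented for illustration.
QUOTES = {
    "international business machines": ("IBM", 104.25),
    "ibm": ("IBM", 104.25),
    "acme mutual fund": ("ACMFX", 18.40),
}

def handle_utterance(transcript: str) -> str:
    """Turn a recognized utterance into the response read back to the caller."""
    key = transcript.strip().lower()
    if key not in QUOTES:
        # Echoes the long-term goal mentioned above: ask for clarification
        # rather than failing silently.
        return "I did not recognize that company or fund. Please say it again."
    symbol, price = QUOTES[key]
    return f"{symbol} is currently trading at {price:.2f} dollars per share."

# Example: the recognizer has already transcribed the caller's speech.
print(handle_utterance("International Business Machines"))
print(handle_utterance("IBM"))
```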