Hollywood and Computer Animation

Hollywood has gone digital, and the old ways of doing things are dying. Animation and special effects created with computers have been embraced by television networks, advertisers, and movie studios alike. Film editors, who for decades worked by painstakingly cutting and gluing film segments together, are now sitting in front of computer screens. There, they edit entire features while adding sound that is not only stored digitally, but also has been created and manipulated with computers. Viewers are witnessing the results of all this in the form of stories and experiences that they never dreamed of before.

Perhaps the most surprising aspect of all this, however, is that the entire digital effects and animation industry is still in its infancy. The future looks bright.

How It Was

In the beginning, computer graphics were as cumbersome and as hard to control as dinosaurs must have been in their own time. Like dinosaurs, the hardware systems, or muscles, of early computer graphics were huge and ungainly. The machines often filled entire buildings. Also like dinosaurs, the software programs, or brains, of computer graphics were hopelessly underdeveloped.

Fortunately for the visual arts, the evolution of both the brains and the brawn of computer graphics did not take eons. It has instead taken only three decades to move from science fiction to current technological trends. With computers out of the stone age, we have moved to the leading edge of the silicon era. Imagine sitting at a computer without any visual feedback on a monitor. There would be no spreadsheets, no word processors, not even simple games like solitaire. This is what it was like in the early days of computers.

The only way to interact with a computer at that time was through toggle switches, flashing lights, punch cards, and Teletype printouts.

How It All Began

In 1962, all this began to change. In that year, Ivan Sutherland, a Ph.D. student at the Massachusetts Institute of Technology (MIT), created the science of computer graphics. For his dissertation, he wrote a program called Sketchpad that allowed him to draw lines of light directly on a cathode ray tube (CRT). The results were simple and primitive: a cube, a series of lines, and groups of geometric shapes. It offered an entirely new vision of how computers could be used.

In 1964, Sutherland teamed up with Dr. David Evans at the University of Utah to develop the world's first academic computer graphics department. Their goal was to attract only the most gifted students from across the country by creating a unique department that combined hard science with the creative arts. They knew they were starting a brand-new industry and wanted people who would be able to lead that industry out of its infancy. Out of this unique mix of science and art, a basic understanding of computer graphics began to grow. Algorithms for creating solid objects and for their modeling, lighting, and shading were developed.
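The shading work of this era is still recognizable in today's renderers; both Gouraud and Phong shading, for instance, came out of the Utah program. As a rough illustration of the idea, here is a minimal sketch in Python of the simplest such lighting model, Lambertian diffuse shading, where a surface's brightness falls off with the angle between its normal and the light direction. The vectors and constants below are illustrative placeholders, not values from any historical system.

```python
import math

def normalize(v):
    """Scale a vector to unit length."""
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def lambert_shade(normal, light_dir, light_intensity=1.0, albedo=0.8):
    """Diffuse brightness at a surface point: albedo * intensity * max(0, N.L)."""
    n = normalize(normal)
    l = normalize(light_dir)
    # Clamp at zero so surfaces facing away from the light stay dark.
    cos_theta = max(0.0, sum(a * b for a, b in zip(n, l)))
    return albedo * light_intensity * cos_theta

# Surface facing straight up, light arriving at 45 degrees: ~0.566
print(lambert_shade(normal=(0.0, 1.0, 0.0), light_dir=(0.0, 1.0, 1.0)))
```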

These are the roots on which virtually every aspect of today's computer graphics industry is based. Everything from desktop publishing to virtual reality finds its beginnings in the basic research that came out of the University of Utah in the 1960s and 1970s. During this time, Evans and Sutherland also founded the first computer graphics company. Aptly named Evans & Sutherland (E&S), the company was established in 1968 and rolled out its first computer graphics systems in 1969. Up until this time, the only computers that could create pictures were custom-designed for the military and prohibitively expensive.

E&S's computer system could draw wireframe images extremely rapidly and was the first commercial "workstation" created for computer-aided design (CAD). It found its earliest customers in the automotive and aerospace industries.

Times Were Changing

Throughout its early years, the University of Utah's Computer Science Department was generously supported by a series of research grants from the Department of Defense. The 1970s, with their anti-war and anti-military protests, brought increasing restrictions on the flow of such academic grants, which had a direct impact on the Utah department's ability to carry out research.

Fortunately, as the program wound down, Dr. Alexander Schure, founder and president of the New York Institute of Technology (NYIT), stepped forward with his dream of creating computer-animated feature films. To accomplish this task, Schure hired Edwin Catmull, a University of Utah Ph.D., to head the NYIT computer graphics lab and then equipped the lab with the best computer graphics hardware available at the time. When completed, the lab boasted over $2 million worth of equipment. Many of the staff came from the University of Utah and were given free rein to develop both two- and three-dimensional computer graphics tools.

Their goal was to produce a full-length computer-animated feature film. The effort, which began in 1973, produced dozens of research papers and hundreds of new discoveries, but in the end, it was far too early for such a complex undertaking. The computers of the time were simply too expensive and too underpowered, and the software was not nearly developed enough. In fact, the first full-length computer-generated feature film, Pixar's Toy Story, was not completed until 1995. By 1978, Schure could no longer justify funding such an expensive effort, and the lab's funding was cut back.

The irony is that had the Institute decided to patent more of its researchers' discoveries than it did, it would control much of the technology in use today. Fortunately for the computer industry as a whole, however, this did not happen. Instead, the research was made available to whoever could make good use of it, thus accelerating the technology's development.

Industry's First Attempts

As NYIT's influence started to wane, the first wave of commercial computer graphics studios began to appear.

Film visionary George Lucas (creator of the Star Wars and Indiana Jones trilogies) hired Catmull from NYIT in 1978 to start the Lucasfilm Computer Development Division, and over a half-dozen computer graphics studios around the country opened for business. While Lucas's computer division began researching how to apply digital technology to filmmaking, the other studios began creating flying logos and broadcast graphics for various corporations, including TRW, Gillette, and the National Football League, and for television programs such as "The NBC Nightly News" and "ABC World News Tonight."

Although it was a dream of these initial computer graphics companies to make movies with their computers, virtually all of the early commercial computer graphics were created for television. It was, and still is, easier and far more profitable to create graphics for television commercials than for film. A typical frame of film requires many more computer calculations than a similar image created for television, while the per-second budget for film work brings in perhaps only one-third as much income.
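To put rough numbers on that gap: a common 2K film-scan frame is about 2048 × 1556 pixels, while a standard-definition NTSC television frame is roughly 720 × 486. The back-of-the-envelope sketch below, in Python, uses those typical resolutions (figures not cited in this article) to show the scale of the difference:

```python
# Rough per-frame pixel counts; typical working resolutions, used here
# only to illustrate the scale of the film-versus-television gap.
FILM_2K = 2048 * 1556   # a common 2K film-scan resolution
NTSC_TV = 720 * 486     # standard-definition NTSC video

ratio = FILM_2K / NTSC_TV
print(f"A 2K film frame has about {ratio:.1f}x the pixels of an NTSC frame")
# -> roughly 9x; since rendering cost grows at least linearly with pixel
#    count, each second of film costs far more to compute than TV work.
```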

The actual wake-up call to the entertainment industry did not come until 1982, with the release of Star Trek II: The Wrath of Khan. That movie contained a monumental sixty seconds of the most exciting full-color computer graphics yet seen. Called the "Genesis Effect," the sequence starts with a view of a dead planet hanging lifeless in space. The camera follows the trail of the Genesis Torpedo as it streaks toward and strikes the planet. Flames arc outward and race across the surface of the planet.

The camera zooms in and follows the planet's transformation from molten lava to the cool blues of oceans, with mountains shooting out of the ground. The final scene spirals the camera back out into space, revealing the cloud-covered, newly born planet. These sixty seconds may sound uneventful in light of current digital effects, but this remarkable scene represents many firsts. It required the development of several radically new computer graphics algorithms, including one for creating convincing computer fire and another for producing realistic mountains and shorelines from fractal equations.
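The fractal idea behind such mountains is simple: repeatedly subdivide a line or triangle, nudging each new midpoint by a random offset that shrinks at every level, and a jagged, natural-looking profile emerges. Here is a minimal one-dimensional midpoint-displacement sketch in Python; the roughness and depth parameters are illustrative choices, not values from the film's production code:

```python
import random

def midpoint_displacement(left, right, roughness=0.5, depth=8):
    """Generate a fractal height profile (e.g., a mountain ridge line)
    by recursively displacing midpoints with a shrinking random offset."""
    heights = [left, right]
    scale = 1.0
    for _ in range(depth):
        scale *= roughness  # each subdivision level perturbs less
        refined = []
        for a, b in zip(heights, heights[1:]):
            mid = (a + b) / 2 + random.uniform(-scale, scale)
            refined += [a, mid]
        refined.append(heights[-1])
        heights = refined
    return heights

profile = midpoint_displacement(0.0, 0.0)
print(len(profile), "samples, e.g.:", [round(h, 3) for h in profile[:5]])
```

The same displacement trick, applied to triangles in 3D rather than line segments, yields the kind of fractal terrain seen in the Genesis sequence.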

This was all created by the team at Lucasfilm's Computer Division. In addition, this sequence marked the first time computer graphics were used as the center of attention, rather than merely as a prop to support other action. No one in the entertainment industry had seen anything like it, and it unleashed a flood of queries from Hollywood directors seeking to find out both how it was done and whether an entire film could be created in this fashion. Unfortunately, with the release of TRON later that same year and The Last Starfighter in 1984, the answer was still a decided no.

Both of these films were touted as technological tours de force, which, in fact, they were. Their graphics were extremely well executed, the best seen up to that point, but they could not save the films from weak scripts. Unfortunately, the technology was greatly oversold during the films' promotion, and so in the end it was the technology that was blamed for their failure.

With the 1980s came the age of personal computers and dedicated workstations: minicomputers that were cheap enough to buy for a single person.

Smaller was better, faster, and much, much cheaper. Advances in silicon chip technologies brought massive and very rapid increases in the power of smaller computers, along with drastic price reductions. The cost of commercial graphics plunged to match, to the point where the major studios suddenly could no longer cover the mountains of debt coming due on their overpriced, centralized mainframe hardware. With their expenses mounting, and without the extra capital to upgrade to the newer, cheaper computers, virtually every independent computer graphics studio had gone out of business by 1987.

All of them, that is, except PDI, which went on to become the largest commercial computer graphics house in the business and to serve as a model for the next wave of studios.

The Second Wave

Burned twice, by TRON and by The Last Starfighter, and frightened by the financial failure of virtually the entire industry, Hollywood steered clear of computer graphics for several years. Behind the scenes, however, the technology was building back and waiting for the next big break. That break materialized in the form of a watery creation for James Cameron's 1989 film, The Abyss.

For this film, the group at George Lucas's Industrial Light and Magic (ILM) created the first completely computer-generated, entirely organic-looking, and thoroughly believable creature to be realistically integrated with live-action footage and characters. This was the watery pseudopod that snaked its way into the underwater research lab to get a closer look at its human inhabitants. In this stunning effect, ILM overcame two very difficult problems: producing a soft-edged, bulgy, irregularly shaped object, and convincingly anchoring that object in a live-action sequence.

Just as the 1982 Genesis sequence served as a wake-up call for early film computer graphics, this sequence for The Abyss was the announcement that computer graphics had finally come of age. A massive outpouring of computer-generated film graphics has since ensued, with studios from across the entire spectrum participating in the action. From that point on, digital technology spread so rapidly that the movies using digital effects have become too numerous to list in their entirety. They include the likes of Total Recall, Toys, Terminator 2: Judgment Day, The Babe, In the Line of Fire, Death Becomes Her, and, of course, Jurassic Park.
