Sunday 14 August 2011

Appeal

Appeal in a cartoon character corresponds to what would be called charisma in an actor.[35] A character who is appealing is not necessarily sympathetic; villains or monsters can also be appealing. The important thing is that the viewer feels the character is real and interesting.[35] There are several tricks for making a character connect better with the audience; for likable characters, a symmetrical or particularly baby-like face tends to be effective.[36] A complicated or hard-to-read face will lack appeal, or 'captivation', in the composition of the pose or the character design.

Solid drawing

The principle of solid drawing means taking into account forms in three-dimensional space, giving them volume and weight.[12] The animator needs to be a skilled draughtsman and has to understand the basics of three-dimensional shapes, anatomy, weight, balance, light and shadow, etc.[32] For the classical animator, this involved taking art classes and doing sketches from life.[33] One thing in particular that Johnston and Thomas warned against was creating "twins": characters whose left and right sides mirrored each other, and looked lifeless.[34] Modern-day computer animators draw less because of the facilities computers give them, yet their work benefits greatly from a basic understanding of animation principles, and their additions to basic computer animation.

Exaggeration

Exaggeration is an effect especially useful for animation, as perfect imitation of reality can look static and dull in cartoons.[12] The level of exaggeration depends on whether one seeks realism or a particular style, like a caricature or the style of an artist. The classical definition of exaggeration, employed by Disney, was to remain true to reality, just presenting it in a wilder, more extreme form.[29] Other forms of exaggeration can involve the supernatural or surreal, alterations in the physical features of a character, or elements in the storyline itself.[30] It is important to employ a certain level of restraint when using exaggeration; if a scene contains several elements, there should be a balance in how those elements are exaggerated in relation to each other, to avoid confusing or overawing the viewer.

Timing

Timing refers to the number of drawings or frames for a given action, which translates to the speed of the action on film.[12] On a purely physical level, correct timing makes objects appear to abide by the laws of physics; for instance, an object's weight determines how it reacts to an impetus, like a push.[27] Timing is critical for establishing a character's mood, emotion, and reaction.[12] It can also be a device to communicate aspects of a character's personality.
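The link between frame count and perceived speed can be sketched in a few lines of Python. This is only an illustration; the 24 fps rate is the traditional film standard, and the example durations are assumptions, not figures from the source.

```python
def frames_for_action(duration_seconds, fps=24):
    """Convert a desired action duration into a drawing count: fewer frames
    read as a faster, snappier move; more frames read as slower and heavier."""
    return round(duration_seconds * fps)

quick_blink = frames_for_action(0.125)  # 3 drawings: reads as fast
slow_turn = frames_for_action(1.5)      # 36 drawings: reads as deliberate
```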

Secondary action

Adding secondary actions to the main action gives a scene more life, and can help to support the main action. A person walking can simultaneously swing his arms or keep them in his pockets; he can speak or whistle, or express emotions through facial expressions.[24] The important thing about secondary actions is that they emphasize, rather than take attention away from, the main action. If the latter is the case, those actions are better left out.[25] In the case of facial expressions, during a dramatic movement these will often go unnoticed. In these cases it is better to include them at the beginning and the end of the movement, rather than during it.

Arcs

Most human and animal actions occur along an arched trajectory, and animation should reproduce these movements for greater realism. This can apply to a limb moving by rotating a joint, or a thrown object moving along a parabolic trajectory. The exception is mechanical movement, which typically moves in straight lines.
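The parabolic trajectory mentioned above follows directly from constant gravity. The following sketch samples such an arc one position per frame; the initial velocities, frame rate, and function name are illustrative assumptions.

```python
def thrown_object_arc(v0x, v0y, frames, fps=24.0, g=9.81):
    """Sample positions of a projectile along its parabolic arc, one per frame.

    v0x, v0y: initial velocity components (m/s); g: gravitational acceleration.
    """
    positions = []
    for f in range(frames):
        t = f / fps
        positions.append((v0x * t, v0y * t - 0.5 * g * t * t))
    return positions

arc = thrown_object_arc(2.0, 5.0, 24)  # one second of motion at 24 fps
```

Plotting these points shows the horizontal spacing stays even while the vertical motion rises, slows at the apex, and falls, exactly the arched path the principle asks animators to preserve.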

Slow in and slow out

The movement of the human body, and most other objects, needs time to accelerate and slow down. For this reason, an animation looks more realistic if it has more frames near the beginning and end of a movement, and fewer in the middle.[12] This principle applies to characters moving between two extreme poses, such as sitting down and standing up, but also to inanimate, moving objects, like a bouncing ball.
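A common way to get this spacing is an ease-in/ease-out curve. The sketch below uses the standard "smoothstep" polynomial, which is my choice of easing function, not one named in the source; it clusters positions near the ends of the move and spreads them out in the middle.

```python
def slow_in_slow_out(t):
    """Smoothstep easing: positions bunch up near t=0 and t=1,
    so the object accelerates out of the first pose and decelerates into the last.

    t is normalized time in [0, 1]; returns eased progress in [0, 1]."""
    return t * t * (3 - 2 * t)

# Compare the spacing of an object moving 100 units over 11 frames.
linear = [round(100 * i / 10, 1) for i in range(11)]
eased = [round(100 * slow_in_slow_out(i / 10), 1) for i in range(11)]
```

The eased list takes small steps at the start and end and large steps in the middle, which is precisely "more frames near the beginning and end of a movement, and fewer in the middle".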

Follow through and overlapping action

These closely related techniques help render movement more realistic, and give the impression that characters follow the laws of physics. "Follow through" means that separate parts of a body will continue moving after the character has stopped. "Overlapping action" is the tendency for parts of the body to move at different rates (an arm will move on different timing from the head, and so on). A third technique is "drag", where a character starts to move and parts of him take a few frames to catch up.[12] These parts can be inanimate objects like clothing or the antenna on a car, or parts of the body, such as arms or hair. On the human body, the torso is the core, with arms, legs, head and hair appendices that normally follow the torso's movement. Body parts with much tissue, such as large stomachs and breasts, or the loose skin on a dog, are more prone to independent movement than bonier body parts.[19] Again, exaggerated use of the technique can produce a comical effect, while more realistic animation must time the actions exactly, to produce a convincing result.[20]
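Drag and follow through can be approximated with a follower that closes only a fraction of the gap to its leader each frame. This is a minimal sketch, not a method from the book; the lag fraction of 0.3 is an arbitrary assumption.

```python
def drag_follow(leader_positions, lag=0.3):
    """Trace a secondary part (hair, clothing) that catches up to the leading
    part by a fixed fraction per frame, producing drag and follow-through."""
    follower = leader_positions[0]
    path = [follower]
    for target in leader_positions[1:]:
        follower += lag * (target - follower)
        path.append(follower)
    return path

# The torso jumps from 0 to 10 and stops; the appendage trails, then settles.
torso = [0.0] + [10.0] * 9
hair = drag_follow(torso)
```

The follower lags behind while the torso moves (drag) and is still closing the gap after the torso has stopped (follow through).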
Thomas and Johnston also developed the principle of the "moving hold". A character not in movement can be rendered absolutely still; this is often done, particularly to draw attention to the main action. According to Thomas and Johnston, however, this gave a dull and lifeless result, and should be avoided. Even characters sitting still can display some sort of movement, such as the torso moving in and out with breathing.

Straight ahead action and pose to pose

These are two different approaches to the actual drawing process. "Straight ahead action" means drawing out a scene frame by frame from beginning to end, while "pose to pose" involves starting with drawing a few key frames, and then filling in the intervals later.[12] "Straight ahead action" creates a more fluid, dynamic illusion of movement, and is better for producing realistic action sequences. On the other hand, it is hard to maintain proportions, and to create exact, convincing poses along the way. "Pose to pose" works better for dramatic or emotional scenes, where composition and relation to the surroundings are of greater importance.[16] A combination of the two techniques is often used.[17]
Computer animation removes the problems of proportion related to "straight ahead action" drawing; however, "pose to pose" is still used for computer animation, because of the advantages it brings in composition.[18] The use of computers facilitates this method, as computers can fill in the missing sequences in between poses automatically. It is, however, still important to oversee this process, and apply the other principles discussed.
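The automatic filling-in between key poses can be sketched as linear interpolation of pose parameters. The joint names and angle values here are invented for illustration; real systems interpolate full rigs and usually use easing curves rather than straight linear blending.

```python
def inbetween(pose_a, pose_b, t):
    """Linearly interpolate every joint angle between two key poses (t in [0, 1])."""
    return {joint: (1 - t) * pose_a[joint] + t * pose_b[joint] for joint in pose_a}

# Two hypothetical key poses: a character sitting, then standing.
sit = {"hip": 90.0, "knee": 90.0}
stand = {"hip": 180.0, "knee": 180.0}

# The computer generates the three in-between frames automatically.
frames = [inbetween(sit, stand, t) for t in (0.25, 0.5, 0.75)]
```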

Staging

This principle is akin to staging as it is known in theatre and film.[11] Its purpose is to direct the audience's attention, and make it clear what is of greatest importance in a scene; what is happening, and what is about to happen.[12] Johnston and Thomas defined it as "the presentation of any idea so that it is completely and unmistakably clear", whether that idea is an action, a personality, an expression or a mood.[11] This can be done by various means, such as the placement of a character in the frame, the use of light and shadow, and the angle and position of the camera.[13] The essence of this principle is keeping focus on what is relevant, and avoiding unnecessary detail.

Anticipation

Anticipation is used to prepare the audience for an action, and to make the action appear more realistic.[7] A dancer jumping off the floor has to bend his knees first; a golfer making a swing has to swing the club back first. The technique can also be used for less physical actions, such as a character looking off-screen to anticipate someone's arrival, or attention focusing on an object that a character is about to pick up.[8]
Anticipation: A baseball player making a pitch prepares for the action by moving his arm back.
For special effect, anticipation can also be omitted in cases where it is expected. The resulting sense of anticlimax will produce a feeling of surprise in the viewer, and can often add comedy to a scene.[9] This is often referred to as a 'surprise gag'.

Squash and stretch

The most important principle is "squash and stretch",[2] the purpose of which is to give a sense of weight and flexibility to drawn objects. It can be applied to simple objects, like a bouncing ball, or more complex constructions, like the musculature of a human face.[3][4] Taken to an extreme point, a figure stretched or squashed to an exaggerated degree can have a comical effect.[5] In realistic animation, however, the most important aspect of this principle is the fact that an object's volume does not change when squashed or stretched. If the length of a ball is stretched vertically, its width (in three dimensions, also its depth) needs to contract correspondingly horizontally.
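The volume-preservation rule has a direct formula: if the height is scaled by s, the width and depth must each be scaled by 1/√s so that the product of the three scale factors stays 1. A minimal sketch:

```python
def squash_stretch(height_scale):
    """Given a vertical stretch (or squash) factor, return the (height, width,
    depth) scale factors that preserve volume: their product is always 1."""
    lateral = 1.0 / height_scale ** 0.5
    return height_scale, lateral, lateral

h, w, d = squash_stretch(2.0)   # ball stretched to twice its height
volume = h * w * d              # stays 1.0 (up to floating point error)
```

Stretching to twice the height contracts width and depth to about 0.71 each, which is the "contract correspondingly" the text describes.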

12 basic principles of animation

The Twelve Basic Principles of Animation is a set of principles of animation introduced by the Disney animators Ollie Johnston and Frank Thomas in their 1981 book The Illusion of Life: Disney Animation.[a][1] Johnston and Thomas in turn based their book on the work of the leading Disney animators from the 1930s onwards, and their effort to produce more realistic animations. The main purpose of the principles was to produce an illusion of characters adhering to the basic laws of physics, but they also dealt with more abstract issues, such as emotional timing and character appeal.
The book and its principles have become generally adopted, and have been referred to as the "Bible of the industry." In 1999 the book was voted number one of the "best animation books of all time" in an online poll. Though originally intended to apply to traditional, hand-drawn animation, the principles still have great relevance for today's more prevalent computer animation.

Non-traditional systems

An alternative approach was developed where the actor is given an unlimited walking area through the use of a rotating sphere, similar to a hamster ball, which contains internal sensors recording the angular movements, removing the need for external cameras and other equipment. Even though this technology could potentially lead to much lower costs for mocap, the basic sphere is only capable of recording a single continuous direction. Additional sensors worn on the person would be needed to record anything more.
Another alternative is using a 6DOF (Degrees of freedom) motion platform with an integrated omni-directional treadmill with high resolution optical motion capture to achieve the same effect. The captured person can walk in an unlimited area, negotiating different uneven terrains. Applications include medical rehabilitation for balance training, biomechanical research and virtual reality.

RF positioning

RF (radio frequency) positioning systems are becoming more viable as higher frequency RF devices allow greater precision than older RF technologies. The speed of light is 30 centimeters per nanosecond (billionth of a second), so a 10 gigahertz (billion cycles per second) RF signal enables an accuracy of about 3 centimeters. By measuring amplitude to a quarter wavelength, it is possible to improve the resolution down to about 8 mm. To achieve the resolution of optical systems, frequencies of 50 gigahertz or higher are needed, which are almost as limited to line of sight and as easy to block as optical systems. Multipath and reradiation of the signal are likely to cause additional problems, but these technologies will be ideal for tracking larger volumes with reasonable accuracy, since the required resolution at 100 meter distances is not likely to be as high.
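The figures above follow from wavelength = c / f. A small sketch of that arithmetic (the function name is mine, not a standard API):

```python
def rf_resolution_cm(frequency_hz, fraction_of_wavelength=1.0):
    """Approximate positional resolution of an RF ranging system, in cm.

    Resolution scales with the wavelength c / f; measuring amplitude to a
    fraction of a wavelength (e.g. 0.25 for a quarter wave) improves it."""
    c = 3.0e10  # speed of light in cm per second
    return (c / frequency_hz) * fraction_of_wavelength

full = rf_resolution_cm(10e9)           # ~3 cm at 10 GHz
quarter = rf_resolution_cm(10e9, 0.25)  # ~0.75 cm, roughly the 8 mm quoted
```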

Facial motion capture

Most traditional motion capture hardware vendors provide for some type of low resolution facial capture utilizing anywhere from 32 to 300 markers with either an active or passive marker system. All of these solutions are limited by the time it takes to apply the markers, calibrate the positions and process the data. Ultimately the technology also limits their resolution and raw output quality levels.
High fidelity facial motion capture, also known as performance capture, is the next generation of fidelity and is utilized to record the more complex movements in a human face in order to capture higher degrees of emotion. Facial capture is currently arranging itself into several distinct camps, including traditional Vicon-based motion capture data, blend-shape based solutions, capturing the actual topology of an actor's face, and proprietary systems.

Magnetic systems

Magnetic systems calculate position and orientation by the relative magnetic flux of three orthogonal coils on both the transmitter and each receiver. The relative intensity of the voltage or current of the three coils allows these systems to calculate both range and orientation by meticulously mapping the tracking volume. The sensor output is 6DOF, which provides useful results obtained with two-thirds the number of markers required in optical systems; one on the upper arm and one on the lower arm for elbow position and angle. The markers are not occluded by nonmetallic objects but are susceptible to magnetic and electrical interference from metal objects in the environment, like rebar (steel reinforcing bars in concrete) or wiring, which affect the magnetic field, and electrical sources such as monitors, lights, cables and computers. The sensor response is nonlinear, especially toward the edges of the capture area. The wiring from the sensors tends to preclude extreme performance movements. The capture volumes for magnetic systems are dramatically smaller than they are for optical systems. With the magnetic systems, there is a distinction between "AC" and "DC" systems: one uses square pulses, the other uses sine wave pulses.

Mechanical motion

Mechanical motion capture systems directly track body joint angles and are often referred to as exo-skeleton motion capture systems, due to the way the sensors are attached to the body. Performers attach the skeletal-like structure to their body, and as they move, so do the articulated mechanical parts, measuring the performer's relative motion. Mechanical motion capture systems are real-time, relatively low-cost, free-of-occlusion, and wireless (untethered) systems that have unlimited capture volume. Typically, they are rigid structures of jointed, straight metal or plastic rods linked together with potentiometers that articulate at the joints of the body. These suits tend to be in the $25,000 to $75,000 range plus an external absolute positioning system.

Inertial systems

Inertial Motion Capture technology is based on miniature inertial sensors, biomechanical models and sensor fusion algorithms. The motion data of the inertial sensors (inertial guidance system) is often transmitted wirelessly to a computer, where the motion is recorded or viewed. Most inertial systems use gyroscopes to measure rotational rates. These rotations are translated to a skeleton in the software. Much like optical markers, the more gyroscopes, the more natural the data. No external cameras, emitters or markers are needed for relative motions. Inertial mocap systems capture the full six degrees of freedom body motion of a human in real-time. Benefits of using inertial systems include: no solving, portability, and large capture areas. Disadvantages include lower positional accuracy and positional drift, which can compound over time.
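The drift problem comes from integration: any small, constant sensor bias is summed into the result every sample and grows without bound. A toy sketch of this (the bias and rates are invented numbers, and real systems fuse several sensors to suppress exactly this effect):

```python
def integrate_gyro(rates, dt, bias=0.0):
    """Integrate angular rate samples (deg/s) into an angle, showing how a
    small constant sensor bias accumulates into drift over time."""
    angle = 0.0
    for rate in rates:
        angle += (rate + bias) * dt
    return angle

# Ten seconds of a perfectly stationary sensor, sampled at 100 Hz.
samples = [0.0] * 1000
true_angle = integrate_gyro(samples, 0.01)           # 0.0: no drift
drifted = integrate_gyro(samples, 0.01, bias=0.05)   # ~0.5 degrees of drift
```

Even though the sensor never moved, the biased integration reports half a degree of rotation after ten seconds; over minutes this compounds into the positional drift the text mentions.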
These systems are similar to the Wii controllers but are more sensitive and have greater resolution and update rates. They can accurately measure the direction to the ground to within a degree. The popularity of inertial systems is rising amongst independent game developers, mainly because of the quick and easy set up resulting in a fast pipeline. A range of suits are now available from various manufacturers and base prices range from $25,000 to $80,000 USD.

Markerless

Emerging techniques and research in computer vision are leading to the rapid development of the markerless approach to motion capture. Markerless systems such as those developed at Stanford, University of Maryland, MIT, and Max Planck Institute, do not require subjects to wear special equipment for tracking. Special computer algorithms are designed to allow the system to analyze multiple streams of optical input and identify human forms, breaking them down into constituent parts for tracking. Applications of this technology extend deeply into popular imagination about the future of computing technology. Several commercial solutions for markerless motion capture have also been introduced, including systems by Organic Motion[3][4] and Xsens.[5] Microsoft's Kinect system, released for the Xbox 360, is capable of markerless motion capture.

Semi-passive imperceptible marker

One can reverse the traditional approach based on high speed cameras. Systems such as Prakash use inexpensive multi-LED high speed projectors. The specially built multi-LED IR projectors optically encode the space. Instead of retro-reflective or active light emitting diode (LED) markers, the system uses photosensitive marker tags to decode the optical signals. By attaching tags with photo sensors to scene points, the tags can compute not only their own locations, but also their own orientation, incident illumination, and reflectance.
These tracking tags work in natural lighting conditions and can be imperceptibly embedded in attire or other objects. The system supports an unlimited number of tags in a scene, with each tag uniquely identified to eliminate marker reacquisition issues. Since the system eliminates a high speed camera and the corresponding high-speed image stream, it requires significantly lower data bandwidth. The tags also provide incident illumination data which can be used to match scene lighting when inserting synthetic elements. The technique appears ideal for on-set motion capture or real-time broadcasting of virtual sets but has yet to be proven.

Time modulated active marker

Active marker systems can further be refined by strobing one marker on at a time, or tracking multiple markers over time and modulating the amplitude or pulse width to provide marker ID. 12 megapixel spatial resolution modulated systems show more subtle movements than 4 megapixel optical systems by having both higher spatial and temporal resolution. Directors can see the actor's performance in real time, and watch the results on the mocap-driven CG character. The unique marker IDs reduce the turnaround, by eliminating marker swapping and providing much cleaner data than other technologies. LEDs with onboard processing and radio synchronization allow motion capture outdoors in direct sunlight, while capturing at 480 frames per second due to a high speed electronic shutter. Computer processing of modulated IDs allows less hand cleanup or filtered results for lower operational costs. This higher accuracy and resolution requires more processing than passive technologies, but the additional processing is done at the camera to improve resolution via subpixel or centroid processing, providing both high resolution and high speed. These motion capture systems are typically under $50,000 for an eight camera, 12 megapixel spatial resolution 480 hertz system with one actor.

Active marker

Active optical systems triangulate positions by illuminating one LED at a time very quickly or multiple LEDs with software to identify them by their relative positions, somewhat akin to celestial navigation. Rather than reflecting back light that is generated externally, the markers themselves are powered to emit their own light. Since the inverse square law provides 1/4 the power at 2 times the distance, this can increase the distances and volume for capture.
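The inverse square claim is simple arithmetic, sketched here for concreteness:

```python
def received_power_ratio(distance_ratio):
    """Inverse square law: relative received power when the marker-to-camera
    distance is multiplied by distance_ratio."""
    return 1.0 / distance_ratio ** 2

half = received_power_ratio(2.0)  # 1/4 the power at twice the distance
```

Because an emitting marker's light only travels one way (marker to camera) instead of round-trip as with retroreflection, the same power budget reaches further, which is why active markers extend the capture volume.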
Episodes of the TV series Stargate SG-1 were produced using an active optical system for the VFX; the actor had to walk around props that would make motion capture difficult for other, non-active optical systems.
ILM used active markers in Van Helsing to allow capture of the Harpies on very large sets. The power to each marker can be provided sequentially in phase with the capture system, providing a unique identification of each marker for a given capture frame at a cost to the resultant frame rate. The ability to identify each marker in this manner is useful in realtime applications. The alternative method of identifying markers is to do it algorithmically, requiring extra processing of the data.

Passive markers

Passive optical systems use markers coated with a retroreflective material to reflect light that is generated near the camera's lens. The camera's threshold can be adjusted so only the bright reflective markers will be sampled, ignoring skin and fabric.
The centroid of the marker is estimated as a position within the 2 dimensional image that is captured. The grayscale value of each pixel can be used to provide sub-pixel accuracy by finding the centroid of the Gaussian.
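An intensity-weighted centroid over the thresholded pixel patch is one common way to get that sub-pixel estimate. This is a minimal sketch with an invented 3x3 blob; production systems fit a Gaussian profile rather than a plain weighted mean.

```python
def marker_centroid(patch):
    """Sub-pixel centroid of a marker blob.

    patch is a 2D list of grayscale values; returns (x, y) coordinates
    weighted by pixel intensity."""
    total = x_sum = y_sum = 0.0
    for y, row in enumerate(patch):
        for x, value in enumerate(row):
            total += value
            x_sum += x * value
            y_sum += y * value
    return x_sum / total, y_sum / total

# A 3x3 blob whose brightness peaks slightly right of the middle pixel.
blob = [[0, 10, 0],
        [10, 50, 30],
        [0, 10, 0]]
cx, cy = marker_centroid(blob)  # cx lands between pixels 1 and 2
```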
An object with markers attached at known positions is used to calibrate the cameras and obtain their positions, and the lens distortion of each camera is measured. If two calibrated cameras see a marker, a 3 dimensional fix can be obtained. Typically a system will consist of around 6 to 24 cameras. Systems of over three hundred cameras exist to try to reduce marker swap. Extra cameras are required for full coverage around the capture subject and multiple subjects.
Vendors have constraint software to reduce the problem of marker swapping since all markers appear identical. Unlike active marker systems and magnetic systems, passive systems do not require the user to wear wires or electronic equipment. Instead, hundreds of rubber balls are attached with reflective tape, which needs to be replaced periodically. The markers are usually attached directly to the skin (as in biomechanics), or they are velcroed to a performer wearing a full body spandex/lycra suit designed specifically for motion capture. This type of system can capture large numbers of markers at frame rates as high as 2000fps. The frame rate for a given system is often balanced between resolution and speed: a 4-megapixel system normally runs at 370 hertz, but can reduce the resolution to .3 megapixels and then run at 2000 hertz. Typical systems are $100,000 for 4-megapixel 360-hertz systems, and $50,000 for .3-megapixel 120-hertz systems.

Optical systems

Optical systems utilize data captured from image sensors to triangulate the 3D position of a subject between two or more cameras calibrated to provide overlapping projections. Data acquisition is traditionally implemented using special markers attached to an actor; however, more recent systems are able to generate accurate data by tracking surface features identified dynamically for each particular subject. Tracking a large number of performers or expanding the capture area is accomplished by the addition of more cameras. These systems produce data with 3 degrees of freedom for each marker, and rotational information must be inferred from the relative orientation of three or more markers; for instance shoulder, elbow and wrist markers providing the angle of the elbow.
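Triangulation itself reduces to intersecting sight rays from calibrated cameras. The following is a 2D sketch of that idea (real systems solve the 3D least-squares version across many cameras); the camera placements are invented for the example.

```python
def triangulate_2d(cam_a, dir_a, cam_b, dir_b):
    """Intersect two camera sight rays in the plane, a simplified picture of
    how overlapping calibrated views fix a marker's position.

    Each ray is origin + t * direction; returns the intersection point."""
    ax, ay = cam_a
    bx, by = cam_b
    dax, day = dir_a
    dbx, dby = dir_b
    denom = dax * dby - day * dbx  # zero would mean parallel rays (no fix)
    t = ((bx - ax) * dby - (by - ay) * dbx) / denom
    return ax + t * dax, ay + t * day

# Two cameras, at the origin and at (10, 0), both sighting a marker at (5, 5).
marker = triangulate_2d((0, 0), (1, 1), (10, 0), (-1, 1))
```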

Methods and systems

Motion tracking or motion capture started as a photogrammetric analysis tool in biomechanics research in the 1970s and 1980s, and expanded into education, training, sports and recently computer animation for television, cinema, and video games as the technology matured. A performer wears markers near each joint to identify the motion by the positions or angles between the markers. Acoustic, inertial, LED, magnetic or reflective markers, or combinations of any of these, are tracked, optimally at least two times the frequency rate of the desired motion, to submillimeter positions.
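The "at least two times the frequency rate" rule is the Nyquist sampling criterion. A trivial sketch, with an invented example motion frequency:

```python
def min_capture_rate(max_motion_hz, safety_factor=2.0):
    """Minimum marker sampling rate (Hz): at least twice the highest
    frequency component of the motion being captured (Nyquist criterion)."""
    return safety_factor * max_motion_hz

# A fast strike with ~60 Hz frequency content needs at least 120 Hz capture.
rate = min_capture_rate(60.0)
```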

Applications

Video games often use motion capture to animate athletes, martial artists, and other in-game characters.[2] This has been done since the Atari Jaguar CD-based game Highlander: The Last of the MacLeods, released in 1995.
Movies use motion capture for CG effects, in some cases replacing traditional cel animation, and for completely computer-generated creatures, such as Gollum, The Mummy, King Kong, Davy Jones from Pirates of the Caribbean, the Na'vi from the film Avatar, and Clu from Tron: Legacy.
Sinbad: Beyond the Veil of Mists was the first movie made primarily with motion capture, although many character animators also worked on the film.
In producing entire feature films with computer animation, the industry is currently split between studios that use motion capture, and studios that do not. Out of the three nominees for the 2006 Academy Award for Best Animated Feature, two of the nominees (Monster House and the winner Happy Feet) used motion capture, and only Disney·Pixar's Cars was animated without motion capture. In the ending credits of Pixar's film Ratatouille, a stamp appears labelling the film as "100% Pure Animation -- No Motion Capture!"
Motion capture has begun to be used extensively to produce films which attempt to simulate or approximate the look of live-action cinema, with nearly photorealistic digital character models. The Polar Express used motion capture to allow Tom Hanks to perform as several distinct digital characters (in which he also provided the voices). The 2007 adaptation of the saga Beowulf animated digital characters whose appearances were based in part on the actors who provided their motions and voices. James Cameron's Avatar used this technique to create the Na'vi that inhabit Pandora. The Walt Disney Company has produced Robert Zemeckis's A Christmas Carol using this technique. In 2007, Disney acquired Zemeckis' ImageMovers Digital (that produces motion capture films), but then closed it in 2011, after a string of failures.
Television series produced entirely with motion capture animation include Laflaque in Canada, Sprookjesboom and Cafe de Wereld in The Netherlands, and Headcases in the UK.
Virtual Reality and Augmented Reality allow users to interact with digital content in real-time. This can be useful for training simulations, visual perception tests, or performing virtual walk-throughs in a 3D environment. Motion capture technology is frequently used in digital puppetry systems to drive computer generated characters in real-time.
Gait analysis is the major application of motion capture in clinical medicine. Techniques allow clinicians to evaluate human motion across several biometric factors, often while streaming this information live into analytical software.
During the filming of James Cameron's Avatar, all of the scenes involving this process were directed in real time using a screen which converted the actors in their motion capture suits into what they would look like in the movie, making it easier for Cameron to direct the movie as it would be seen by the viewer. This method allowed Cameron to view the scenes from many more views and angles than possible from a pre-rendered animation. He was so proud of his pioneering methods that he even invited Steven Spielberg and George Lucas on set to view him in action.

Disadvantages

  • Specific hardware and special programs are required to obtain and process the data.
  • The cost of the software, equipment and personnel required can potentially be prohibitive for small productions.
  • The capture system may have specific requirements for the space it is operated in, depending on camera field of view or magnetic distortion.
  • When problems occur, it is easier to reshoot the scene rather than trying to manipulate the data. Only a few systems allow real time viewing of the data to decide if the take needs to be redone.
  • The initial results are limited to what can be performed within the capture volume without extra editing of the data.
  • Movement that does not follow the laws of physics generally cannot be captured.
  • Traditional animation techniques, such as added emphasis on anticipation and follow through, secondary motion or manipulating the shape of the character, as with squash and stretch animation techniques, must be added later.
If the computer model has different proportions from the capture subject, artifacts may occur. For example, if a cartoon character has large, over-sized hands, these may intersect the character's body if the human performer is not careful with their physical motion.

Advantages

Motion capture offers several advantages over traditional computer animation of a 3D model:
  • More rapid, even real time results can be obtained. In entertainment applications this can reduce the costs of keyframe-based animation. For example: Hand Over.
  • The amount of work does not vary with the complexity or length of the performance to the same degree as when using traditional techniques. This allows many tests to be done with different styles or deliveries.
  • Complex movement and realistic physical interactions such as secondary motions, weight and exchange of forces can be easily recreated in a physically accurate manner.
  • The amount of animation data that can be produced within a given time is extremely large when compared to traditional animation techniques. This contributes to both cost effectiveness and meeting production deadlines.
  • Potential for free software and third-party solutions reducing its cost.

Motion capture

Motion capture, motion tracking, or mocap are terms used to describe the process of recording movement and translating that movement on to a digital model. It is used in military, entertainment, sports, and medical applications, and for validation of computer vision[1] and robotics. In filmmaking it refers to recording actions of human actors, and using that information to animate digital character models in 2D or 3D computer animation. When it includes face and fingers or captures subtle expressions, it is often referred to as performance capture.
In motion capture sessions, movements of one or more actors are sampled many times per second, although with most techniques (recent developments from Weta use images for 2D motion capture and project into 3D), motion capture records only the movements of the actor, not his or her visual appearance. This animation data is mapped to a 3D model so that the model performs the same actions as the actor. This is comparable to the older technique of rotoscope, such as the 1978 The Lord of the Rings animated film where the visual appearance of the motion of an actor was filmed, then the film used as a guide for the frame-by-frame motion of a hand-drawn animated character.

Animation

Animation is the rapid display of a sequence of images of 2-D or 3-D artwork or model positions in order to create an illusion of movement. The effect is an optical illusion of motion due to the phenomenon of persistence of vision, and can be created and demonstrated in several ways. The most common method of presenting animation is as a motion picture or video program, although there are other methods.

Saturday 13 August 2011

Computer animation

Computer animation is the process used for generating animated images by using computer graphics. The more general term computer generated imagery encompasses both static scenes and dynamic images, while computer animation only refers to moving images.
Modern computer animation usually uses 3D computer graphics, although 2D computer graphics are still used for stylistic, low bandwidth, and faster real-time renderings. Sometimes the target of the animation is the computer itself, but sometimes the target is another medium, such as film.
Computer animation is essentially a digital successor to the stop motion techniques used in traditional animation with 3D models and frame-by-frame animation of 2D illustrations. Computer generated animations are more controllable than other, more physically based processes, such as constructing miniatures for effects shots or hiring extras for crowd scenes, because they allow the creation of images that would not be feasible using any other technology. It can also allow a single graphic artist to produce such content without the use of actors, expensive set pieces, or props.
To create the illusion of movement, an image is displayed on the computer screen and repeatedly replaced by a new image that is similar to the previous image, but advanced slightly in the time domain (usually at a rate of 24 or 30 frames/second). This technique is identical to how the illusion of movement is achieved with television and motion pictures.
For 3D animations, objects (models) are built on the computer monitor (modeled) and 3D figures are rigged with a virtual skeleton. For 2D figure animations, separate objects (illustrations) and separate transparent layers are used, with or without a virtual skeleton. Then the limbs, eyes, mouth, clothes, etc. of the figure are moved by the animator on key frames. The differences in appearance between key frames are automatically calculated by the computer in a process known as tweening or morphing. Finally, the animation is rendered.
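The tweening step described above amounts to interpolating positions between two key frames. A minimal sketch of a linear tween of 2D points (real tools offer many interpolation curves; the coordinates here are invented):

```python
def tween(start, end, steps):
    """Generate the in-between positions the computer fills in between two
    key frames: a simple linear morph of 2D points."""
    frames = []
    for i in range(1, steps + 1):
        t = i / (steps + 1)
        frames.append(((1 - t) * start[0] + t * end[0],
                       (1 - t) * start[1] + t * end[1]))
    return frames

# Three tweened frames moving a layer from (0, 0) to (8, 4).
inbetweens = tween((0.0, 0.0), (8.0, 4.0), 3)
```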
For 3D animations, all frames must be rendered after modeling is complete. For 2D vector animations, the rendering process is the key frame illustration process, while tweened frames are rendered as needed. For pre-recorded presentations, the rendered frames are transferred to a different format or medium such as film or digital video. The frames may also be rendered in real time as they are presented to the end-user audience. Low bandwidth animations transmitted via the internet (e.g. 2D Flash, X3D) often use software on the end-user's computer to render in real time as an alternative to streaming or pre-loaded high bandwidth animations.