Saturday 26 February 2011

Mudbox Portrait

We have now been given a project to digitally sculpt a fellow student from our class in Mudbox, using reference photos we have taken ourselves. I have chosen to sculpt Josh Jackaman.

I have never used Mudbox before, so I am looking forward to trying something new other than Maya, and to be honest I am not a fan of Maya as I find it a bit in your face. Mudbox has a far easier display to look at, with very simple tools: you can add material or remove it, smooth your work out or even cut it away to create jagged edges. It is like using a piece of clay but without the mess. I also found it very easy to use and got the hang of the different tools. However, that was when I was just making up a random face; once I took the pictures of Josh and began referencing his face, I found it very irritating and frustrating just how symmetrical different angles can make someone look, yet when you look closely and move the features into place according to your reference image you discover how wrong it looks.
But overall, once you get past this bump, it is a very user-friendly program, as turning on the wireframe shows you roughly where the basic topology is, which is very helpful. I am very proud of my final model, though, as it does look like who it is supposed to be, and I feel Josh is glad he doesn't look like a monster.

Face Topology

Facial Action Coding System
Facial Action Coding System (FACS) is a system to taxonomize human facial expressions, originally developed by Paul Ekman and Wallace V. Friesen in 1978. It is a common standard to systematically categorize the physical expression of emotions, and it has proven useful to psychologists and to animators.
Uses
Using FACS, human coders can manually code nearly any anatomically possible facial expression, deconstructing it into the specific Action Units (AU) and their temporal segments that produced the expression. As AUs are independent of any interpretation, they can be used for any higher order decision making process including recognition of basic emotions, or pre-programmed commands for an ambient intelligent environment. The FACS Manual is over 500 pages in length and provides the AUs, as well as Dr. Ekman’s interpretation of their meaning.
FACS defines AUs, which are a contraction or relaxation of one or more muscles. It also defines a number of Action Descriptors, which differ from AUs in that the authors of FACS have not specified the muscular basis for the action and have not distinguished specific behaviors as precisely as they have for the AUs.
For example, FACS can be used to distinguish two types of smiles as follows:
  • Insincere and voluntary Pan American smile: contraction of zygomatic major alone
  • Sincere and involuntary Duchenne smile: contraction of zygomatic major and inferior part of orbicularis oculi.
Although the labeling of expressions currently requires trained experts, researchers have had some success in using computers to automatically identify FACS codes, and thus quickly identify emotions. Computer graphical face models, such as CANDIDE or Artnatomy, allow expressions to be artificially posed by setting the desired action units.
FACS has been proposed for use in the analysis of depression, and in the measurement of pain in patients unable to express themselves verbally.
FACS is designed to be self-instructional. People can learn the technique from a number of sources, including manuals and workshops, and obtain certification through testing. A variant of FACS has been developed to analyze facial expressions in chimpanzees.
P. Ekman and W. V. Friesen also developed EMFACS (Emotion Facial Action Coding System) and FACSAID (Facial Action Coding System Affect Interpretation Dictionary) which consider only emotion-related facial actions. For example:
Happiness = 6+12
Sadness = 1+4+15
Surprise = 1+2+5B+26
Fear = 1+2+4+5
Anger = 4+5+7+23
Disgust = 9+16+15
Contempt = R12A+R14A
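
To get my head around these codes I put together a rough Python sketch of how the emotion predictions above could be represented: just sets of Action Units looked up against an observation. The dictionary simply transcribes the list above (contempt is left out because its R/A qualifiers encode side and intensity rather than plain AU numbers), and the function name is my own, not part of any official FACS software.

```python
# Rough sketch: EMFACS-style emotion predictions as sets of Action Units.
# The data just transcribes the combinations listed above; nothing here is
# official FACS tooling.
EMOTION_AUS = {
    "happiness": {6, 12},
    "sadness": {1, 4, 15},
    "surprise": {1, 2, 5, 26},   # the "5B" above marks intensity; dropped here
    "fear": {1, 2, 4, 5},
    "anger": {4, 5, 7, 23},
    "disgust": {9, 16, 15},
}

def match_emotions(observed_aus):
    """Return every emotion whose full AU combination appears in the observation."""
    observed = set(observed_aus)
    return [name for name, aus in EMOTION_AUS.items() if aus <= observed]

print(match_emotions([1, 2, 5, 26]))   # ['surprise']
print(match_emotions([6, 12, 25]))     # ['happiness']
```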

Codes for Action Units

For clarification, FACS is an index of facial expressions, but does not actually provide any bio-mechanical information about the degree of muscle activation. Though muscle activation is not part of FACS, the main muscles involved in the facial expression have been added here for the benefit of the reader.
Action Units (AUs) are the fundamental actions of individual muscles or groups of muscles.
Action Descriptors (ADs) are unitary movements that may involve the actions of several muscle groups (e.g., a forward‐thrusting movement of the jaw). The muscular basis for these actions hasn’t been specified and specific behaviors haven’t been distinguished as precisely as for the AUs.
For most accurate annotation, FACS suggests agreement from at least two independent certified FACS encoders.
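
FACS doesn't spell out a particular agreement statistic here, so purely as an illustration, here is one very simple way two coders' AU lists for the same clip could be compared (a Jaccard-style overlap). The function and the example numbers are mine; certification uses its own agreement procedure.

```python
# Illustrative only: a simple overlap score between two coders' AU lists for
# the same clip. This is not the official FACS agreement procedure.
def au_agreement(coder_a, coder_b):
    a, b = set(coder_a), set(coder_b)
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

print(au_agreement([1, 2, 5, 26], [1, 2, 5]))  # 0.75
```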

Intensity Scoring
Intensities of FACS are annotated by appending the letters A–E (from minimal to maximal intensity) to the Action Unit number (e.g. AU 1A is the weakest trace of AU 1 and AU 1E is the maximum intensity possible for the individual person).
  • A Trace
  • B Slight
  • C Marked or Pronounced
  • D Severe or Extreme
  • E Maximum
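
As a small worked example of the A–E notation, here is a sketch that splits an annotation like "1E" or "12B" into its AU number and intensity letter. Only the A–E scale itself comes from FACS; the function, the regular expression and the label wording are my own.

```python
# Sketch: parse "1A", "12E", etc. into (AU number, intensity label) using the
# A-E scale described above. Names and label text are illustrative.
import re

INTENSITY = {"A": "trace", "B": "slight", "C": "marked/pronounced",
             "D": "severe/extreme", "E": "maximum"}

def parse_au(code):
    match = re.fullmatch(r"(\d+)([A-E])?", code.strip())
    if not match:
        raise ValueError(f"not an AU code: {code!r}")
    number, letter = int(match.group(1)), match.group(2)
    return number, INTENSITY.get(letter, "unscored")

print(parse_au("1A"))   # (1, 'trace')
print(parse_au("12E"))  # (12, 'maximum')
```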

Saturday 19 February 2011

The Making of Metal Gear Solid 4

When Hideo Kojima, the producer/director of MGS4, was young he wrote an action-adventure novel with a hard-boiled action hero who drove a car and got into hardcore car chases. However, he had never driven a car, so he couldn't write the novel with a sense of realism. Metal Gear is all about military combat, and since none of the development team had ever held a gun or experienced military combat, this made designing and developing the game difficult: it is easy to watch something, but unless you experience how something works, such as driving a car or firing a gun, you can never bring a sense of realism to it.
As well as MGS being fictional, all the military equipment and vehicles are real production models or models currently in production, and the story is based on real events from the world news. Throughout the MGS series every game has a moral to the story and a point; they are not just made-up stealth shooters.
To help understand the environments of the game, Hideo Kojima got the development team to travel to different locations around the world to look at the architecture of the buildings and understand the wilderness, as MGS4 is not set in just one environment but has locations all over the world, from built-up urban areas to snowy, blizzard-like regions. Visiting these locations helps the developers visualise the environments in the game.
The designing of the characters starts with a brainstorm; it is then edited and turned into a 3D animation with a voice actor to bring the character to life, and many people couldn't imagine the main character Snake as an old man. They also changed the style of gameplay, as this is the last of the MGS series. As for the four "beauties" in the game, they were all modelled, and motion-capture and reference pictures were taken of them from many different angles to help build realistic models of their faces. Kojima wanted the player to discover the inner beauty of the beasts when you defeat them.
For the sound, the voice acting, music and effects all had to be blended into the background, and they are individually made for the game experience. The game went for surround sound to give a cinema experience, as it has movie-like cinematics. The music is used to help create an event in the game and to help tell the story of a difficult situation or theme. The music was recorded at Skywalker Ranch, which was a dream for the production team. The scripts are now written like movies, with long and eventful dialogue. Along with the voice acting you have the motion-capture actors, whose performances the animators take and edit for the story.
To begin with the team started their game-making in Kobe; however, the Great Hanshin Earthquake devastated the city, which meant the team had to move to Ebisu in Tokyo to finish MGS2.
Metal Gear became a worldwide phenomenon at E3 in Atlanta and hasn't been the same since. While the team stays in Tokyo, Kojima and some of the producers travel the world and speak about the games coming up in the future. Because these events are open to the whole public, it allows them to see the response of the consumers.




Saturday 29 January 2011

Activision vs Tale of Tales

Activision is an American video game developer and publisher, majority-owned by the French conglomerate Vivendi SA. The current CEO and president is Bobby Kotick. Activision was founded on 1 October 1979 and was the world's first independent developer and distributor of video games for gaming consoles. Its first products were cartridges for the Atari 2600 Video Computer System, published from July 1980.
Activision is now one of the largest third party video games publishers in the world and it was the top publisher in 2007 in the United States.
On 2 December 2007 it was announced that Activision would be acquired by Vivendi, with Vivendi contributing its gaming division plus cash in exchange for a majority stake in the new company. The merger between Activision and Vivendi Games took place on 9 July 2008, forming Activision Blizzard. Activision still exists as a subsidiary owned by Activision Blizzard, and it still develops games such as Call of Duty and Guitar Hero.
Activision develops high-stakes and very popular branded games such as Call of Duty, Spyro, Prototype, Tony Hawk, Wolfenstein, Guitar Hero and Star Wars Jedi Knight: Jedi Academy. They are also well known for creating movie tie-in games when a popular film comes out, such as Spider-Man, GoldenEye, Transformers, Kung Fu Panda, Madagascar and Shrek. In my opinion these games are fairly pointless, as most of them are rushed to meet the film's release date; they feel like they have been made purely for profit, as the gameplay is rushed and the graphics are usually poor.

Tale of Tales BVBA is a Belgian developer of art games and screensavers, founded in 2002 by Auriea Harvey (concept art, 3D modelling and texture mapping) and Michaël Samyn (interaction, effects and games programming), who had been working together on websites and electronic art since 1991. They live very close to Saint Bavo Cathedral, which they consider to be their greatest influence. The studio is named after Giambattista Basile's book The Tale of Tales (Lo cunto de li cunti), and their main series consists of retellings of fairy tales in the form of adventure games, each subtitled "a tale of tales" and linked together by a common character referred to as the Deaf-mute Girl in the pretty white dress.
February 2010 saw the release of Vanitas, described as "a memento mori for your digital hands". It was their first work for the iPhone OS platform and their first to feature music by Zoë Keating.

The purpose of Tale of Tales is to create elegant, emotional and rich interactive entertainment. They explicitly want to cater to people who are not enchanted by most contemporary computer games and who wouldn't mind more variety in their gameplay experiences. All of their products feature innovative forms of interaction, engaging poetic narratives and simple controls.

Tale of Tales started life with the design of 8, an epic single-player PC exploration game inspired by the various versions of the folk tale Sleeping Beauty. The 8 project was revisited in 2010 and developed under the name The Book of 8, which is in the early prototype stage.
The Endless Forest is their second big project, an online multiplayer game. It was launched in September 2005 and continues to evolve.
The Path is their first commercially available single-player game, released in March 2009. A spiritual sequel to 8, The Path is a short horror game inspired by the tale of Little Red Riding Hood.
In 2009 they also released Fatale, based on the play Salomé by Oscar Wilde. Fatale explores the story of Salomé in motion and stillness: an interactive vignette, much like an explorable painting.
The Graveyard is a quiet, short experience about death and life, released in 2008 to wide acclaim and controversy. In 2009 they also put an iPhone version of The Graveyard on the App Store.
Also in 2009 they created Vanitas, a virtual box of treasures for iPhone and iPod touch.

Sunday 23 January 2011

Within a Minute of Star Wars Episode III

Scene 158
26 shots
1,185 frames
910 artists
70,441 man-hours

Mark began speaking about the production pyramid of games and movies and how similar they are, so I decided to look at the special features disc of Star Wars to gain more knowledge of just how complex this pipeline can be when making a highly anticipated movie.
The Producer Rick McCallum has to get artists and technicians together so that George Lucas can bring his vision to life.

To begin with, the movie has to start with a screenplay. George's script was around 120 pages, and the duel on Mustafar was only 3 pages, yet he had to write the reason why they were even on this planet and what the duel was about.

The concept artists have to draw what George describes, and the concept art leader Erik Tiemens has to find a way to express George's vision. They start with basic pencil sketches and quick brainstorms, then move on to some basic 3D modelling and Photoshop sketches; this allows George to choose, from a variety of different views, what he believes will be the best image for his movie. They put all the images on boards so they can present more than one at a time. These designs go on for 3-4 months as George changes his script, because he can now begin to picture the planet.
Once this is complete, the artists do some quick draft storyboards of how the scene will work out and where the next scene will be shot.

Once George is happy with what has been chosen, all the artwork gets passed down to the pre-visualisation team, who create a living storyboard all in 3D. This is done because it is a quick and flexible way to design the shots of the movie and work out where the actors will move as the scene progresses, and if the director is not happy with a particular shot or movement it can be changed within a few hours. It is almost like working on a set with real actors: the team will give you a performance shot, then you can discuss what works and what doesn't, and you can ask for more shots until you are happy with the final work.

The production office then sorts out the actors' contracts, keeps the team working together with the crew, manages everyone's schedule, and ensures that everything is on time and all communication is up to scratch. They also make sure that everyone gets paid.

Catering has to be there to make sure that all crew members are fed and given the right amount of food, including their five a day. It also helps morale, as the amount of work that has to be done in 12 hours can be draining.

Production design has to create all the sets, turning the 2D concept sketches into 3D. They start with scaled-down miniature models, then look at where the most shots will be taken and build the architecture for those shots.

The construction team, including carpenters, plasterers, riggers, steel riggers, moulders and painters, makes the sets look as real as possible while also keeping an eye on production costs. The sets are torn down after the scene has been shot, sometimes on the same day the set was finished.

Props are made, including weapons and other items such as chairs. In the Mustafar duel the main props are the lightsabres, and they have to make lightweight, heavyweight and rubber versions. The prop designers don't just have to make fancy-looking weapons; they also have to make props that the actors like and prefer to use, and in this case Ewan McGregor chose which sabre he felt comfortable using.

Hair and makeup usually have to make the actors look beautiful, but in the Mustafar scene their job was to make them look hot, sweaty, beaten up and dramatic.

Costumes are then made for the actors. The department has to create every wardrobe for every actor. In this scene the clothes are getting burnt, and they have to show the progression of this throughout the fight, creating wear and tear. There were around 14 different pairs of trousers, 16 undershirts, 16 over-jackets and 12 belts just for Anakin in this one scene.

The actors have to bring the script to life, not just through their acting: they had to perform the actual duel and show emotion and passion for the reason behind the fight. They had to perform on green screen, as the world isn't real, but most of their focus is on the fight.

The stunt department goes through the fight scenes and takes the place of digital characters, including providing stunt doubles for all the actors. They brought in the lead stunt coordinator, Nick Gillard, 4 months before shooting started so they could get as much test footage as possible. Nick choreographs every fight and gives Anakin and Obi-Wan their own distinctive fighting styles. The team also had to create a moving set for the falling piece of debris that lands in the lava. The stunt doubles take the actors' place when this happens, so both actors' faces have to be scanned and digitally mapped onto the stunt doubles.

The director has to make sure that the filming is going just the way he imagined it and how he wanted to picture the scene. He also gives the actors ideas on how he has imagined this fight for years. He also has a script advisor who makes sure everything is in the correct place at the right time.

Cinematography helps capture the director's vision and create the mood and lighting of every scene. In the Mustafar scene the team had to create the correct lighting for a world that hadn't been digitally made yet, so the lighting had to be right. The team also has to make sure the camera doesn't lose focus while shooting. Because the world isn't real, they shoot with two cameras, one capturing digitally and recording separately from the other. As the scene is filmed, the engineers send the footage back to George on plasma screens so he can see how the film will look in that shot.

Sound recording is done in post-production, as this scene has no dialogue and is just fighting. This means that every actor has to record the grunts and noises after the scene is shot.

The editorial team works with the director to edit the film to the correct length and include the footage he wants. This is done during filming as well as afterwards. Once a scene is shot, the tape is taken out and copied; it is sent to another editor, logged onto the computer, passed to a digital editor and finally passed to the film editor to begin cutting the film. On Episode III they had two editors working at the same time, one in Sydney and one at Skywalker Ranch. The joy of editing is that you can shoot around the film and move the footage around to suit what the director wants. If a scene the director wants isn't there, they re-shoot in London, but only for a few weeks. Another job for the editor is to add in effects and location footage, in this case the volcanic eruptions of Mustafar.

ILM production then takes the basic edit of the film and has to turn all the green-screen shots into real-looking planets and creatures. They have to add all the textures and the look of a planet that only existed in George's mind. They also take all the artwork and use it to help create the world and the visual effects.

The VFX supervisors are there on every scene with the director to understand the master plan of the whole film. The difficult thing about this scene was the lava, as it spurts up and runs everywhere during the scene. An artist can work on a single shot for months.

3D matchmove/layout works to reference points, which is very important as the whole scene is shot on green screen. To make sure they don't get lost, they add markers to the screens to show when an action the director wants is happening during the shot. They use the markers to track the movements of the cameras, so when they add the digital environment it matches the correct perspective.

The animation department has to bring the digital characters, such as Yoda, to life and have them act alongside the real actors. During the Mustafar scene they had to animate the prop falling into the lava ocean with the two actors looking as though they were holding on for dear life; to do this they exaggerated the impact of the landing.

The digital environments team then paints the world and incorporates the footage of the smoke and the lava. This painting is basically a massive backdrop and is used for the whole Mustafar scene. Once the final painting is finished, they animate it with moving clouds and volcanic eruptions. It takes several months to complete a 25,000-pixel painting.

The lighting and rendering team has to light the scene and create shadows and reflections to make the world feel real. In this scene they have to make the lava look real by creating brightness underneath the props and darker areas where the lava can't reach; the lava itself is treated as masses of light-emitting pixels. They also add steam and movement to the arm to make it look as realistic as possible when the lava lands on it.

Digital modelling has to build and texture the arm prop in 3D, which takes 3-5 weeks. They had to build the inside of the hollow model as well, because it breaks apart when the lava lands on it.

Practical models are made using styrofoam to create the rocky landscape, which is then tilted at 10 degrees to change the flow of the lava. To create the lava they used methocel, a food additive, lit from underneath to change the brightness.

Motion control then photographs and captures movements in difficult situations that could not normally be shot. They light the model in different ways on the rig so they can get the different effects they are after.

Rotoscoping isolates the green-screen shots, allowing the actors to be separated out so the team can work on the background and add the CG elements. They also add shadows and the colours of the lightsabres.

The compositing group adds all the elements together to create the overall final shot. This includes smoke effects, lava effects and model elements.

Sound design has to look at every shot of the movie, as the shots were filmed with noise that drowns out the dialogue, so the sound has to be re-recorded. Using the on-set recordings would make the film sound less realistic, so they regenerate the sound, including footsteps, grunts and dialogue.
They also have to design sounds for the environment: lava has no sound of its own, so they have to create the bubbling of the lava and the roars as it spurts up into the air. This takes months of work, as the sound is added in layers, one at a time, just for one scene.

The score is the music. In this case John Williams wrote "Battle of the Heroes" to sit alongside "Duel of the Fates", as this fight is between two friends rather than the traditional enemies. The music also matters because there is no dialogue and it's just fighting.

The sound mix then puts all the sounds into the scene to make it as appropriate and dramatic as the scene needs to be. This is done while the film is being screened, so they can go back and re-edit it if someone disagrees with something.

The final screening is then shown on the big screen to all the main departments, and hopefully it doesn't require any more editing; if it does, they have a few hours to change something, and then that is it, whether they are happy with it or not.

All of this is repeated for 137 other scenes, and that is what creates Revenge of the Sith.

Monday 10 January 2011

12 Principles of animation

The Twelve Basic Principles of Animation is a set of principles of animation introduced by the Disney animators Ollie Johnston and Frank Thomas in their 1981 book The Illusion of Life: Disney Animation. Johnston and Thomas in turn based their book on the work of the leading Disney animators from the 1930s onwards, and their effort to produce more realistic animations. The main purpose of the principles was to produce an illusion of characters adhering to the basic laws of physics, but they also dealt with more abstract issues, such as emotional timing and character appeal.

The book and its principles have become generally adopted, and have been referred to as the "Bible of the industry". In 1999 the book was voted number one of the "best animation books of all time" in an online poll. Though originally intended to apply to traditional, hand-drawn animation, the principles still have great relevance for today's more prevalent computer animation.



Squash and stretch
The most important principle is "squash and stretch", the purpose of which is to give a sense of weight and flexibility to drawn objects. It can be applied to simple objects, like a bouncing ball, or more complex constructions, like the musculature of a human face.  Taken to an extreme point, a figure stretched or squashed to an exaggerated degree can have a comical effect. In realistic animation, however, the most important aspect of this principle is the fact that an object's volume does not change when squashed or stretched. If the length of a ball is stretched vertically, its width (in three dimensions, also its depth) needs to contract correspondingly horizontally.
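
The volume rule can be written down directly: if you scale a shape up along one axis, the other axes have to shrink so the product of the scale factors stays at 1. Here is a minimal Python sketch of that idea for a 3D ball; the function is my own, not any particular package's tool.

```python
# Sketch: volume-preserving squash and stretch. Stretching by s along one axis
# means the other two axes each scale by 1/sqrt(s), so s * (1/sqrt(s))**2 == 1
# and the volume stays constant.
import math

def squash_stretch(stretch):
    """Return (x, y, z) scale factors for a vertical stretch factor `stretch`."""
    side = 1.0 / math.sqrt(stretch)
    return (side, stretch, side)

print(squash_stretch(2.0))   # stretched ball: 2x height, ~0.71x width and depth
print(squash_stretch(0.5))   # squashed ball: half height, ~1.41x width and depth
```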

Anticipation
Anticipation is used to prepare the audience for an action, and to make the action appear more realistic.  A dancer jumping off the floor has to bend his knees first; a golfer making a swing has to swing the club back first. The technique can also be used for less physical actions, such as a character looking off-screen to anticipate someone's arrival, or attention focusing on an object that a character is about to pick up.

Staging
This principle is akin to staging as it is known in theatre and film. Its purpose is to direct the audience's attention, and make it clear what is of greatest importance in a scene; what is happening, and what is about to happen. Johnston and Thomas defined it as "the presentation of any idea so that it is completely and unmistakably clear", whether that idea is an action, a personality, an expression or a mood. This can be done by various means, such as the placement of a character in the frame, the use of light and shadow, and the angle and position of the camera. The essence of this principle is keeping focus on what is relevant, and avoiding unnecessary detail.

Straight ahead action and pose to pose
These are two different approaches to the actual drawing process. "Straight ahead action" means drawing out a scene frame by frame from beginning to end, while "pose to pose" involves starting with drawing a few, key frames, and then filling in the intervals later. "Straight ahead action" creates a more fluid, dynamic illusion of movement, and is better for producing realistic action sequences. On the other hand, it is hard to maintain proportions, and to create exact, convincing poses along the way. "Pose to pose" works better for dramatic or emotional scenes, where composition and relation to the surroundings are of greater importance. A combination of the two techniques is often used.

Computer animation removes the problems of proportion related to "straight ahead action" drawing; however, "pose to pose" is still used for computer animation, because of the advantages it brings in composition. The use of computers facilitates this method, as computers can fill in the missing sequences in between poses automatically. It is, however, still important to oversee this process, and apply the other principles discussed.
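
The "filling in" that computers do between key poses is, at its simplest, just interpolation. Here is a minimal sketch using straight linear interpolation between two made-up key poses; real packages use splines and the easing described under "slow in and slow out", so treat this as the bare idea only.

```python
# Sketch: the simplest possible in-betweening, linear interpolation between two
# key poses stored as dictionaries of channel values.
def inbetween(pose_a, pose_b, t):
    """Blend two poses; t=0 gives pose_a, t=1 gives pose_b."""
    return {channel: (1 - t) * pose_a[channel] + t * pose_b[channel]
            for channel in pose_a}

key_1 = {"hip_height": 0.0, "knee_bend": 90.0}   # example channels, made up
key_2 = {"hip_height": 50.0, "knee_bend": 10.0}

for frame in range(5):
    t = frame / 4
    print(frame, inbetween(key_1, key_2, t))
```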

Follow through and overlapping action
These closely related techniques help render movement more realistic, and give the impression that characters follow the laws of physics. "Follow through" means that separate parts of a body will continue moving after the character has stopped. "Overlapping action" is the tendency for parts of the body to move at different rates (an arm will move on different timing from the head, and so on). A third technique is "drag", where a character starts to move and parts of him take a few frames to catch up. These parts can be inanimate objects like clothing or the antenna on a car, or parts of the body, such as arms or hair. On the human body, the torso is the core, with the arms, legs, head and hair as appendages that normally follow the torso's movement. Body parts with much tissue, such as large stomachs and breasts, or the loose skin on a dog, are more prone to independent movement than bonier body parts. Again, exaggerated use of the technique can produce a comical effect, while more realistic animation must time the actions exactly to produce a convincing result.
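
One cheap way to fake "drag" digitally is to let the trailing part chase the lead part with a bit of lag, so it keeps moving for a few frames after the lead stops. This little sketch is only an approximation I wrote to show the idea; the 0.3 smoothing factor is arbitrary.

```python
# Sketch: "drag"/follow-through as simple exponential smoothing. The follower
# lags behind the leader and keeps moving for several frames after the leader
# has stopped, which is the effect described above.
leader = [0, 2, 4, 6, 8, 10, 10, 10, 10, 10]   # the leader moves, then stops
follower = 0.0
for frame, target in enumerate(leader):
    follower += 0.3 * (target - follower)       # chase the leader with some lag
    print(frame, target, round(follower, 2))
```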

Slow in and slow out
The movement of the human body, and most other objects, needs time to accelerate and slow down. For this reason, an animation looks more realistic if it has more frames near the beginning and end of a movement, and fewer in the middle. This principle goes for characters moving between two extreme poses, such as sitting down and standing up, but also for inanimate, moving objects.
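
In computer animation terms, slow in and slow out is an easing curve: time gets remapped so the in-between positions bunch up near the start and end of a move. A quick sketch using the classic smoothstep curve (my choice of curve; any ease-in-out function would do):

```python
# Sketch: ease in / ease out using the smoothstep curve 3t^2 - 2t^3, which is
# flat at both ends, so positions crowd together near the extremes of the move.
def smoothstep(t):
    return 3 * t**2 - 2 * t**3

frames = 11
for frame in range(frames):
    t = frame / (frames - 1)
    position = 100 * smoothstep(t)       # move from 0 to 100 units
    print(frame, round(position, 1))     # small steps at the ends, big in the middle
```
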
Arcs
Most human and animal actions occur along an arched trajectory, and animation should reproduce these movements for greater realism. This can apply to a limb moving by rotating a joint, or a thrown object moving along a parabolic trajectory. The exception is mechanical movement, which typically moves in straight lines.
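
For the thrown-object case the arc is literally a projectile parabola, and sampling it frame by frame shows why straight-line in-betweens look wrong. The launch velocity and gravity value below are just example numbers I picked.

```python
# Sketch: sampling a parabolic arc for a thrown object, frame by frame at 25 fps.
FPS = 25
GRAVITY = 9.81            # m/s^2
vx, vy = 4.0, 6.0         # example launch velocity in metres per second

for frame in range(32):
    t = frame / FPS
    x = vx * t
    y = vy * t - 0.5 * GRAVITY * t**2
    print(frame, round(x, 2), round(max(y, 0.0), 2))   # clamp at ground level
```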

Secondary action
Adding secondary actions to the main action gives a scene more life, and can help to support the main action. A person walking can simultaneously swing his arms or keep them in his pockets, he can speak or whistle, or he can express emotions through facial expressions. The important thing about secondary actions is that they emphasize, rather than take attention away from the main action. If the latter is the case, those actions are better left out. In the case of facial expressions, during a dramatic movement these will often go unnoticed. In these cases it is better to include them at the beginning and the end of the movement, rather than during.

Timing
Timing refers to the number of drawings or frames for a given action, which translates to the speed of the action on film. On a purely physical level, correct timing makes objects appear to abide to the laws of physics; for instance, an object's weight decides how it reacts to an impetus, like a push. Timing is critical for establishing a character's mood, emotion, and reaction. It can also be a device to communicate aspects of a character's personality.

Exaggeration
Exaggeration is an effect especially useful for animation, as perfect imitation of reality can look static and dull in cartoons. The level of exaggeration depends on whether one seeks realism or a particular style, like a caricature or the style of an artist. The classical definition of exaggeration, employed by Disney, was to remain true to reality, just presenting it in a wilder, more extreme form. Other forms of exaggeration can involve the supernatural or surreal, alterations in the physical features of a character, or elements in the storyline itself. It is important to employ a certain level of restraint when using exaggeration; if a scene contains several elements, there should be a balance in how those elements are exaggerated in relation to each other, to avoid confusing or overawing the viewer.

Solid drawing
The principle of solid drawing means taking into account forms in three-dimensional space, giving them volume and weight. The animator needs to be a skilled draughtsman and has to understand the basics of three-dimensional shapes, anatomy, weight, balance, light and shadow etc. For the classical animator, this involved taking art classes and doing sketches from life. One thing in particular that Johnston and Thomas warned against was creating "twins": characters whose left and right sides mirrored each other, and looked lifeless. Modern-day computer animators draw less because of the facilities computers give them, yet their work benefits greatly from a basic understanding of animation principles, and their additions to basic computer animation.

Appeal
Appeal in a cartoon character corresponds to what would be called charisma in an actor. A character who is appealing is not necessarily sympathetic — villains or monsters can also be appealing — the important thing is that the viewer feels the character is real and interesting. There are several tricks for making a character connect better with the audience; for likable characters a symmetrical or particularly baby-like face tends to be effective.

Animation Basics

It is the beginning of the new term and we have been assigned a new project on animation. Animation has many different definitions:
  • Animation is the rapid display of a sequence of images of 2-D or 3-D artwork or model positions in order to create an illusion of movement.
  • The making of animated cartoons.
  • Quality of being active or spirited or alive and vigorous.
  • The condition of living or the state of being alive: "while there's life there's hope"; "life depends on many chemical and physical processes".
  • Animation is a simulation of movement by displaying sequential images in (timed) succession.

Animation
Animation is the rapid display of a sequence of images of 2-D or 3-D artwork or model positions in order to create an illusion of movement. The effect is an optical illusion of motion due to the phenomenon of persistence of vision, and can be created and demonstrated in several ways. The most common method of presenting animation is as a motion picture or video program, although there are other methods.

Early examples of attempts to capture the phenomenon of motion drawing can be found in Paleolithic cave paintings, where animals are depicted with multiple legs in superimposed positions, clearly attempting to convey the perception of motion.
A 5,000-year-old earthen bowl found at Shahr-i Sokhta in Iran has five images of a goat painted along the sides. This has been claimed to be an example of early animation. However, since no equipment existed to show the images in motion, such a series of images cannot be called animation in a true sense of the word.
A Chinese zoetrope-type device had been invented in 180 AD. The phenakistoscope, praxinoscope, and the common flip book were early popular animation devices invented during the 19th century.
These devices produced the appearance of movement from sequential drawings using technological means, but animation did not really develop much further until the advent of cinematography.
There is no single person who can be considered the "creator" of film animation, as there were several people working on projects which could be considered animation at about the same time.

Traditional animation
Traditional animation (also called cel animation or hand-drawn animation) was the process used for most animated films of the 20th century. The individual frames of a traditionally animated film are photographs of drawings, which are first drawn on paper. To create the illusion of movement, each drawing differs slightly from the one before it. The animators' drawings are traced or photocopied onto transparent acetate sheets called cels, which are filled in with paints in assigned colors or tones on the side opposite the line drawings. The completed character cels are photographed one-by-one onto motion picture film against a painted background by a rostrum camera.
Examples of traditionally animated feature films include Pinocchio (United States, 1940), Animal Farm (United Kingdom, 1954), and Akira (Japan, 1988). Traditionally animated films which were produced with the aid of computer technology include The Lion King (US, 1994), Sen to Chihiro no Kamikakushi (Spirited Away) (Japan, 2001), and Les Triplettes de Belleville (2003).

Stop-motion animation
Stop-motion animation is used to describe animation created by physically manipulating real-world objects and photographing them one frame of film at a time to create the illusion of movement. There are many different types of stop-motion animation, usually named after the type of media used to create the animation. Computer software is widely available to create this type of animation.
Puppet animation typically involves stop-motion puppet figures interacting with each other in a constructed environment, in contrast to the real-world interaction in model animation. The puppets generally have an armature inside of them to keep them still and steady as well as constraining them to move at particular joints. Examples include The Tale of the Fox (France, 1937), The Nightmare Before Christmas (US, 1993), Corpse Bride (US, 2005), Coraline (US, 2009), the films of Jiri Trnka and the TV series Robot Chicken (US, 2005–present).
Computer animation encompasses a variety of techniques, the unifying factor being that the animation is created digitally on a computer.
2D animation
2D animation figures are created and/or edited on the computer using 2D bitmap graphics, or created and edited using 2D vector graphics. This includes automated computerised versions of traditional animation techniques such as tweening, morphing, onion skinning and interpolated rotoscoping.
Examples: Foster's Home for Imaginary Friends, Danny Phantom, Waltz with Bashir, The Grim Adventures of Billy & Mandy
  • Analog computer animation
  • Flash animation
  • Powerpoint animation
3D animation
In 3D animation, figures are digitally modelled and manipulated by an animator. In order to manipulate a mesh, it is given a digital skeletal structure that can be used to control the mesh; this process is called rigging. Various other techniques can be applied, such as mathematical functions (e.g. gravity, particle simulations), simulated fur or hair, effects such as fire and water, and the use of motion capture, to name but a few; these techniques fall under the category of 3D dynamics. Many 3D animations are very believable and are commonly used as visual effects for recent movies.
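
The idea of a digital skeleton driving a mesh boils down to each vertex being moved by a weighted mix of its bones' transforms (often called linear blend skinning). Here is a stripped-down 2D sketch with NumPy, with all the matrices and weights invented purely for illustration; real rigs also use bone offsets, constraints and so on.

```python
# Stripped-down sketch of linear blend skinning on a single 2D vertex: each
# bone contributes its transform, weighted by how strongly the vertex is bound
# to it. All numbers here are invented for illustration.
import numpy as np

def rotation(degrees):
    r = np.radians(degrees)
    return np.array([[np.cos(r), -np.sin(r)],
                     [np.sin(r),  np.cos(r)]])

vertex = np.array([1.0, 0.5])            # rest-pose position
bones = [rotation(0), rotation(45)]      # current bone transforms
weights = [0.7, 0.3]                     # rigging weights, summing to 1

skinned = sum(w * (m @ vertex) for w, m in zip(weights, bones))
print(skinned)                           # the deformed vertex position
```
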
Film = 24 frames per second.
PAL (Phase Alternating Line – UK standard) = 25 frames per second.
NTSC (National Television System Committee – US Standard) = 29.97 frames per second.

At PAL's 25 frames per second:
25f = 1 second
250f = 10 seconds
1500f = 1 minute
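
These conversions are just frames divided by the frame rate. A quick sketch that reproduces the PAL figures above and shows how the film and NTSC rates differ:

```python
# Sketch: converting frame counts to seconds at the three standards above.
RATES = {"film": 24.0, "PAL": 25.0, "NTSC": 29.97}

def frames_to_seconds(frames, standard="PAL"):
    return frames / RATES[standard]

print(frames_to_seconds(25))                       # 1.0 second (PAL)
print(frames_to_seconds(1500))                     # 60.0 seconds (PAL)
print(frames_to_seconds(24, "film"))               # 1.0 second (film)
print(round(frames_to_seconds(1500, "NTSC"), 2))   # ~50.05 seconds at NTSC rate
```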


Ones (24 images per second)
01 02 03 04 05 06 07 08 09 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24
Twos (12 images per second)
01 01 02 02 03 03 04 04 05 05 06 06 07 07 08 08 09 09 10 10 11 11 12 12
Threes (8 images per second)
01 01 01 02 02 02 03 03 03 04 04 04 05 05 05 06 06 06 07 07 07 08 08 08
Fours (6 images per second)
01 01 01 01 02 02 02 02 03 03 03 03 04 04 04 04 05 05 05 05 06 06 06 06

However, when you use fewer images, the quality of the animation is poorer and the motion less smooth compared with using more images per second.
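
The ones/twos/threes/fours tables above are just each drawing held for a set number of frames, so a 24-frame second on twos needs only 12 drawings. A small sketch that regenerates them:

```python
# Sketch: regenerating the "ones / twos / threes / fours" tables above.
# Shooting "on n" means each drawing is held for n consecutive frames,
# so a 24-frame second needs 24 / n distinct drawings.
def shoot_on(n, frames_per_second=24):
    drawings = frames_per_second // n
    return [drawing for drawing in range(1, drawings + 1) for _ in range(n)]

for n in (1, 2, 3, 4):
    print(f"on {n}s:", shoot_on(n))
```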

Persistence of vision
Persistence of vision is a phenomenon of the eye in which an afterimage is thought to persist for approximately one twenty-fifth of a second.
The myth of persistence of vision is the mistaken belief that human perception of motion (which is brain-centred) is the result of persistence of vision (which is eye-centred). This myth was debunked in 1912 by Wertheimer, but it persists in many citations in classic and modern film-theory texts. A more plausible explanation of motion perception (at least on a descriptive level) involves two distinct perceptual illusions: the phi phenomenon and beta movement.

Phi Phenomenon
The phi phenomenon is an optical illusion, defined by Max Wertheimer, that formed part of the basis of the theory of cinema applied by Hugo Münsterberg in 1916. The illusion is based on the principle that the human eye is capable of perceiving movement from discrete pieces of information, for example a succession of images: from a slideshow of frozen images shown at a certain number of images per second, we observe constant movement.
The phi phenomenon is an illusion of our brain that allows us to perceive constant movement instead of a sequence of images; in other words, we invent information that does not exist (between one image and the next) in order to perceive movement. The phi phenomenon, which might be considered what makes cinema work, is only a limitation of the human eye.
[Image: the lilac chaser illusion]



Beta Movement
Beta movement is an optical illusion in which our brain perceives continuous movement from a succession of adjacent light pulses. We interpret movement when what actually happens is an exchange of luminous signals.
A good example of beta movement is any kind of LED indicator display that shows information: it looks as though the points of light are moving, but in truth a group of lights is simply turning on and off in sequence.