Abstract
Behavioural systems are a powerful way of representing flocks of animals or a single animal’s behaviour. This project aims to create a behavioural system that differs from existing ones by attempting to mimic the behaviour of dogs playing with bouncing balls. The report briefly explains the research into the behaviour of dogs. It also covers important aspects of a behavioural system such as the state engine, the field of view, and collision detection. The system has been written in MEL for use in Maya, and the report also explains the use of expressions in the system.
Contents
Logic 5
The state engine 7
Dynamics 13
Collision detection 14
Actions 15
GUI 16
Difficulties 17
Improvements 18
Other interesting facts 19
Conclusion 20
Bibliography 21
Appendix 22
Instructions for running the system 23
Introduction
Many times I have thought that behavioural animation was just a way of using code to animate many objects at once. However, when working on this project I found this not to be the case. Behavioural animation strives to model behaviour. The number of characters it involves differs from project to project, but it is the actual mimicking of the behaviour that is essential. As Craig Reynolds says:
‘Typical computer animation models only the shape and physical properties of the characters, whereas behavioral or character-based animation seeks to model the behavior of the character.’ (Reynolds, 1987:26)
The theory behind a behavioural system begins with the belief that if one member can be made to behave, then getting all the rest to behave should be easy. It should be considered that ‘Each (member) is an independent actor, or intelligent agent.’ (Schaefer, Mackulak, Cochran, Cherilla, 1998:1155) If each member or boid knows how to interact and behave, then increasing the number of boids will not affect how each member behaves. The biggest difference will be that there will be more obstacles to take into consideration and therefore more calculations to compute.
The behavioural system I have created consists of one or two dogs and a bouncing ball. The basics of this system are simple: the dog wants the ball; he looks for it, and when he sees it he runs towards it and catches it. If there are two dogs, both dogs have exactly the same desire for the ball.
This report will outline how I went about creating this particular system and some of the main points that should be considered when writing a system of this sort.
I took inspiration for this idea whilst I was watching dogs play on the beach; it was very interesting to see how the dogs were constantly looking for something to play with. I observed how they would rush to a ball as soon as they saw one moving. It was even more amusing to see a group of dogs chase after the same ball. When this was the case they would run even more frantically towards the ball until one of them caught it, and even at this point the dogs would still try like crazy to get hold of the ball. However, once the dog that had the ball let go of it, they no longer seemed interested in it until it was thrown again by a person and was once more in motion.
Whilst watching the dogs play I had to agree with Reynolds when he states that ‘Flocks and related synchronized group behaviors such as schools of fish or herds of land animals are both beautiful to watch and intriguing to contemplate.’ (Reynolds, 1987:25) A group of dogs may not be a herd, but its behaviour is intriguing to watch in the same way.
I decided to observe this behaviour a little more closely and obtained a video of a ball being thrown to a single dog and a ball being thrown to two dogs at the same time. It gave me an idea of what attracted them to the ball: the bouncier it was, the more eager they were to catch it. The dogs that were filmed were quite calm, very unlike the ones I had seen at the beach, and on many occasions did not even budge. On the whole they seemed more sedentary when the ball was rolling slowly than when it was in faster or bouncier motion.
Dog behaviour can be very complex and, as I observed, every dog has a different way of acting in this situation. I decided to generalise and simplify the behaviour in order to begin work on the system.
Logic
After researching the behaviour of dogs I became very excited about the project and decided to see what had been done before and to start researching the logic side of the system.
In Craig Reynolds’ paper ‘Flocks, Herds, and Schools: A Distributed Behavioral Model’ he states that ‘all that should be required to create a simulated flock is to create some instances of the simulated bird model and allow them to interact.’ (Reynolds, 1987:25) I took this as a starting point: if I could get the system to work for one dog and one ball, then enlarging the system should not be too complicated.
I began to figure out the logic behind the behaviour I wanted to mimic and came up with a set of initial rules and priorities:
When one ball comes into the dog’s field of view the dog will follow it until it catches it. If there is more than one ball in front of it, it will choose a ball to catch according to the following priority:
1. Highest bouncing ball.
2. Lower bouncing ball.
3. Rolling ball.
It will completely ignore a stopped ball.
Figure 1. Diagram of priority for bouncing balls.
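This priority could eventually be expressed as a very small selection procedure. The following is only a rough sketch of that idea in MEL, assuming several ball transforms exist and that a separate check (not shown) has already discarded stopped balls; the procedure and object names are hypothetical, not code from the final system.

proc string chooseBall(string $balls[])
{
    string $best = "";
    float $bestHeight = -1;
    for ($ball in $balls)
    {
        // a ball bouncing higher scores higher; a rolling ball sits near the
        // floor, so it is only chosen when nothing is bouncing
        float $h = `getAttr ($ball + ".translateY")`;
        if ($h > $bestHeight)
        {
            $bestHeight = $h;
            $best = $ball;
        }
    }
    return $best;
}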
In the case of having two dogs in the system I thought of the following logic:
Each dog will have the same behaviour towards the balls, but if dog 2 is in the field of view of dog 1, then dog 1 will forget all other balls and chase the ball dog 2 is after until one of them catches the ball.
Figure 2. Diagram of priority when two dogs are involved.
This logic seemed appropriate for what I was aiming to achieve and gave me a good starting point for implementing the system. I decided to start very simply, with one ball and one cylinder that represented a dog; but I needed to give the system a structure in order to make it work, which is where the state engine comes in.
The state engine
State engines, or state machines as they are sometimes called, can be used in many different types of systems, including behavioural systems. Cisco Systems define a state machine as:
‘a device that stores the state of something and at a given time can operate on input to move from one state to another and/or cause an action or output to take place.’ (Cisco Systems)
In the case of a behavioural system, creating a simple state engine can be quite a straightforward way of getting started. The programmer or scripter “splits” the behaviour into separate states; for example, a creature could be in three different states: a hungry state, an eating state, and a full/satisfied state. The state engine would take in a value, say a value for hunger, which might be represented as a number between 0.0 and 1.0. In The Computer Animator’s Technical Handbook the authors suggest that hunger could be set by default to 0, in which case the creature would be in the full state. With time the hunger variable would increase until reaching, for example, 0.6. The state engine would read this variable and switch the creature to the hungry state, in which it would look for food; when food was found the hunger variable would decrease and the state would switch once again to full/satisfied.
Figure 3. Graphical representation of a state engine for the behaviour of a creature.
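As a minimal sketch of this idea, such a state engine could be written as a MEL procedure called once per frame. The creature name, the attribute names, and the rates of change below are purely illustrative and are not taken from the handbook.

proc hungerStateEngine(string $creature)
{
    float $hunger = `getAttr ($creature + ".hunger")`;
    int   $state  = `getAttr ($creature + ".state")`;   // 0 = full, 1 = hungry, 2 = eating

    if ($state == 2)
        $hunger -= 0.05;        // eating brings the hunger value back down
    else
        $hunger += 0.01;        // otherwise hunger slowly builds up

    if ($state == 0 && $hunger >= 0.6)
        $state = 1;             // hungry: the creature starts looking for food
    if ($state == 2 && $hunger <= 0.0)
        $state = 0;             // satisfied again
    // the switch from hungry (1) to eating (2) happens when food is found (not shown)

    setAttr ($creature + ".hunger") (clamp(0, 1, $hunger));
    setAttr ($creature + ".state") $state;
}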
The state engine in the dog behavioural system started off being very simple, existing as the main procedure in the MEL script. It consisted of three states:
The dog is looking for the ball.
The dog runs to the ball.
The dog catches the ball.
Figure 4. Graphical representation of the state engine used in this project.
At this point the system was very primitive: it consisted of a cylinder with an extra attribute called lookynessState, and depending on the position of the sphere (the ball) this attribute would change from 0 to 1 to 2, and nothing much else happened. The dog would only be aware of the ball if the ball was in front of it on the z-axis. Although very simple, it was a very good start and an encouraging way of seeing some sort of behaviour quickly.
The final state engine was not implemented in the script as had been planned, but in an expression in Maya. I chose this simply because it was the quickest and most efficient way for the state of the dog to be checked constantly; expressions are evaluated at every frame, so they seemed most suitable. Below is the expression which controls the state of the dog, that is, the state engine of the system:
if ( fieldofview() == 0 )
    Boid.lookynessState = 0;
else if ( fieldofview() == 1 )
    Boid.lookynessState = 1;
else if ( fieldofview() == 2 )
    Get();

if ((frame % 24) == 0 && Boid.lookynessState == 1)
    Looky();
In this final state engine the three states are still controlled by the same attribute, but now a field of view and a form of collision detection determine when this attribute changes. When the state changes, other procedures are called to control the action the dog must take. The first state (defined by lookynessState being 0) is different, as the actions which accompany that state are controlled by an expression. (More is discussed about this on page 14.)
The field of view
One of the most important aspects of any system is how the behavioural model perceives the world around it. Initially I hadn’t given this part of the system much thought, but once I began scripting I realised that it is a fundamental element of any behavioural system. There are different ways in which this could be done. I had thought of implementing some sort of collision detection algorithm to create a field of view: for example, in the diagram below, whenever the ball intersected with the cone the ball would be visible.
Figure 5. Example of an idea for using collision detection to implement a field of view.
This method was attempted, but after realising that Maya doesn’t have built-in collision detection except when working with dynamics, I decided to leave it. Making the cone a part of the dynamics system, that is a rigid body, would have made the ball bounce off it. This was not the desired effect, as all I was attempting was to give the dog a field of view without affecting the trajectory of the ball, so I looked into what other methods there were.
Image processing was suggested to me by various people. The idea is very intriguing: it relies on processing a rendered image and checking whether the ball, in the case of my system, is visible or not. This can be done by using colour coding, for example making the ball red. It is quite a new technique which has produced amazing results. Massive (the behavioural software used by Weta for The Lord of the Rings) uses rendered images and colour coding to determine whether an enemy or a friend is in sight and to perceive what is around the agent. What’s more, I discovered that it not only uses rendered images to determine the agent’s field of view, but also uses sound:
‘Massive-generated characters are convincing in part because their inputs come from the digital landscape around them; each has eyes and ears on which it must rely to navigate through battle. At Helm's Deep, when a group of Middle Earth's most fearsome creatures (the mutated Uruk-Hai fighters) advance, they generate a humming sound. The effect is ominous, and the audio cues, Regelous says, also keep the digitally generated creatures from tripping over each other. When one creature comes closer, his humming becomes more distinct, and the next fighter over knows to make room.’ (Koeppel)
Although I did not consider using sound in my system (I didn’t think it was necessary for a simple behaviour), I did consider using a camera as a field of view and rendering out images from that camera in order to see whether the ball was visible or not. However, since I had decided to approach this project using MEL and expressions, image processing seemed an odd choice, as I would probably have had to process each image outside of Maya in an OpenGL program.
I decided to go for the third approach I had learned of: a viewing frustum. A frustum usually consists of six planes: a near plane, a far plane, and top, bottom, left, and right planes. It is frequently used for determining what is inside the camera view and clipping anything that is outside.
In the case of the dog it consisted of only three planes. The near plane was necessary (had it not been there, the field of view would have extended behind the dog’s head), but the far plane was not, as the dog could have an infinite field of view for the purposes of this system. The left and right planes were also kept, but the top and bottom weren’t necessary as the Y translation of the ball was never great enough to take it out of the dog’s field of view.
To determine whether the ball is visible, the pivot of the ball is checked to see whether it is in front of all the planes. If the signed distance from the ball to a plane is positive then the ball is in front of that plane; if it is negative then it is behind it. So if the distance from the ball to all the planes is positive, the ball is visible.
To calculate this distance, three points are taken from each plane along with the position of the ball: from the diagram in figure 6, for plane A these are p1, p2, and p3, with ball position S.
N = (p1 − p2) × (p2 − p3), where × is the cross product.
D = −(N · p1), where · is the dot product.
Distance from ball to plane = (S · N) + D
This calculation must be repeated for each plane.
Figure 6. Viewing frustum used to represent the dog’s field of view.
One of the benefits of using MEL is that there is no need to go through trigonometric calculations to find the three points, as the positions of the planes are already known. All that is necessary is to select three vertices on each plane and read their world-space coordinates, and then carry out the rest of the calculations.
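As an illustration only, the plane test could be written along the following lines in MEL. The object names (a field-of-view plane passed in as $plane and a ball passed in as $ball) and the choice of vertices are assumptions, not the exact code used in the system.

proc int inFrontOfPlane(string $plane, string $ball)
{
    // world-space positions of three vertices on the plane
    float $a[] = `xform -q -ws -t ($plane + ".vtx[0]")`;
    float $b[] = `xform -q -ws -t ($plane + ".vtx[1]")`;
    float $c[] = `xform -q -ws -t ($plane + ".vtx[2]")`;
    vector $p1 = <<$a[0], $a[1], $a[2]>>;
    vector $p2 = <<$b[0], $b[1], $b[2]>>;
    vector $p3 = <<$c[0], $c[1], $c[2]>>;

    // world-space position of the ball's pivot
    float $s[] = `xform -q -ws -rp $ball`;
    vector $S = <<$s[0], $s[1], $s[2]>>;

    // plane normal, plane constant, and signed distance of the ball
    vector $N = cross($p1 - $p2, $p2 - $p3);
    float  $D = -dot($N, $p1);
    float  $dist = dot($S, $N) + $D;

    return ($dist > 0);   // positive: the ball is in front of this plane
}

Calling this for the near, left, and right planes and requiring all three results to be positive gives the visibility test described above.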
Dynamics
The behavioural animation for the ball is less complex than that of the dog, because it does not have to think; it has no intelligence. In the first stages of scripting the system I had thought that I would animate the bouncing ball using expressions, but I soon realised that this was more time consuming than I had expected and the results were not pleasing. I went back to one of my first ideas, in which I had thought of using particles (Maya dynamics) as a base for the system, and decided that although the dog should not be part of the dynamics system (as its behaviour is too complex), the ball and the floor should be: they are rigid bodies, active and passive respectively. This is a quick and effective way of representing the ball’s behaviour.
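For reference, the kind of setup described here can be created with a few MEL commands; the object names ball and floor are assumptions, and the bounciness value is arbitrary.

// make the floor a passive rigid body and the ball an active, bouncy one
rigidBody -passive "floor";
rigidBody -active -bounciness 0.8 "ball";

// add gravity and connect it to the ball so that it falls and bounces
string $g[] = `gravity -m 9.8 -dx 0 -dy -1 -dz 0`;
connectDynamic -fields $g[0] "ball";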
Collision detection
Collision detection is usually another very big part of any type of behavioural system, although it has perhaps been less important in this particular project: the number of “boids” or “agents” (dogs) has never surpassed two, so there has never been much chance of a collision occurring. However, if further work is carried out on the system and the number is increased, then further collision detection algorithms will have to be implemented in order for the system to work appropriately; otherwise objects soaring through each other could produce a ‘disconcerting visual effect’. (Moore, Wilhelms, 1988:289)
This is not to say that it hasn’t been used in the system. Collision detection is a determining factor in the change from the second state to the third state.
I came across a problem when the dog moved along the motion path to catch the ball: it was never reaching its final state as it was never catching the ball. I needed some sort of way of telling the dog that it was close enough to the ball to catch it.
Whilst researching for this project I had looked at some collision detection algorithms and thought about how to implement them. I thought perhaps of testing at each frame whether a vertex of the ball had intersected a face of the dog’s head. In theory this should work, but it would be very time consuming and would involve many calculations, so when a simpler way of doing collision detection was suggested to me I decided to go for that one, as it seemed more appropriate for this system.
This collision detection consists of checking the distance between the ball and the dog at every frame; when that distance is very small (even if they haven’t actually collided) it is considered a collision. This proved to be even more effective than I thought it would be. The positions of the dog and the ball are obtained with the MEL command ‘xform’, which returns the position of the pivot of each, so the distance being checked is from pivot to pivot. Since the head of the dog is further forward than its pivot, it appears as if the head of the dog has collided with the ball most of the time. The fact that a point represents the model has caused problems in behavioural systems in the past (see the appendix), but for mine it seemed to be an advantage.
In order to calculate this distance Pythagoras’s theorem is used:
distance = √((ball_positionX − dog_positionX)² + (ball_positionZ − dog_positionZ)²)
ball_positionX refers to the X coordinate of the sphere’s pivot, and dog_positionX refers to the X coordinate of the dog’s pivot point; the same applies for the Z coordinates. When this distance is less than 2 units it is considered a collision.
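As a minimal sketch (assuming the dog’s transform is the Boid node used in the state-engine expression and the ball is simply called ball), the check looks roughly like this:

// pivot positions of the ball and the dog in world space
float $ballPos[] = `xform -q -ws -rp ball`;
float $dogPos[]  = `xform -q -ws -rp Boid`;

// Pythagoras in the XZ plane only
float $dist = sqrt(pow($ballPos[0] - $dogPos[0], 2) + pow($ballPos[2] - $dogPos[2], 2));

// anything closer than 2 units counts as a collision (a catch)
int $caught = ($dist < 2);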
Actions
As stated in the section about the state engine, with every state change a procedure defining an action is called. These are the three main actions involved in the dog’s behaviour:
Rotation: This action represents the first state: the dog is looking for the ball. It is defined by an expression because this is the method most suited to this particular action: the dog rotates randomly anywhere from 0 to 360 degrees every 24 frames. Expressions allow this kind of control over how often an action occurs (in this case every 24 frames), and it is easier to animate a simple rotation through an expression than through a procedure in a script.
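A sketch of the kind of expression this describes, reusing the Boid transform and the lookynessState attribute from the state-engine expression; restricting the rotation to the looking state is my assumption here:

// every 24 frames, while the dog is still looking, pick a new random heading
if ((frame % 24) == 0 && Boid.lookynessState == 0)
    Boid.rotateY = rand(0, 360);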
Motion path: This action represents the second state: the dog runs to the ball. It is triggered when the field of view procedure returns the value 1 to indicate that the ball is visible. The expression then calls the procedure ‘Get’. A curve is created from the dog to the ball and a path animation is started along that curve.
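A rough sketch of what a ‘Get’-style procedure could do, assuming transforms named Boid and ball; the linear curve and the 50-frame travel time are arbitrary choices, not the values used in the actual script:

// build a straight curve from the dog's pivot to the ball's pivot
float $d[] = `xform -q -ws -rp Boid`;
float $b[] = `xform -q -ws -rp ball`;
string $path = `curve -d 1 -p $d[0] $d[1] $d[2] -p $b[0] $b[1] $b[2]`;

// attach the dog to the curve so it runs to the ball over 50 frames
float $now = `currentTime -q`;
pathAnimation -c $path -stu $now -etu ($now + 50) -follow on Boid;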
Getting the ball: This action represents the final state: the dog has caught the ball. It is triggered by the collision detection in the ‘fieldofview’ procedure: when the distance between the ball and the dog is very small, the procedure returns the value 2. The ball scales down to resemble being deflated and playback stops. The dog has caught the ball.
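And a tiny sketch of that final action, again assuming the ball transform is called ball; the scale values are illustrative:

// squash the ball so it looks deflated, then stop playback
setAttr ball.scaleX 0.5;
setAttr ball.scaleY 0.1;
setAttr ball.scaleZ 0.5;
play -state off;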
GUI
Although the main aim of this project isn’t to create a user-friendly program, I feel it is important that others can run it on their own, in order to inspect it or just have a try at using it. I created a very simple user interface with three buttons that will hopefully make it easy to use:
Sphere Position: This button changes the position of the sphere randomly, to make things more interesting and to allow the ball to be positioned in many different places so that the dog has to search for it. The user can also move the ball within the 3D view in Maya, but using the Sphere Position button is safer, as it keeps the ball within the region of the floor so that the ball won’t be lost.
Go Dog: This button simply calls the main script to run; it activates the dog’s behaviour so that it looks for the ball and catches it.
Reset: Once the dog has caught the ball, the system stops and the curves created for the motion of the dog are left in the scene. This button deletes these curves and also alters the position of the dog, so that the system is set for the next time it runs.
Figure 7. Graphical user interface to control the system.
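A minimal sketch of what such an interface looks like in MEL; the window name and the procedures wired to each button (placeSphere, goDog, resetSystem) are hypothetical stand-ins for the ones in the actual script:

// build a simple window with the three buttons described above
if (`window -exists dogBehaviourWin`)
    deleteUI dogBehaviourWin;

window -title "Dog behavioural system" dogBehaviourWin;
    columnLayout -adjustableColumn true;
        button -label "Sphere Position" -command "placeSphere()";
        button -label "Go Dog"          -command "goDog()";
        button -label "Reset"           -command "resetSystem()";
showWindow dogBehaviourWin;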
Difficulties
The project was not an easy one, and many problems were encountered. The first was getting the field of view to work properly. For a long time I could not get it to work: it would detect the ball as being in front of one plane and behind another when it was clearly in front of both. I eventually solved this by making sure the normals were all facing towards the inside of the field of view, and by reviewing the calculations until all errors had been removed.
Another big problem I encountered had to do with operating systems. Although it is a script and should run in Maya on any machine, as the system got more complex it tended to crash Maya constantly on Linux. It got to the point where working on it was no longer possible under Linux, and all the development had to be done under Windows, where it rarely crashed. I tried to solve this issue, but decided that it would be better to use my time to continue the development of the system.
The last main problem, which I have not been able to solve to this day, is getting the system to run properly when two dogs are in the scene. The dogs are exactly the same and carry out the same expressions, so in theory they should both respond in the same way. However, one of the dogs always appears to be unable to see the ball, or, if it does see it, the system stops but the dog is unable to move towards it. I tried giving each dog a separate script, but this does not seem to solve the problem. It is a problem I will have to go back to and try to solve.
Although many other little problems were encountered, particularly when creating the motion paths, they were quite simply solved by trial and error, and re-writing parts of the script.
Improvements
There are many ways in which this project could be furthered. The first step would be to manage to get the system to work when two dogs are in place.
It would also be an improvement if the script worked under all operating systems.
But I think the best improvements would come from expanding the logic and making use of the conditions I had thought of when I first began the project. This would improve the dog’s behaviour and bring it closer to resembling an actual dog; more conditions could be brought in, such as speed, height of the ball, and texture of the floor. There are endless possibilities for how intelligent the dog could be, apart from software and hardware constraints.
I would also like to make the numbers of dogs and balls larger: to make a big system (though Maya would slow down greatly) with many dogs and many balls. If I did this I would have to begin taking into account things like flock centering and more complex collision detection in order to avoid collisions with other dogs. I might also have to put a far plane on the field of view, in order to cut down on the number of objects one dog sees.
Another improvement that would make the system more visually pleasing would be to animate the dog in a more realistic manner: to have its legs moving, its mouth opening, its tail wagging, and maybe even actually biting the ball. It would be great to see movements accompanying the behaviour that is already defined in the system. This might make it a bit more comical as well, as I think a dog’s behaviour towards a ball is quite funny, and it would be very amusing if that could come across in the system.
Other interesting facts
When I was researching I found that some of the information I was reading seemed very complex, so after a while I decided to just get into the scripting and see how far I got. When I went back to reading the papers I had initially looked at, I realised that I had done very similar things to those I had read about, without even realising it. For example, in Craig Reynolds’ paper he states: ‘The use of shapes instead of dots is visually significant’ (Reynolds, 1987:26). I didn’t realise how true this actually was until I substituted the cylinder I had begun using with an actual model of a dog. The whole project seemed to come together, as it could now be understood by an audience. It was clear that it was a dog chasing a ball and not a cylinder merely following a sphere.
I’d like to briefly explain why I chose to use MEL in Maya rather than OpenGL. These were the main reasons:
I knew I would have dynamics available to me, which would help save time in animating the ball.
The models could easily be made or imported, and their positions could easily be found by using MEL commands.
Animating with expressions or motion paths would be an easier way to get the dog to move, as the expression editor is already set up for that purpose.
I have more experience in using MEL; and although I wasn’t very confident when I began, I felt more confident writing in MEL than in C++ and OpenGL.
Conclusion
I took this project on as a personal innovation, a challenge. I was not very confident about my scripting skills, and was actually quite pessimistic about how far I could get with the implementation of the system. On the whole I consider it a great success. I managed to create a field of view and a motion path for the dog as well as a state engine. I have created a behavioural model, and I consider that a great personal achievement as well as a technical one. I am aware that better, more complex behavioural systems exist, but I never intended to single-handedly create a better system than the existing ones.
Though very simplified, the dog’s behaviour is mimicked: I have managed to script the interaction between the dog and a ball, and the way dogs interact with bouncing balls is one of the things that attracted me to this project. The project has lots of potential to be furthered, and it is definitely something I will be working on improving. Mimicking behaviour is not easy, and I’d like to get it closer to a real dog’s behaviour.
I’d like to finish with another quote from Craig Reynolds:
‘The most interesting motion of a simulated flock comes from interaction with other objects in the environment (…) Similarly, behavioral obstacles might not merely be in the way; they might be objects of fear such as predators.’ (Reynolds, 1987:32)
Or, as in the case of the dogs and the balls, objects in the environment may be objects of fun, such as bouncing balls.
Bibliography
1. Reynolds, C.W., Flocks, Herds, and Schools: A Distributed Behavioral Model, Proceedings of ACM SIGGRAPH (Computer Graphics V21, #4), July 1987.
2. Schaefer, L.A., Mackulak, G.T., Cochran, J.K., Cherilla, J.L., Application of a General Particle System Model to Movement of Pedestrians and Vehicles, Proceedings of the 1998 Winter Simulation Conference, 1998.
3. Moore, M., Wilhelms, J., Collision Detection and Response for Computer Animation, Proceedings of ACM SIGGRAPH (Computer Graphics V22, #4), August 1988.
4. Pocock, L., Rosebush, J., The Computer Animator’s Technical Handbook, Morgan Kaufmann Publishers, San Francisco, 2002.
5. Cisco Systems.
6. Koeppel, D., Massive Attack, Popular Science Magazine, www.popsci.com/popsci/science/article/0,12543,390918-3,00.html
7. Wilkins, M.R., Kazmier, C., MEL Scripting for Maya Animators, Morgan Kaufmann Publishers.
Acknowledgements
Stephen Bell
Appendix
‘One problem with modelling vehicles as points, rather than areas, existing in 2D space can be detected by viewing the output with animation software. Even though the points representing vehicles do not collide, the areas over which the vehicles exist overlap.’ (Schaefer, Mackulak, Cochran, Cherilla, 1998:1155-1156)
Instructions for running the system
First of all, this script will probably crash Maya on Linux, so it is advisable to run it under Windows.
Open the script in Maya’s Script Editor, select all of the code, and press the Enter key on the numeric keypad to execute it.
The GUI should then pop up, and you can control the system using the buttons:
SPHERE POSITION: Changes the position of the ball.
GO DOG: Makes the dog look for the ball and catch it.
RESET: Resets the system so you can try it again.
STOP: This button only appears in the two-dog system; it stops playback in case the system stops working and creates too many curves.