CASA, the yearly conference on Computer Animation and Social Agents, was held in St Malo, France, from 31 May to 3 June 2010. As its name implies, the conference focuses on:
- Animation techniques (motion capture, motion control, physics-based animation...),
- Social agents (emotions, facial animation, crowd simulation...).
During the conference, I only attended Craig W. Reynolds' keynote and the co-located crowd simulation workshop.
Craig Reynolds is to autonomous agent navigation what Louis Pasteur is to vaccination: he virtually invented the field. His seminal 1987 work introduced steering behaviors, simple rules that model the way autonomous agents navigate in groups. When I worked on the bibliography of my master's thesis, I was amazed to see that almost every paper I read cited this 1987 paper or its 1999 little brother; Google Scholar counts more than 3000 citations! Another interesting thing about Reynolds' work is its rarity: I personally know only those two articles (and Google Scholar tends to agree). Needless to say, I was really eager to attend the legend's talk.
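Reynolds' steering behaviors combine a handful of simple rules, each producing a small steering force. As a rough illustration (a sketch in the spirit of the papers, not Reynolds' actual code), the classic 'seek' behavior can be written like this, with `max_speed` and `max_force` as tuning parameters:

```python
import math

def seek(position, velocity, target, max_speed, max_force):
    """'Seek' steering behavior: steer toward a target point.

    The desired velocity points straight at the target at max speed;
    the steering force is the difference between desired and current
    velocity, clamped to max_force.
    """
    dx, dy = target[0] - position[0], target[1] - position[1]
    dist = math.hypot(dx, dy)
    if dist == 0:
        return (0.0, 0.0)  # already at the target, no steering needed
    desired = (dx / dist * max_speed, dy / dist * max_speed)
    sx, sy = desired[0] - velocity[0], desired[1] - velocity[1]
    mag = math.hypot(sx, sy)
    if mag > max_force:
        sx, sy = sx / mag * max_force, sy / mag * max_force
    return (sx, sy)
```

Flocking emerges when several such forces (separation, alignment, cohesion...) are summed and applied to each agent every frame.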
Well, I was quite disappointed.
The talk he gave was titled "Crowds and emergent teamwork". It presented Reynolds' observations on emergent constructions in nature, done mostly by insects (ants, termites, bees...), and his attempts to design autonomous agents able to do the same. The final model gave agents simple rules for adding bricks to the construction, and the same kind of emergent construction was observed. The rest of the keynote was filled with what seemed to be the entire Flickr account of the speaker: insect constructions (no global plan but a functional emergent structure), human constructions (everything designed globally)... Some aspects of the keynote were interesting, but nothing really new, controversial or brilliant... I do think my expectations were too high, though.
There were many presentations during that day, more or less interesting, and a few of them have stayed in my mind since then. The first one was titled "On the interface between steering and animation for autonomous characters" and was presented by Petros Faloutsos.
The authors worked on a problem that everyone working on autonomous characters has to face: how to obtain a smooth animation AND an efficient steering behavior (particularly for collision avoidance).
In practice the two problems are solved separately with, most of the time, a top-down approach:
- the steering part is solved in 2D using sliding discs to represent entities, the decision being velocity changes;
- the animation engine then chooses the best motion to match the computed velocity.
In 'dumb' mode, steering can compute velocities that are unattainable with the available motions, which results in foot sliding and awkward animations. When less dumb, steering has access to a velocity model describing which velocities are valid; this model can be provided directly by the animation engine. But to remain manageable in terms of computational complexity, this velocity model is often simplistic, and the animation problems still occur.
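To make that pipeline concrete, here is a minimal sketch of what such a simplistic velocity model might look like — a hypothetical speed interval plus a maximum turn rate, not any particular engine's actual model:

```python
import math

def clamp_to_velocity_model(desired, current, min_speed, max_speed, max_turn_rad):
    """Project a desired 2D velocity onto a simplistic velocity model.

    The model is a hypothetical stand-in for what an animation engine
    might expose to the steering layer: attainable speeds lie in
    [min_speed, max_speed], and the heading can change by at most
    max_turn_rad per decision step.
    """
    d_speed = math.hypot(*desired)
    c_speed = math.hypot(*current)
    # Clamp the requested speed to the attainable interval.
    speed = max(min_speed, min(max_speed, d_speed))
    # Clamp the heading change relative to the current heading.
    d_heading = math.atan2(desired[1], desired[0])
    c_heading = math.atan2(current[1], current[0]) if c_speed > 0 else d_heading
    delta = (d_heading - c_heading + math.pi) % (2 * math.pi) - math.pi
    delta = max(-max_turn_rad, min(max_turn_rad, delta))
    h = c_heading + delta
    return (speed * math.cos(h), speed * math.sin(h))
```

Even this tiny model shows the problem: the clamped velocity may differ a lot from what steering asked for, and the collision-avoidance guarantees computed in 2D no longer hold.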
The approach presented here is quite different: no more disc motion planning, but a footprint planner. Constrained by biomechanical rules expressing the distance between the two feet, their sizes, as well as the pedestrian's body size, the planner places footprints in order to reach a goal while avoiding collisions. The presented videos were impressive, but the 'real' paper has yet to be published; Faloutsos was talking about SIGGRAPH Asia, but it seems they didn't make it.
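Since the full paper is not out yet, the following is only my guess at what one of those biomechanical constraints might look like — a hypothetical feasibility check with made-up `min_sep` and `max_stride` parameters, not the authors' formulation:

```python
import math

def step_feasible(support_foot, candidate, min_sep, max_stride):
    """Hypothetical biomechanical check for a footstep planner sketch.

    A candidate footprint must keep a minimum separation from the
    support foot (the feet cannot overlap) and stay within the maximum
    stride length the body allows. Positions are 2D (x, y) tuples.
    """
    d = math.hypot(candidate[0] - support_foot[0],
                   candidate[1] - support_foot[1])
    return min_sep <= d <= max_stride
```

A planner would search over candidate footprints satisfying such constraints, which is exactly why the resulting motion matches the animation so well: infeasible steps are never proposed in the first place.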
The next presentation I'd like to focus on is Jan Ondrej's "Collision avoidance from synthetic vision".
Jan is a PhD student working under the supervision of Julien Pettré in the Bunraku research group, from which Golaem spun off. This presentation was in fact a real-conditions rehearsal of Jan's SIGGRAPH presentation of the paper.
The presented work is quite unique: it proposes to solve the collision avoidance problem using a vision algorithm. A low-resolution render of simplified geometry is computed for each entity from its point of view, and the result is fed to a rather simple algorithm that computes a collision-free velocity. And yes, it's quite efficient, and the resulting planning is good (perhaps too good to be human-like, though); anyway, let's see the result.
My major takeaway is the inherent advantage of this kind of vision-based algorithm: occlusion handling comes for free (entities hiding each other, walls of different heights...). What is costly with traditional geometry queries is almost free here thanks to the render phase. I hope I'll be able to experiment with this soon!
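To give a feel for the idea, here is a toy stand-in for a vision-based avoidance rule — emphatically not the paper's actual algorithm — operating on one row of a low-resolution depth render from the agent's viewpoint:

```python
def avoid_from_vision(depth_row, max_turn_rad):
    """Toy vision-based steering rule (NOT Ondrej et al.'s algorithm).

    depth_row[i] is the depth seen in pixel i of a low-resolution
    render, left to right. The agent turns away from the half of the
    visual field whose obstacles are closest; a positive result means
    'turn right' (away from the left half-field).
    """
    n = len(depth_row)
    # Weight each pixel by inverse depth: nearer obstacles push harder.
    left = sum(1.0 / d for d in depth_row[: n // 2])
    right = sum(1.0 / d for d in depth_row[n - n // 2:])
    total = left + right
    if total == 0:
        return 0.0  # nothing visible, keep heading
    # Turn toward the emptier half-field, proportional to the imbalance.
    return max(-max_turn_rad,
               min(max_turn_rad, (left - right) / total * max_turn_rad))
```

Note how occlusion handling falls out naturally: an entity hidden behind a wall simply never appears in the depth buffer, with no extra visibility query needed.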
The rest of the workshop featured other interesting talks about various aspects of crowd simulation, including:
- An overview of Legion software;
- A presentation of the work done by Disney Imagineering R&D department and SpirOps on crowd simulation in theme parks;
- An overview of the work of Ming Lin's GAMMA group at UNC;
- Various presentations about the work of Trinity College Dublin's GV2 group on a populated virtual Dublin.
An interesting workshop, and the chance to meet the cream of crowd simulation researchers!