I'm happy to be at SIGGRAPH this year; throughout the week I'll try to post little reports of what I've done there. After a very long trip (3 flights and a 2h30 delay in Atlanta), Stéphane and I arrived at our hotel in New Orleans just in time to have dinner and go to bed.
The conference started badly for me: my registration form had been lost, but the payment had been made... Two hours and a few calls to France later (thanks Stéphane), I was in, excited like a kid at the zoo! The first session I attended featured the ACM SIGGRAPH awards and a keynote by Randy Thom, a famous sound designer. The awards went to great researchers and artists, none of whom I knew except Rob Cook, the guy who created Pixar's RenderMan; he received the Steven Anson Coons Award. The keynote by Randy Thom was quite a disappointment: this guy, who has participated in tons of movies, chose to talk about "Designing a Movie for Sound" by presenting the work of others: Apocalypse Now and WALL-E. The keynote was essentially excerpts from those two movies, interviews with people who had worked on them (from the WALL-E DVD bonuses) and common sense.
In the afternoon, I started with the screening of the nominees at the animation festival, with lots of great stuff; by the way, the French touch won again :) Next were 4 interesting talks about animation and physics in games and movies. These did not feature technical breakthroughs but industrial "real world" implementations of advanced techniques. I was particularly interested in the Pixar talk about the physics simulation of the balloon canopy in "Up". Last, but not least, the technical papers fast forward. The rules are simple: each team presenting a technical paper at SIGGRAPH this year has to present its work in 50 seconds, some sort of teaser for the talk. Some were really funny, some were really impressive, some were really shy, and most of the Asian teams chose to present a video with a recorded commentary instead of speaking... The two hours felt like two minutes! I want more! That's all for today; tomorrow the exhibition begins!
This Tuesday was the second day of my first SIGGRAPH, not so great though... Let's get the disappointments out of the way first:
- Thinking that they had a booth at the main exhibition, I was too late to get an invitation to the little Lucasfilm party (not so bad, as my feet are pretty ruined right now);
- The exhibition itself is a disappointment to me: the booths are really quiet, there's nothing spectacular, and, last but not least, there aren't many (new) opportunities for Golaem (potential customers or partners).
On the bright side:
- This morning's Will Wright keynote was really, really cool! He talked about lots of things, including human perception, cognitive load, the convergence of entertainment media, social hiveminds and how to use the work of dedicated fans to improve the experience of casual ones. The way he lumps sports, TV and video games, but also painting, music, theater, etc. into the "entertainment business" annoys me though.
- I learned quite interesting stuff at the Intel booth, following a presentation about programming Larrabee (their new many-core processor combining traditional x86 cores with wide vector units) and getting a demo of Parallel Studio (a plugin for Visual C++ allowing the profiling and debugging of parallel code).
I had a weird conversation with some of their devs though:
- "Is it possible to use the open-source version of TBB (Threading Building Blocks, Intel's multithreaded C++ API) in a commercial product?"
- "You'll have to check with your lawyer."
- "Hum, but what would Intel's lawyers say about it?"
- "We don't know, but the official statement is that it's in Intel's best interest that you use it."
- I got two Pixar RenderMan teapots!
Day two is over.
Wednesday was quite a busy day for me. I started at 8:30 with a group of technical paper presentations called "Motion Synthesis and Editing". I'm really not a specialist, but it was interesting; three papers were presented:
- Generalizing Motion Edits with Gaussian Processes shows a method to refine existing motions by editing them automatically based on samples of the desired edit. I didn't quite get everything (too early, I think), but it seems powerful;
- Optimization-Based Interactive Motion Synthesis: the described platform, given a goal and constraints, is able to synthesize the motion needed to reach that goal by optimizing various physical parameters. The article is waiting for me to read it;
- Lie Group Integrators for Animation and Control of Vehicles: the algorithms shown here synthesize the motion of non-holonomic vehicles via optimization, given a goal and some constraints, using Lie groups. I was quite impressed by the results they get, but after the talk Stéphane told me that this kind of thing has been known in robotics for years.
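For the curious, the core tool behind the first paper, Gaussian process regression, can be sketched in a few lines. This is only a generic 1D illustration under my own assumptions (the paper's actual formulation is more involved): given an animator's corrections at a few frames, the GP interpolates a smooth edit over a whole clip.

```python
import numpy as np

def rbf_kernel(a, b, length=5.0):
    # Squared-exponential kernel: nearby frames get similar edit values.
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length) ** 2)

def gp_predict(x_train, y_train, x_test, noise=1e-6):
    # Standard GP regression mean: K(x*, X) @ K(X, X)^-1 @ y
    K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    K_star = rbf_kernel(x_test, x_train)
    return K_star @ np.linalg.solve(K, y_train)

# Hypothetical example: the animator corrected a pose offset at 3 frames;
# the GP fills in a smooth edit curve over the whole 60-frame clip.
frames = np.array([0.0, 30.0, 59.0])
offsets = np.array([0.0, 1.2, 0.4])
all_frames = np.arange(60, dtype=float)
smooth_edit = gp_predict(frames, offsets, all_frames)
```

The predicted curve passes (almost exactly, up to the small noise term) through the example edits and decays smoothly in between, which is the behavior that makes GPs attractive for this kind of motion editing.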
I then attended the third keynote of this SIGGRAPH: the New York Times' Steve Duenes talking about data visualization and aesthetics. As a follower of blogs dedicated to the subject, such as Flowing Data, I really enjoyed the talk; it seems quite odd to find this subject at SIGGRAPH though. As nothing really interested me at the beginning of the afternoon, I took the opportunity to do some light tourism. It was pretty hot, but New Orleans has some cute corners. I spent the end of the afternoon on technical stuff: an NVIDIA talk about their scene graph and how it interacts with the rest of their technology, and a chat with an Autodesk engineer about Kynapse, their solution for virtual-character pathfinding, steering and a little bit of AI. I then joined other French SIGGRAPHers for a drink and dinner, which is why you didn't have the pleasure of reading my prose yesterday.
This Thursday was my 4th day of this SIGGRAPH; it was also the last day of the exhibition. After a bit of a lie-in, my day started with the first session of technical papers dedicated to character animation, really interesting but quite complex for my poor skills in the domain.
- Dextrous Manipulation from a Grasping Pose: this paper presents a way to compute a character's hand motion from the motion of the object it handles. As I understand it, once the hand has grasped the object, the system handles its movements taking into account the friction coefficient of the material (but not its weight);
- Optimal Gait and Form for Animal Locomotion is a method able to automatically generate the morphology and the motion of any animal with any number of legs. The animal is described using cylinders of variable sizes plus some constraints. Their method can recreate believable horses, giraffes, etc. They can even create motions for an imaginary animal with 5 legs (the video is watchable here);
- Performance-Based Control Interface for Character Animation: the work presented here is quite impressive and could be immediately applied to game consoles. The authors' original observation is that, with the current trend toward motion-driven game inputs (Wii, Project Natal...), soon we'll be able to match the character's motions exactly to the player's. But in some cases (climbing a ladder, jumping on a trampoline), the user can't reproduce the movements in his living room. The presented method matches the player's type of motion to a type of character motion while keeping as much one-to-one motion mapping as possible. In the ladder example, the arm and hip motions of the player can be mapped directly onto the character while the rest is taken from offline recorded motions;
- Detail-Preserving Continuum Simulation of Straight Hair: I must admit I did not pay that much attention to this talk...
The following session I attended was called "Effects Omelette": 4 talks about various movie special effects.
- People from Disney Animation talked about how they set up the production pipeline of Bolt in order to produce a single movie in both 2D and stereoscopic 3D.
- "Virtual" tailors from Pixar gave a talk about how they designed the characters' clothes in Up. Pretty interesting to see how they "virtualized" traditional tailoring techniques in order to get realistic-looking clothes.
- One of the technical guys who worked on Up talked about the net simulations he designed (in fact, it was the same piece of work that was used for the balloon canopy).
- Finally, Digital Domain presented how they simulated the Eiffel Tower collapsing in G.I. Joe, introducing their talk with "The movie was released last week; surprisingly, it has some good reviews!"...
In the afternoon, I explored the third floor: modern art, scientific posters and Japanese researchers presenting weird, but interesting, "emerging technologies". I then attended a production session about how they 3D-printed every face in "Coraline", very impressive! But it was quite a paradox to learn that, for the faces, they created 3D models, animated them for lip sync and facial expressions, and then printed every frame in order to recreate those animations in stop motion.
This Thursday night was the official reception night: New Orleans Mardi Gras, brass bands and a buffet.