In 2022, we tested using motion capture – or mocap – and 3D scans to analyse how badminton champ Loh Kean Yew unleashes a smash. In late 2023, when we started work on projects focused on kitefoiler Max Maeder, we wanted to revisit these techniques and try others.

Motion capture tests

We decided to test the DeepMotion software tool again to generate an animated rig – the “skeleton” of a 3D model – from videos, and the 3D scanning platform Polycam for the model and textures.

For the mocap, we planned to test different camera set-ups, including fixed and tracking cameras, shooting from a boat or at the beach, and using fast action or slow motion. We already had some of this footage from earlier shoots with Max, but it was limited, so we planned to capture a variety of shots when he was next due to shoot with us. We also stabilised some of the footage before uploading it to DeepMotion to improve accuracy.

Examples of different camera set-ups we planned and tested.

3D capture tests

In preparation for scanning the 3D model of Max, we tested some alternatives like Kiri Engine, but went with Polycam after the initial tests.

We then booked our studio and asked a local kitefoiler to bring in his gear so we could see how well Polycam could capture a 3D model of someone in full kitefoiling equipment, which was something we wanted to feature in our project.

The idea was to scan the kitefoiler in a T-pose – a default pose for the 3D model of a person – to make it easier to generate the bones rig later.

Test scanning session.

Since the scanning process requires more than 200 images to produce a good-quality model, each session took at least five minutes. Tripods were placed on either side of the athlete so that he could rest his arms on them while holding the pose.

Polycam results from the test session.

Putting them together

While these tests were ongoing, the video team, along with some of our team members from Digital Graphics, met Max in the week of Feb 26, when we did our water and studio shoots with him.

The water shoot, where we filmed Max kitefoiling out on the water.
Max annotating his techniques on paper.
3D-scanning Max and his kitefoiling equipment.

We got the variety of shots we had planned and tested them out using the old model while our team worked on cleaning up the 3D model of Max.

After much testing with DeepMotion, we noticed a couple of issues.

The biggest challenge in detecting the model’s movement was the lack of ground for reference, as the subject was essentially floating over the water. DeepMotion has an option for gliding moves, but its output still required some post-processing because of flaky or even anatomically impossible foot and leg motions.

Another problem was the hands, in particular the fingers, because the handlebar got in the way during certain moves or with certain camera angles. DeepMotion was unable to fully capture the nuances of the techniques used in kitefoiling.

We then decided to use the DeepMotion rig and the videos we shot as reference, and to animate the model manually. This gave us full control over the animation, allowing us to capture the nuances of Max’s techniques.

As for the 3D model, it required some fixes, like closing the mesh on the hands and under the feet. Then, a preliminary rig was created using the Mixamo automatic character rigging tool, and refined in Blender, the 3D computer graphics software that our team uses.

Development of the projects

We published three stories on Max, and this model was used in the second and third stories.

In the first article, we decided to use a split-screen view to show both a wide shot and a close-up, as the sport features a large kite that is as crucial as the nuanced movements used in carrying out the techniques. Rendering two views at once, however, meant that performance and loading times were concerns if we were to use WebGL to render the 3D graphics in the web browser.

3D model

We first had to prepare the 3D model for WebGL. For our story, we decided to focus on the three techniques that Max told us were crucial to kitefoiling – foiling, jibing and tacking. The 3D model and the respective animations for these techniques were already done in Blender; however, we had to figure out how to make them work with WebGL.

To best optimise the model for WebGL, we needed the three animations (foiling, jibing and tacking) stored within the same Max rig. However, the original rig consisted only of Max’s model and its animations, and did not include the separate parts – that is, the board, the kite, the kite handle and the attached strings. This meant that any animation recorded for these items would not be saved into the Max rig, but into the models of the individual objects themselves.

The various objects that had to be rigged to the Max model.

To solve this, we had to go through the following:

Creating bones for parts

Bones were created for these separate parts, and the parts were parented to the bones to create a rig for them.

Bones of the parts.

Animating the parts

The animations for each object had to be transferred to the bone that controlled its corresponding part in the Max rig. Each bone was given a Blender bone constraint called “Copy Transforms”, which makes the bone imitate the animation of whatever object it is tied to. To make this usable for WebGL, the imitation has to be baked into actual keyframes. For this project, the animations were baked onto the bones through visual keying, and once that was done, the animations were deleted from the objects themselves. The result was a Max rig that contained all the animated objects within it.

Checking on actions

Foiling, jibing and tacking were different sets of animations, also known as actions. The animations for each of these parts had to be baked into the correct actions as well. This would mean that the jibe action, for instance, would include all the jibe animations for Max, as well as the individual parts, like the kite, the board and the strings.

Different parts of Max had to be separated into different objects and named in a specific manner so that we could use those names for the highlights.
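For readers curious about how such baked actions behave on the web, here is a minimal Three.js sketch of the playback side. The file name max.gltf and the clip name “jibing” are placeholders for illustration, not the exact names used in our project:

```ts
import * as THREE from 'three';
import { GLTFLoader } from 'three/examples/jsm/loaders/GLTFLoader.js';

const scene = new THREE.Scene();
const clock = new THREE.Clock();
let mixer: THREE.AnimationMixer | undefined;

// "max.gltf" and the clip name below are illustrative placeholders.
new GLTFLoader().load('max.gltf', (gltf) => {
  scene.add(gltf.scene);

  // One mixer drives the entire rig. Because the kite, board and string
  // animations were baked into the same actions, playing "jibing" moves
  // Max and all of his equipment together.
  mixer = new THREE.AnimationMixer(gltf.scene);
  const jibe = THREE.AnimationClip.findByName(gltf.animations, 'jibing');
  mixer.clipAction(jibe).play();
});

// In the render loop, advance the active action every frame.
function tick() {
  mixer?.update(clock.getDelta());
  requestAnimationFrame(tick);
}
tick();
```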

Water

We also wanted to animate the water in a realistic way. A physical simulation was out of the question because it would be too resource-heavy, so we used a shader instead.

We tested many alternatives, some inspired by approaches that achieve realistic water colours and reflections using only a fragment shader (such as Seascape on ShaderToy), and others that use a vertex shader to modify the geometry, with simpler colours.

Fragment shader adapted from ShaderToy.

In the end, mainly because of performance issues once everything was on the page, the solution was a modified version of the Three.js Water2 example – chosen for its optical features – with a vertex shader added to generate the waves.

An initial version of the water shader used to generate waves and foam.
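As a simplified illustration of the vertex-displacement idea – not our actual modified Water2 material – here is a sketch of a plane whose vertices are raised and lowered by sine waves in a vertex shader:

```ts
import * as THREE from 'three';

// A simplified stand-in for the wave displacement. Wave amplitudes,
// frequencies and colours here are illustrative values.
const waterMaterial = new THREE.ShaderMaterial({
  uniforms: { uTime: { value: 0 } },
  vertexShader: /* glsl */ `
    uniform float uTime;
    varying float vHeight;
    void main() {
      vec3 p = position;
      // Two sine waves of different frequency and direction give a
      // cheap approximation of choppy water.
      p.z += 0.25 * sin(p.x * 0.8 + uTime)
           + 0.15 * sin(p.y * 1.7 + uTime * 1.3);
      vHeight = p.z;
      gl_Position = projectionMatrix * modelViewMatrix * vec4(p, 1.0);
    }
  `,
  fragmentShader: /* glsl */ `
    varying float vHeight;
    void main() {
      // Blend from a deep colour towards a foam-like colour on wave crests.
      vec3 deep = vec3(0.0, 0.2, 0.35);
      vec3 foam = vec3(0.85, 0.95, 1.0);
      gl_FragColor = vec4(mix(deep, foam, smoothstep(0.1, 0.4, vHeight)), 1.0);
    }
  `,
});

const water = new THREE.Mesh(
  new THREE.PlaneGeometry(50, 50, 128, 128),
  waterMaterial
);
water.rotation.x = -Math.PI / 2; // lay the plane flat
// In the render loop: waterMaterial.uniforms.uTime.value = clock.getElapsedTime();
```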

Views

For the closer view, we needed to create a custom orbital camera that follows the model’s animation. This was accomplished by updating the camera’s position and rotation as the model moves.

The camera moves based on the relative position of the model.
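In essence, the update runs once per frame: read the model’s current world position, keep the camera at an offset from it, and aim the camera at the model. A minimal sketch, with an illustrative offset:

```ts
import * as THREE from 'three';

// Hypothetical follow-camera update, called once per frame.
const offset = new THREE.Vector3(2, 1.5, 4); // illustrative value
const target = new THREE.Vector3();

function updateCamera(camera: THREE.PerspectiveCamera, model: THREE.Object3D) {
  // Read the model's current world position (it moves with the animation).
  model.getWorldPosition(target);
  // Keep the camera at a fixed offset from the model...
  camera.position.copy(target).add(offset);
  // ...and rotate it to look at the model, so it follows as the model turns.
  camera.lookAt(target);
}
```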

To create the split screen, we used the multiple-views technique, which lets you define the areas of the renderer you want to divide and the objects that will show in each of them. On mobile devices, we also animated the size and position of these views so that readers could better see the animations. A minimal version of such a two-view render loop is sketched below.
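This sketch uses the renderer’s scissor and viewport calls, in the spirit of the Three.js multiple-views example; the half-and-half layout is illustrative:

```ts
import * as THREE from 'three';

// Render the same scene twice per frame, each time clipped to one half
// of the canvas via a scissor region.
function renderSplit(
  renderer: THREE.WebGLRenderer,
  scene: THREE.Scene,
  wideCam: THREE.Camera,
  closeCam: THREE.Camera
) {
  const size = renderer.getSize(new THREE.Vector2());
  const w = size.x;
  const h = size.y;
  renderer.setScissorTest(true);

  // Left half: the wide shot, showing the kite and the full scene.
  renderer.setViewport(0, 0, w / 2, h);
  renderer.setScissor(0, 0, w / 2, h);
  renderer.render(scene, wideCam);

  // Right half: the close-up on the body movements.
  renderer.setViewport(w / 2, 0, w / 2, h);
  renderer.setScissor(w / 2, 0, w / 2, h);
  renderer.render(scene, closeCam);

  renderer.setScissorTest(false);
}
```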

Transitions between scenes

Another optimisation used in this story was for the fade transition between coloured and greyscale models in the equipment section.

Since simply replacing the textures would not achieve the desired effect, we used two separate models. But instead of the binary glTF file format with embedded assets, we used the JSON version of the format with linked assets. The key was that both glTF files shared the file containing the vertex and animation data, so even a browser with a clean cache would fetch it from the web only once.
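On the loading side, both files can be fetched as usual – the sharing happens at the asset level, since each .gltf points to the same external buffer. A sketch with placeholder file names:

```ts
import { GLTFLoader } from 'three/examples/jsm/loaders/GLTFLoader.js';

// File names are illustrative. Each .gltf is a small JSON document whose
// "buffers" entry points at the same external file (e.g. a shared .bin),
// so the heavy vertex and animation data is downloaded only once.
async function loadModels() {
  const loader = new GLTFLoader();
  const [coloured, greyscale] = await Promise.all([
    loader.loadAsync('max_coloured.gltf'),
    loader.loadAsync('max_greyscale.gltf'),
  ]);
  return { coloured, greyscale };
}
```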

The transition itself was done using two stacked canvases, and the one on top fades when the user scrolls.

Instead of replacing materials, two versions of the model were used.
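A simplified sketch of that scroll-driven fade, with hypothetical element IDs:

```ts
// Two canvases are stacked with CSS; the top (coloured) one fades out
// as the reader scrolls through the equipment section.
const topCanvas = document.querySelector<HTMLCanvasElement>('#canvas-coloured')!;
const section = document.querySelector<HTMLElement>('#equipment-section')!;

window.addEventListener('scroll', () => {
  const rect = section.getBoundingClientRect();
  // Progress runs from 0 (section top enters view) to 1 (section scrolled past).
  const scrollable = rect.height - window.innerHeight;
  const progress = Math.min(Math.max(-rect.top / scrollable, 0), 1);
  topCanvas.style.opacity = String(1 - progress);
});
```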

Publishing the project

We started work on this project in November 2023, and it was finally published in July 2024, ahead of the Olympics. We spent those eight months testing, animating and developing to ensure that we were able to feature Max’s sport in the best way possible. Check out the piece here.

We also reused many components and animations for the follow-up project that we did for the analysis piece on Max after he won the Olympic bronze medal for kitefoiling – read that piece here.