The Straits Times spatial analysis team, which is part of the digital graphics team, specialises in using innovative technology such as 3D reconstruction in storytelling. The team comprises 3D designers, developers, reporters and professionals with backgrounds in cartography and architecture.

In June 2023, the team was tasked with visually reconstructing the riot that occurred in Little India 10 years ago.

The 3D reconstruction would be a painstaking process that would take almost half a year to complete. With a deadline to meet, the team had to find the most efficient methods to tell the story, while ensuring accuracy.

The tech used

Blender was the software of choice for building the scenes, as it was free and easily available to everyone on the team. Being open-source software, it also allowed team members to turn to online forums to solve some of the problems they encountered.

Mixamo, a San Francisco-based 3D computer graphics company, develops and sells web-based services for 3D character animation, which can be accessed using an Adobe account. We used Mixamo to rig and apply animations to our models, then used Blender to edit any animations that required tweaking.

Rigging is the process of giving a character movable “bones”, so that animators are able to move and animate objects and models. Mixamo stores animations in these rigs, which means that when a rig is applied to a model, the model plays the corresponding animation.
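For readers unfamiliar with Blender’s side of this process, the sketch below shows what a basic rig hook-up looks like in Blender’s Python API. The object names are placeholders, and the project’s actual rigging was done through Mixamo and Blender’s interface rather than scripts.

```python
import bpy

# A minimal rigging sketch using Blender's Python API (bpy).
# Object names ("Character", "Armature") are hypothetical placeholders.
character = bpy.data.objects["Character"]   # the mesh to be rigged
armature = bpy.data.objects["Armature"]     # the "bones" the animator moves

# Select the mesh first, then the armature (the active object becomes the parent).
bpy.ops.object.select_all(action='DESELECT')
character.select_set(True)
armature.select_set(True)
bpy.context.view_layer.objects.active = armature

# Parent with automatic weights: Blender estimates how much each bone
# influences each vertex, so moving a bone deforms the mesh.
bpy.ops.object.parent_set(type='ARMATURE_AUTO')
```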

PureRef, a reference-image tool, was used to align videos to the 3D scenes.

Programs that were used in the re-enactment.

The 3D re-enactment was done in three stages.

Stage 1: Ideation

We discussed different styles, including clean 3D reconstruction renders, and ST opted for a graphic novel look for its visual appeal. The style is great for showing linear events clearly while capturing the scene and atmosphere of the night. Colour palettes were carefully chosen to best convey the mood of the story.

A shared moodboard was created to give the team an overview of the storyline and pick out elements of storytelling that stood out. In this case, the emphasis was on the watercolour textures and bold outlines that are common in graphic novels.

We also did some style testing to see how the shaders and textures would look in the final product. Shaders can take any pixel on the screen and alter its position, colour, saturation and brightness.

Early tests of the graphic line style.

We needed to take note of how the shaders would look when objects were animated and moving. To do so, we adjusted the material nodes in Blender’s Shading tab accordingly.

A screenshot of the shading tab in Blender of a bus.
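For readers curious what such a node setup can look like under the hood, below is a rough, illustrative sketch (not the team’s actual material) of a simple toon-style shader built with Blender’s Python API, using a Shader to RGB node and a colour ramp to flatten the lighting into bands.

```python
import bpy

# A rough toon-shading sketch in Blender's Python API; this is an
# illustrative material, not the team's actual shader setup.
mat = bpy.data.materials.new(name="ToonSketch")
mat.use_nodes = True
nodes = mat.node_tree.nodes
links = mat.node_tree.links
nodes.clear()

diffuse = nodes.new("ShaderNodeBsdfDiffuse")   # basic shading
to_rgb = nodes.new("ShaderNodeShaderToRGB")    # converts lighting to a colour (EEVEE only)
ramp = nodes.new("ShaderNodeValToRGB")         # colour ramp to flatten shading into bands
output = nodes.new("ShaderNodeOutputMaterial")

# Hard steps in the ramp give the flat, graphic-novel look.
ramp.color_ramp.interpolation = 'CONSTANT'
ramp.color_ramp.elements[1].position = 0.3

links.new(diffuse.outputs["BSDF"], to_rgb.inputs["Shader"])
links.new(to_rgb.outputs["Color"], ramp.inputs["Fac"])
links.new(ramp.outputs["Color"], output.inputs["Surface"])
```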

Stage 2: Building the 3D set

To ensure that the scene was accurately reconstructed, the team relied on reference videos, forensic documents and detailed maps of the area gathered by the news desk team. Digital graphics designer Lim Zu Ning did most of the modelling, texturing and rigging of the different elements.

A rough map of Little India.

Interactive graphics editor Rodolfo Pazos provided the team with a 3D layout of the scene. He had created this map 10 years ago, right after the riot happened. The file, while basic, had most of the vital elements of the scene mapped out.

An overall 3D view of the scene.

The scene’s layout included the positions of the bus and the victim mapped to Tekka Lane and Race Course Road, where the riot took place. The movements of the bus in relation to the main events were also tracked on the layout.

The initial map layout.

The 3D scene had a lot of issues, mostly with optimisation. For example, the shophouses had roof tiles made up of millions of polygons, which caused Blender to lag. The lag was resolved once the tiles were deleted.

Roof topology.
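One way to spot such problem objects is to script a quick polygon-count check in Blender’s Python console. The sketch below is illustrative only (the team’s cleanup was done by hand, and the 50,000-polygon threshold is just an example).

```python
import bpy

# Sketch: list the heaviest meshes in the scene so they can be
# simplified or deleted. The 50,000-polygon threshold is arbitrary.
THRESHOLD = 50_000

heavy = [
    (obj.name, len(obj.data.polygons))
    for obj in bpy.data.objects
    if obj.type == 'MESH' and len(obj.data.polygons) > THRESHOLD
]

# Print the worst offenders first.
for name, count in sorted(heavy, key=lambda item: item[1], reverse=True):
    print(f"{name}: {count:,} polygons")
```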

Various edits reduced the huge file size to a more manageable one and allowed the team to start modelling the vehicles and characters. At this point, the team had to figure out vital details, such as the dimensions of the roads and buses.

Bus meshes.

As the exact specifications of the bus were not available, the team used numerous images and videos of the incident in which the bus was visible to model it accurately. The cars involved in the riot scenes were constructed using generic vehicles as a reference.

Next, the team modelled the characters for the scene. The models were inspired by low-poly modelling, which uses the lowest possible number of polygons to create a 3D model. This enabled ST to create characters that resembled humans while avoiding life-like characterisation, which might be unsettling for readers.

However, the characters had to be identifiable so that readers could tell them apart. Mr Lee Kim Huat, the bus driver who is the main character, was given an older face and glasses. For Mr Sakthivel Kumaravelu, the worker who died in the riot, the team used a “default civilian” model in the software. To model the police, the production team used the Singapore police and riot police attire as references.

After the characters were modelled, props such as lamp posts, street signs, trash and police gear came next. These concluded the first cut of the modelling.

Character models that were used.

A lot of research was done to nail down details and ensure the reconstruction was accurate. Digital graphics journalist Charlene Chua, who was in charge of this project, reached out to forensic data scientist Michael Tay for more information on the incident. He provided details such as the specific location of where the incident occurred, and helped the team shape the project better.

The team used Google Maps’ measurement feature to figure out the exact width and length of roads and pavements in the reconstruction. Extra props also had to be modelled, including a plastic bag, an umbrella and a police shield.

Stage 3: Animation

As the team was short on time, it turned to motion capture to shorten the animation process. Manually animating the models would have taken a long time, as it requires keyframing each motion while keeping in mind the mechanics of how a human body naturally moves.

Mixamo cut the animation time significantly, as it allowed us to use base mocap animations that were readily available, so the team did not need to animate the models manually.

Mixamo interface.
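The Mixamo side of the workflow happens in the browser; what comes back into Blender is an FBX file containing the rig and its baked animation. A minimal sketch of importing one with Blender’s Python API is below (the file path is a placeholder).

```python
import bpy

# Sketch: import an FBX downloaded from Mixamo. The path is a placeholder.
bpy.ops.import_scene.fbx(filepath="/path/to/mixamo_walk.fbx")

# The imported armature carries the baked Mixamo animation as an action,
# which can then be tweaked on Blender's timeline or in the NLA editor.
for action in bpy.data.actions:
    print(action.name, "frames:", action.frame_range)
```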

There were three digital graphics designers – Lim Zu Ning, Charlotte Tan and Nikita Pereira – working on this project, each with a separate working file, so it was important that all changes were reflected in everyone’s files.

This was achieved by setting up a project folder and a system that linked the working files together.

Our system for a more organised way of working together.

In our case, each shot in the sequence had its own Blender file. We created a shot template file that had our background and rig models linked in, basic lighting added, and any extra objects already imported. Having a shot template ensured that no elements would be missing from each file and made it easier for us to set up each shot.
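One way to wire up such a template is Blender’s library linking, which lets a .blend file reference collections stored in a shared file, so edits to the shared assets show up in every shot. The sketch below is illustrative, with hypothetical file and collection names.

```python
import bpy

# Sketch: link a collection from a shared "assets" .blend file into the
# current shot file. File and collection names are hypothetical.
LIBRARY = "//assets/little_india_set.blend"   # '//' means relative to this file
COLLECTION = "Background_Set"

with bpy.data.libraries.load(LIBRARY, link=True) as (data_from, data_to):
    data_to.collections = [name for name in data_from.collections if name == COLLECTION]

# Instance the linked collection into the current scene via an empty object.
for coll in data_to.collections:
    instance = bpy.data.objects.new(coll.name, None)
    instance.instance_type = 'COLLECTION'
    instance.instance_collection = coll
    bpy.context.scene.collection.objects.link(instance)
```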

The textured scene in the Blender viewport.

PureRef helped the team match the camera angle of the original videos of the Little India riot to the 3D re-enactment. Overlaying our reference on top of the Blender file helped us align the camera to the video footage as accurately as possible.
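Blender can also display a reference frame behind the camera itself, which helps with this kind of matching. A small illustrative sketch is below (the image path is a placeholder).

```python
import bpy

# Sketch: show a reference frame behind the active camera so the 3D scene
# can be lined up against real footage. The image path is a placeholder.
cam = bpy.context.scene.camera.data
cam.show_background_images = True

bg = cam.background_images.new()
bg.image = bpy.data.images.load("/path/to/footage_frame.png")
bg.alpha = 0.5               # semi-transparent overlay
bg.display_depth = 'FRONT'   # draw the reference over the scene in camera view
```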

One of the final steps was the post-processing of videos that were to be embedded in the story, and this task was handled by Charlotte.

Publishing

The team took nearly seven months to work on the project from start to finish. To date, the re-enactment has amassed a large number of page views and has been well received by ST readers.

Read it here.