How We Combined 3D, Mobile Web and Virtual Reality to Explore Car Interiors Online
DISCLAIMER: MINI (BMW AG) was not involved and did not endorse ELEKS Ltd. in these experiments.
At ELEKS Digital, we constantly master new tools and techniques to do things smarter, faster and more effectively. And we experiment a lot. Exploring virtual reality technology inspired us to combine 3D, mobile web and VR into a realistic digital experience. Our goal was to make that experience an integral part of a customer’s marketing ecosystem.
The idea sparked quickly. The experience we committed to creating had to give the user a tangible way to explore car interiors. Automotive became our industry of choice, as many car manufacturers constantly hunt for ingenious ways to give potential customers information about their products. Not to mention that our whole team has a passion for cars.
The result? Go to digital.eleks.com/mvr from your mobile for the full experience. Or watch the video:
Let’s talk about how we did it. To learn why we did it, click here.
With a time limit of four weeks, we narrowed the project’s functionality down to these basic features:
- 360° interior view using a gyroscope
- Switching the camera position to different seats
- Changing the seat and dashboard trim textures
- Interactive interior elements (e.g. radio, horn)
- Transparent and natural UI without buttons and controls
Our 3D model had to combine all the above, be realistic and run smoothly on the majority of mobile platforms and devices.
Many would argue that 3D plus mobile web would kill performance entirely. Here’s how we proved the opposite.
Obviously, true-to-life renders made in professional 3D computer graphics software are extremely heavy. For our project we needed to put a 3D model “into” a browser, so we had to follow the WebGL standard.
To concentrate on developing application features instead of coding shaders for every type of material and calculating endless lights and shadows, we used three.js as our base library. The choice was justified for a number of reasons: the library is open source, widely used, increasingly popular and reasonably well documented. Not only did it bring us to a higher level of abstraction, it also let us load external models in various formats (.obj, .js, binary) and provided a variety of ready-made controls.
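To give a sense of what that abstraction buys you, here is a minimal sketch of a three.js bootstrap with an externally loaded model. The model path is hypothetical, and we’re assuming the OBJLoader addon is included alongside the library; this illustrates the pattern, not our actual asset pipeline:

```javascript
// Minimal three.js scene: renderer, camera, and an external .obj model.
// Assumes three.js and its OBJLoader addon are included on the page;
// 'models/interior.obj' is a hypothetical path.
var scene = new THREE.Scene();
var camera = new THREE.PerspectiveCamera(
  75, window.innerWidth / window.innerHeight, 0.1, 1000);
var renderer = new THREE.WebGLRenderer({ antialias: true });
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);

new THREE.OBJLoader().load('models/interior.obj', function (object) {
  scene.add(object); // the loaded car interior
});

function animate() {
  requestAnimationFrame(animate); // re-renders roughly 60 times per second
  renderer.render(scene, camera);
}
animate();
```

Everything below — materials, picking, the VR split — hangs off a loop like this one.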
We started with a highly polygonal, very detailed model created in 3ds Max. Initial tests showed that even on desktop the model needed quite powerful hardware to run smoothly. Normally, such a model renders 60 times per second, and every frame the calculations are repeated for every vertex. For a high-poly model, that means billions of extra operations, causing extreme resource consumption. As for mobile devices, most of them refused to load the model at all. This unsuccessful attempt showed us that even the most powerful modern mobile gadgets could not cope with highly polygonal models.
The Smaller, the Better
So we settled on making a low-poly model that would look exactly like the high-poly one. We used the Blender and Maya 3D editors to create a new topology for it. We decided to divide the model into several parts based on their material characteristics, whether their textures needed to be changeable, and so on. To retain the model’s real-life accuracy and smoothness, we used normal maps (images that imitate the lighting of bumps and dents, adding detail without extra polygons) and specular maps (images from which the renderer determines an object’s shininess and highlights). We baked the normal maps in Blender and Maya, having first created UV maps for all the details. The completed UV maps also let us use diffuse maps, so we could pick any interior material and quickly change it in any graphics editor.
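In three.js terms, those baked maps plug straight into a Phong material. A hedged sketch with hypothetical texture paths (the filenames and the `setTrim` helper are our own illustration):

```javascript
// Assumes three.js is loaded; texture paths are hypothetical.
var loader = new THREE.TextureLoader();
var seatMaterial = new THREE.MeshPhongMaterial({
  map: loader.load('textures/seat_diffuse.jpg'),      // baked color + light
  normalMap: loader.load('textures/seat_normal.jpg'), // fakes bumps and dents
  specularMap: loader.load('textures/seat_spec.jpg')  // per-pixel shininess
});

// Changing the trim is just swapping the diffuse map on the same material:
function setTrim(url) {
  seatMaterial.map = loader.load(url);
  seatMaterial.needsUpdate = true;
}
```

Because light and shadow live in the diffuse map, swapping it changes the whole look of a seat or dashboard panel with one call.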
Eventually, all the maps were ready and loaded, but we hit a deadlock again. Big, detailed maps looked great, but many devices couldn’t cope with them. To avoid large textures, we chose the smallest image size that preserved acceptable quality. After that, our model loaded even on the simplest WebGL-supporting mobile devices.
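One practical constraint worth noting when shrinking textures: WebGL 1.0 can only generate mipmaps for power-of-two textures, so map dimensions should land on a power of two. A tiny plain-JavaScript helper (our own illustration, not part of three.js) for rounding a requested size down:

```javascript
// Round a texture dimension down to the nearest power of two,
// since WebGL 1.0 only mipmaps power-of-two textures.
function nearestPowerOfTwo(size) {
  var p = 1;
  while (p * 2 <= size) p *= 2;
  return p;
}

nearestPowerOfTwo(1000); // → 512
nearestPowerOfTwo(2048); // → 2048
```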
Seeking realistic rendering, we lit the model in a 3D editor using an HDRI environment and baked the shadow map. Well, actually, we faked it a little: the computed shadows and lights were baked into the diffuse maps, creating the illusion of natural light without additional texture maps. Real shadow maps are not fully supported in the latest stable version of three.js, but are expected in the next one. Baked lights and shadows also let us use a minimal number of light sources in WebGL, reducing the overall rendering load. We also used environment maps and skyboxes to create reflections on glossy plastic and metallic materials. As a result, we got a surprisingly realistic model at a low resource cost, with the added ability to change textures and objects with a single tap.
Interactive objects were another feature that helped create a sense of presence and an immersive experience. The challenge in 3D space is that a tap happens in 2D screen coordinates, which do not directly identify an object in the scene. Fortunately, three.js offers raycasting, which determines the objects lying beneath a given point on the screen.
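A minimal sketch of that approach, assuming three.js is loaded globally. `toNDC` and `pick` are our own helper names; the idea is to map the tap to normalized device coordinates and let `THREE.Raycaster` do the rest:

```javascript
// Convert a tap position (pixels) to normalized device coordinates (-1…+1).
function toNDC(x, y, width, height) {
  return {
    x: (x / width) * 2 - 1,
    y: -(y / height) * 2 + 1 // screen y grows downwards, NDC y grows upwards
  };
}

// Cast a ray from the camera through the tapped point and return the
// nearest intersected object (e.g. the radio or the horn), or null.
function pick(event, camera, scene) {
  var ndc = toNDC(event.clientX, event.clientY,
                  window.innerWidth, window.innerHeight);
  var raycaster = new THREE.Raycaster();
  raycaster.setFromCamera(new THREE.Vector2(ndc.x, ndc.y), camera);
  var hits = raycaster.intersectObjects(scene.children, true); // nearest first
  return hits.length > 0 ? hits[0].object : null;
}
```

`intersectObjects` returns hits sorted by distance, so the first element is always the object the user actually sees under their finger.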
We were absolutely satisfied with the result. The virtual reality approach made the interior preview more natural, almost like bringing the user into the car. Converting 3D to VR was straightforward: we split the view into two cameras, one for each eye, and used accelerometer data to control them.
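The split itself can be sketched with the StereoEffect and DeviceOrientationControls addons shipped in the three.js examples. This assumes a `scene`, `camera` and `renderer` created earlier, and is an illustrative sketch rather than our production code:

```javascript
// Render the scene twice per frame, once per eye, and drive the camera
// from the device's orientation sensors. Assumes the StereoEffect and
// DeviceOrientationControls addons from the three.js examples are included,
// and that scene, camera and renderer already exist.
var effect = new THREE.StereoEffect(renderer);
effect.setSize(window.innerWidth, window.innerHeight);

var controls = new THREE.DeviceOrientationControls(camera);

function animateVR() {
  requestAnimationFrame(animateVR);
  controls.update();            // orient the camera from sensor data
  effect.render(scene, camera); // draws the left and right eye views
}
animateVR();
```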
Insights and Findings
With this experiment, we proved that mobile web, 3D and VR can work together smoothly to create an immersive user experience. This was the technological feasibility test every idea has to pass, no matter how great or simple it might seem at first. A small working demo lets you try unconventional methods to break through constraints and limitations. Although we didn’t aim to implement many features, the overall idea was quickly and successfully tested, showing great potential for future commercial implementation.