
3D modeling tools are crucial to many industries, from Hollywood computer-generated imagery to product design. These tools often use text or image prompts to dictate different aspects of a model's visual appearance, such as color and form. But as natural as those prompts are as a first point of contact, these systems remain limited because they ignore something central to the human experience: touch.

Much of what makes physical objects unique lies in their tactile properties, such as roughness, bumpiness, or the feel of materials like wood or stone. Existing modeling methods often require advanced computer-aided design expertise and rarely support tactile feedback, which is critical to how we perceive and interact with the physical world.

With this in mind, researchers at the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) have created a new system for stylizing 3D models using image prompts, effectively replicating both visual appearance and tactile properties.

The CSAIL team's "TactStyle" tool allows creators to stylize 3D models based on images while also incorporating the expected tactile properties of the textures. TactStyle separates visual and geometric stylization, allowing both the visual and the tactile properties to be replicated from a single image input.

Video: The "TactStyle" tool allows creators to stylize 3D models based on images while also incorporating the expected tactile properties of the textures.

TactStyle could have far-reaching applications, extending from home decor and personal accessories to tactile learning tools, said Faraz Faruqi, lead author of a new paper on the project. TactStyle lets users download a base design, such as a headphone stand from Thingiverse, and customize it with the desired styles and textures. In education, learners can explore diverse textures from around the world without leaving the classroom, and in product design, rapid prototyping becomes easier as designers quickly print multiple iterations to refine tactile qualities.

"You can imagine using this system on common objects, such as phone stands and earbud cases, to enable more complex textures and enhance tactile feedback in a variety of ways," said Faruqi. "You can also create tactile educational tools that illustrate a range of concepts in fields such as biology, geometry, and topography."

Traditional approaches to replicating textures involve specialized tactile sensors, such as GelSight, developed at MIT, which physically touch an object to capture its surface microgeometry as a "height field." But this requires having the physical object, or a recording of its surface, on hand. TactStyle instead lets users replicate that surface microgeometry by leveraging generative AI to produce a height field directly from an image of the texture.
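For readers unfamiliar with the term, a height field is simply a 2D grid of per-point surface elevations. The short Python sketch below, using made-up numbers rather than anything from the paper, illustrates how such a grid encodes tactile character like roughness and overall relief.

```python
import numpy as np

# Toy height field: a 2D grid of surface elevations (in millimeters).
# A sinusoidal bump pattern stands in for captured or generated micro-geometry.
x, y = np.meshgrid(np.linspace(0, 4 * np.pi, 256), np.linspace(0, 4 * np.pi, 256))
height_field = 0.2 * np.sin(x) * np.sin(y)  # peaks and valleys of +/- 0.2 mm

# Simple descriptors of the tactile character the grid encodes.
rms_roughness = height_field.std()                 # spread of elevations
relief = height_field.max() - height_field.min()   # peak-to-valley depth
print(f"RMS roughness ~{rms_roughness:.3f} mm, relief ~{relief:.3f} mm")
```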

This matters for platforms like the 3D printing repository Thingiverse, where it is difficult to take individual designs and customize them. Indeed, if a user lacks sufficient technical background, modifying a design by hand carries the risk of actually "breaking" it so that it can no longer be printed. All of these factors prompted Faruqi to wonder whether he could build a tool that lets users customize downloaded models at a high level while also preserving their functionality.

In experiments, TactStyle showed significant improvements over traditional stylization methods by generating accurate correlations between a texture's visual image and its height field, which lets it replicate tactile properties directly from an image. A psychophysical experiment showed that users perceive TactStyle's generated textures as matching both the expected tactile properties of the visual input and the feel of the original material, producing a unified tactile and visual experience.

TactStyle leverages a preexisting method, called "Style2Fab," to modify the model's color channels to match the input image's visual style. The user first provides an image of the desired texture, and a fine-tuned variational autoencoder then translates that input image into a corresponding height field. The height field is applied to modify the model's geometry, creating the tactile properties.
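To illustrate just that last step, here is a minimal sketch, in Python with NumPy, of how a predicted height field could be "embossed" into a mesh by displacing vertices along their normals. The function name and the nearest-texel sampling are assumptions for illustration, not the authors' released code.

```python
import numpy as np

def apply_height_field(vertices, normals, uvs, height_field, scale=1.0):
    """Displace each mesh vertex along its normal by the elevation sampled
    from a 2D height field at the vertex's UV coordinate (nearest texel)."""
    h, w = height_field.shape
    rows = np.clip((uvs[:, 1] * (h - 1)).round().astype(int), 0, h - 1)
    cols = np.clip((uvs[:, 0] * (w - 1)).round().astype(int), 0, w - 1)
    offsets = height_field[rows, cols] * scale       # one elevation per vertex
    return vertices + normals * offsets[:, None]     # push outward along normals
```

In the actual system the height field comes from the learned model rather than a hand-built array, but some displacement step of this kind is what turns a 2D elevation map into printable 3D relief.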

The color and geometry stylization modules work in tandem, stylizing both the visual and tactile properties of the 3D model from a single image input. Faruqi said the core innovation lies in the geometry stylization module, which uses a diffusion model to generate height fields from texture images, something previous stylization frameworks could not accurately replicate.

Looking ahead, Faruqi said the team aims to extend TactStyle to generate novel 3D models with embedded textures using generative AI. This requires exploring exactly the kind of pipeline needed to replicate both the form and the function of the 3D models being fabricated. They also plan to investigate "visuo-haptic mismatches" to create novel experiences with materials that defy conventional expectations, like something that appears to be made of marble but feels like it's made of wood.

Faruqi and Mueller co-authored the paper with PhD students Maxine Perroni-Scharf and Yunyi Zhu, visiting undergraduate student Jaskaran Singh Walia, visiting master's student Shuyue Feng, and assistant professor Donald Degraen of the Human Interface Technology (HIT) Lab NZ in New Zealand.
