
For pregnant women, ultrasound is an informative (and sometimes necessary) procedure. It typically produces two-dimensional, black-and-white scans of the fetus that can reveal key insights, including biological sex, approximate size, and abnormalities such as heart problems or cleft lip. If a doctor wants a closer look, they may use magnetic resonance imaging (MRI), which uses magnetic fields to capture images that can be combined into a 3D view of the fetus.

MRI isn't the whole story, though. For doctors, 3D scans are hard to interpret well enough to diagnose problems, because our visual system isn't used to parsing volumetric 3D scans (in other words, images that show a subject's internal structure as well as its outer appearance). Enter machine learning, which could help model fetal development more clearly and accurately from these data, although until now no such algorithm has been able to capture the fetus's seemingly random movements and widely varying body shapes.

That is, until now. A new approach from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), Boston Children’s Hospital (BCH), and Harvard Medical School, called “Fetal SMPL,” gives clinicians a clearer window into fetal health. It adapts “SMPL” (the Skinned Multi-Person Linear Model), a 3D model developed in computer graphics to capture adult body shape and pose, so that it can accurately represent the shape and posture of a fetal body. Fetal SMPL was then trained on 20,000 MRI volumes to predict the position and size of a fetus, producing sculpture-like 3D representations. Inside each model is a skeleton with 23 articulated joints, called a “kinematic tree,” which the system uses to pose and move like the fetuses it saw during training.
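To make the idea of a parametric body model concrete, here is a minimal, hypothetical sketch in Python. It is not the researchers’ implementation: the real Fetal SMPL learns its shape space and surface mesh from thousands of MRI frames, while this toy version only shows the two ingredients the article describes, a kinematic tree of articulated joints and shape coefficients that deform a template skeleton.

```python
import numpy as np

def axis_angle_to_matrix(rotvec):
    """Rodrigues' formula: turn an axis-angle vector into a 3x3 rotation matrix."""
    theta = np.linalg.norm(rotvec)
    if theta < 1e-8:
        return np.eye(3)
    k = rotvec / theta
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

class ToyParametricBody:
    """A toy SMPL-like model: a rest-pose skeleton, a linear shape basis,
    and per-joint rotations applied along a kinematic tree."""

    def __init__(self, template_joints, parents, shape_dirs):
        self.template = template_joints   # (J, 3) rest-pose joint positions
        self.parents = parents            # parent index of each joint (-1 for the root)
        self.shape_dirs = shape_dirs      # (J, 3, B) linear shape blend directions

    def posed_joints(self, betas, pose):
        """betas: (B,) shape coefficients; pose: (J, 3) axis-angle rotation per joint."""
        rest = self.template + self.shape_dirs @ betas   # shape-adjusted skeleton
        n = len(self.parents)
        world_rot = [np.eye(3)] * n
        world_pos = np.zeros_like(rest)
        for j, p in enumerate(self.parents):              # forward kinematics down the tree
            local = axis_angle_to_matrix(pose[j])
            if p < 0:
                world_rot[j], world_pos[j] = local, rest[j]
            else:
                world_rot[j] = world_rot[p] @ local
                world_pos[j] = world_pos[p] + world_rot[p] @ (rest[j] - rest[p])
        return world_pos

# Tiny usage example: a three-joint chain with one shape coefficient.
body = ToyParametricBody(
    template_joints=np.array([[0.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 2.0, 0.0]]),
    parents=[-1, 0, 1],
    shape_dirs=np.zeros((3, 3, 1)),
)
print(body.posed_joints(betas=np.zeros(1), pose=np.zeros((3, 3))))
```

The real model’s kinematic tree has 23 articulated joints and was fit to roughly 20,000 MRI volumes; the sketch above is only meant to show how pose and shape parameters flow through such a model.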

The extensive set of real-world scans that Fetal SMPL learned from helps sharpen its accuracy. Imagine stepping into a stranger’s shoes while blindfolded, and not only do they fit perfectly, but you also correctly guess which shoes they’re wearing; in much the same way, the tool closely matched fetal position and size in MRI frames it had never seen before. Fetal SMPL was misaligned by only about 3.1 millimeters on average.

The method could enable doctors to precisely measure things like the size of a baby’s head or abdomen and compare those measurements against those of healthy fetuses at the same gestational age. Fetal SMPL showed its clinical potential in early tests, where it produced accurate alignment results on a small number of real-world scans.
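As a rough illustration of that kind of comparison (not the study’s own clinical workflow), the snippet below turns a measurement taken from a fitted model into a z-score against a normative table indexed by gestational age. The table values and the measurement are made up for the example.

```python
# Illustrative only: toy normative table of head-circumference (mean, std) in mm
# by gestational age in weeks. These numbers are placeholders, not clinical data.
GA_NORMS_HC_MM = {
    28: (262.0, 12.0),
    32: (295.0, 13.0),
    36: (322.0, 14.0),
}

def z_score(measurement_mm, gestational_age_weeks, norms=GA_NORMS_HC_MM):
    """How many standard deviations the measurement sits from the age-matched mean."""
    mean, std = norms[gestational_age_weeks]
    return (measurement_mm - mean) / std

# A hypothetical 32-week fetus with a 270 mm head circumference measured from
# the fitted 3D model would sit roughly 1.9 standard deviations below this mean.
print(round(z_score(270.0, 32), 2))   # -1.92
```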

“Estimating the shape and pose of a fetus is challenging because the fetus is crammed into the tight confines of the uterus,” said lead author Yingcheng Liu SM ’21, an MIT PhD student and CSAIL researcher. “Our method overcomes this challenge by using an interconnected skeletal system beneath the surface of a 3D model, which represents the fetal body and its movements. It then relies on a coordinate descent algorithm to make its predictions, essentially alternating between guessing pose and shape from tricky data until it finds a reliable estimate.”
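The alternating strategy Liu describes can be sketched with a much simpler stand-in problem. In the toy below, “pose” is reduced to a translation and “shape” to a uniform scale, and the two are refined in turn until the fit stops changing; the real system alternates between full joint-angle and body-shape updates on MRI data, which this sketch does not attempt.

```python
import numpy as np

def fit_by_coordinate_descent(target, template, iters=10):
    """Alternately update a 'pose' variable (translation) and a 'shape' variable
    (uniform scale) to align a template point set with a target point set.
    Each sub-step has a closed-form least-squares solution."""
    scale, shift = 1.0, np.zeros(3)
    for _ in range(iters):
        # Pose step: best translation given the current scale.
        shift = (target - scale * template).mean(axis=0)
        # Shape step: best scale given the current translation.
        centered = target - shift
        scale = float((centered * template).sum() / (template * template).sum())
    return scale, shift

# Toy usage: recover a known scale and offset from noiseless points.
rng = np.random.default_rng(0)
template = rng.normal(size=(30, 3))
target = 1.2 * template + np.array([5.0, -2.0, 0.5])
print(fit_by_coordinate_descent(target, template))   # ~ (1.2, [5.0, -2.0, 0.5])
```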

In utero

To evaluate shape and pose accuracy, the researchers compared Fetal SMPL against the closest baseline they could find: a system that models infant growth, called “SMIL.” Since babies outside the uterus are larger than fetuses, the team shrank these models by 75 percent to level the playing field.

The system outperformed this baseline on a dataset of fetal MRIs collected at Boston Children’s Hospital between 24 and 37 weeks of gestation. Fetal SMPL recreated the actual scans more accurately because its models aligned closely with the real MRIs.

The method also aligns its models with the images efficiently, needing only about three iterations to reach a reasonable alignment. In one experiment, the researchers tracked how many estimates Fetal SMPL refined before settling on a final answer, and found that its accuracy plateaued by around the fourth step.
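A convergence check of this kind amounts to a few lines of bookkeeping. The helper below (a hypothetical stand-in, not the paper’s evaluation code) records the mean joint misalignment in millimeters after each fitting iteration and reports the step at which the improvement levels off.

```python
import numpy as np

def mean_alignment_error_mm(estimated_joints, reference_joints):
    """Average Euclidean distance between corresponding joints, in millimeters."""
    return float(np.linalg.norm(estimated_joints - reference_joints, axis=1).mean())

def step_where_accuracy_plateaus(estimates_per_iteration, reference_joints, tol_mm=0.1):
    """Return the first iteration whose improvement over the previous one falls
    below `tol_mm`, along with the full error curve."""
    errors = [mean_alignment_error_mm(e, reference_joints) for e in estimates_per_iteration]
    for step in range(1, len(errors)):
        if errors[step - 1] - errors[step] < tol_mm:
            return step, errors
    return len(errors) - 1, errors

# Usage sketch: `fits` would hold the model's joint estimates after each iteration.
# plateau_step, curve = step_where_accuracy_plateaus(fits, ground_truth_joints)
```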

The researchers have only just begun testing their system in the real world, where it has produced similarly accurate models in initial clinical testing. While these results are encouraging, the team notes that it will need to apply the approach to larger populations, different gestational ages, and a variety of disease cases to better understand the system’s capabilities.

Only skin deep

Liu also pointed out that the system currently helps analyze only what doctors can see on the surface of the fetus, since only bone-like structures lie beneath the model’s skin. To better monitor a baby’s internal health, such as liver, lung, and muscle development, the team intends to add volumetric data to the tool so it can model the fetus’s internal anatomy from scans. Such an upgrade would make the models more lifelike, but the current version of Fetal SMPL already offers an accurate (and unique) upgrade to 3D fetal health analysis.

“This study introduces an approach designed specifically for fetal MRI that effectively captures fetal movement and enhances the assessment of fetal development and health,” said Kiho Im, an associate professor of pediatrics at Harvard Medical School and a scientist in the Division of Newborn Medicine at BCH’s Fetal-Neonatal Neuroimaging and Developmental Science Center. Im, who was not involved in the paper, added that the approach “not only improves the diagnostic utility of fetal MRI, but also offers insights into the early functional development of the fetal brain in relation to body movement.”

“This work achieves a groundbreaking milestone by extending parametric surface body models to the earliest shape of human life: the fetus,” said Sergi Pujades, an associate professor at Université Grenoble Alpes who was not involved in the study. “It allows us to disentangle human shape and motion, which has proven key to understanding how adult body shape relates to metabolic status, and how infant motion is associated with neurodevelopmental disorders. Furthermore, the fact that the fetal model stems from, and is compatible with, the adult (SMPL) and infant (SMIL) body shape models opens an unprecedented opportunity to further quantify how the growth and motion of human shapes are affected by different conditions.”

Liu wrote the paper with three fellow CSAIL members: Peiqi Wang SM ’22, PhD ’25; MIT PhD student Sebastian Diaz; and senior author Polina Golland, the Sunlin and Priscilla Chou Professor of Electrical Engineering and Computer Science, a principal investigator at MIT CSAIL, and leader of the Medical Vision Group. Esra Abaci Turk, an assistant professor of pediatrics at BCH; Benjamin Billot, an Inria researcher; and Patricia Ellen Grant, a professor of pediatrics and radiology at Harvard Medical School, are also authors of the paper. The work was supported, in part, by the National Institutes of Health and the MIT-Sell-West subproject.

The researchers will present their work at the International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI) in September.
