Deep-Learning Model Shows Promise in Measuring Joint Attention

A new study described a deep-learning model that showed potential in measuring joint attention in children with autism spectrum disorder.

By Mark Melchionna

A study published in JAMA Network Open describes a deep-learning (DL) model that distinguished children with autism spectrum disorder (ASD) from children with typical development (TD) by analyzing videos of joint attention behaviors, offering a way to monitor symptoms.

According to the Centers for Disease Control and Prevention (CDC), about one in 36 children has ASD. While the CDC has a defined screening process for ASD, certain factors, such as joint attention, can indicate symptom severity in children with the condition.

Joint attention is a social function that is often limited in children with ASD. The JAMA Network Open study defines joint attention as sharing a focus on an object with another person, a component of social learning. Although there are no established methods for objectively measuring levels of joint attention, the researchers aimed to use DL to determine how this behavior contributes to ASD symptom severity. They also sought to train DL models to distinguish joint attention patterns in ASD from those in TD.

To do this, the researchers conducted a diagnostic study in which children with and without ASD completed joint attention tasks while their responses were recorded on video. A total of 95 children, all between 24 and 72 months of age and without visual or hearing impairments, completed the tasks.

Of the 95 children, 45 had ASD and 24 were boys. Researchers rated symptom severity using the Childhood Autism Rating Scale (CARS).

To assess the DL model, researchers used measures such as area under the receiver operating characteristic curve (AUROC), accuracy, and precision.
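As an illustrative aside, the sketch below shows how these three metrics can be computed for a binary ASD-vs-TD classifier using scikit-learn. The labels and prediction scores are hypothetical and are not drawn from the study or its code.

```python
# Illustrative sketch only: computing AUROC, accuracy, and precision for a
# binary ASD-vs-TD classifier. The labels and scores below are made up.
from sklearn.metrics import roc_auc_score, accuracy_score, precision_score

# Hypothetical ground-truth labels (1 = ASD, 0 = TD) and model outputs
y_true = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]           # true diagnoses
y_score = [0.92, 0.08, 0.85, 0.67, 0.30, 0.12,    # predicted probability of ASD
           0.78, 0.41, 0.88, 0.05]
y_pred = [1 if s >= 0.5 else 0 for s in y_score]  # threshold probabilities at 0.5

# AUROC uses the raw scores; accuracy and precision use the thresholded labels
print(f"AUROC:     {roc_auc_score(y_true, y_score):.3f}")
print(f"Accuracy:  {accuracy_score(y_true, y_pred):.3f}")
print(f"Precision: {precision_score(y_true, y_pred):.3f}")
```

AUROC reflects how well the model ranks ASD cases above TD cases across all thresholds, while accuracy and precision depend on the chosen decision threshold.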

In distinguishing ASD from TD, the DL model displayed good predictive performance for initiation of joint attention, with an AUROC of 99.6 percent, an accuracy of 97.6 percent, and a precision of 95.5 percent. For low-level response to joint attention, the model achieved an AUROC of 99.8 percent, an accuracy of 98.8 percent, and a precision of 98.9 percent. For high-level response to joint attention, it achieved an AUROC of 99.5 percent, an accuracy of 98.4 percent, and a precision of 98.8 percent.

The DL-based models for ASD symptom severity also showed relatively high predictive performance. For initiation of joint attention, they achieved an AUROC of 90.3 percent, an accuracy of 84.8 percent, and a precision of 76.2 percent. For low-level response to joint attention, they achieved an AUROC of 84.4 percent, an accuracy of 78.4 percent, and a precision of 74.7 percent. For high-level response to joint attention, they achieved an AUROC of 84.2 percent, an accuracy of 81 percent, and a precision of 68.6 percent.

Based on these results, the researchers concluded that artificial intelligence (AI)-based tools show potential for measuring joint attention, although further evidence is needed.

The use of DL to accelerate patient assessment continues to grow. 

In November 2022, researchers from Emory University and University Hospitals Cleveland created a DL-based liver fat assessment tool that helped flag patients at high risk from COVID-19.

Using this tool, researchers obtained automated measurements of liver fat from standard computed tomography (CT) scans. The use of CT scans to identify fatty liver disease is usually time-consuming, a press release noted, but the DL model showed the potential to help clinicians speed up the process.