Experimental results on a recently released laparoscopic dataset demonstrate the clear superiority of the proposed methods. The proposed method can facilitate access to key information in surgeries, the training of junior clinicians, explanations to patients, and the archiving of case files.

Accurate lymphoma segmentation on Positron Emission Tomography (PET) images is of great importance for medical diagnoses, such as distinguishing benign from malignant lesions. To this end, this paper proposes an adaptive weighting and scalable distance regularized level set evolution (AW-SDRLSE) method for delineating lymphoma boundaries on 2D PET slices. AW-SDRLSE has three important characteristics: 1) a scalable distance regularization term is proposed, with a parameter q that can control the contour's convergence rate and precision in theory; 2) a novel dynamic annular mask is proposed to calculate the mean intensities of local interior and exterior regions and further define the region energy term; 3) as the level set method is sensitive to parameters, an adaptive weighting strategy for the length and area energy terms is proposed using local region intensity and boundary direction information. AW-SDRLSE is evaluated on 90 cases of real PET data, achieving a mean Dice coefficient of 0.8796. Comparative results demonstrate the accuracy and robustness of AW-SDRLSE as well as its performance advantages over related level set methods. In addition, experimental results indicate that AW-SDRLSE can serve as a fine segmentation method that significantly improves the lymphoma segmentation results obtained by deep learning (DL) methods.

Recent research on deep neural networks (DNNs) has primarily focused on improving model accuracy. Given a proper deep learning framework, it is generally possible to increase the depth or layer width to achieve a higher level of accuracy. However, the huge number of model parameters imposes more computational and memory overhead and leads to parameter redundancy. In this article, we address the parameter redundancy problem in DNNs by replacing conventional full projections with bilinear projections (BPs). For a fully connected layer with D input nodes and D output nodes, applying BP can reduce the model space complexity from O(D²) to O(2D), achieving a deep model with a sublinear layer size. However, the structured projection has fewer degrees of freedom than the full projection, causing an underfitting problem. Therefore, we simply scale up the mapping size by increasing the number of output channels, which can maintain and even boost model accuracy. This makes it very parameter-efficient and handy to deploy such deep models on mobile systems with memory limitations. Experiments on four benchmark data sets show that applying the proposed BP to DNNs can achieve even higher accuracy than conventional full DNNs while significantly reducing the model size.
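As a concrete illustration of the bilinear projection idea, here is a minimal sketch in PyTorch (the framework, class name, initialization, and per-channel layout are assumptions for illustration, not the authors' exact design). A D-dimensional input with D = d² is reshaped into a d × d matrix X and mapped as U X Vᵀ, so the layer stores 2Cd² weights for C output channels instead of the D² weights of a full projection.

```python
import torch
import torch.nn as nn

class BilinearProjection(nn.Module):
    """Replace a full D x D projection (D = d * d) with Y = U @ X @ V^T,
    where X is the input reshaped to d x d; this stores 2 * C * d * d
    weights for C output channels instead of D * D."""

    def __init__(self, d: int, out_channels: int = 1):
        super().__init__()
        scale = d ** -0.5
        # one (U, V) pair per output channel; extra channels restore the
        # capacity lost to the structured (fewer-degrees-of-freedom) projection
        self.U = nn.Parameter(torch.randn(out_channels, d, d) * scale)
        self.V = nn.Parameter(torch.randn(out_channels, d, d) * scale)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, d = x.shape[0], self.U.shape[-1]
        X = x.view(b, 1, d, d)                     # (B, 1, d, d)
        Y = self.U @ X @ self.V.transpose(-1, -2)  # broadcast to (B, C, d, d)
        return Y.flatten(1)                        # (B, C * d * d)

# usage: a 1024-dim input (d = 32) mapped with 4 output channels
layer = BilinearProjection(d=32, out_channels=4)
out = layer(torch.randn(8, 1024))                  # -> (8, 4096)
```

With d = 32, the full projection would need 1024² ≈ 1M weights, while four bilinear channels need 8 · 32² ≈ 8K, which is the parameter saving the abstract refers to.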
Electronic health records (EHRs) are characterized as nonstationary, heterogeneous, noisy, and sparse data; therefore, it is challenging to learn the regularities or patterns inherent within them. In particular, sparseness, caused mostly by many missing values, has attracted the attention of researchers who have attempted to make better use of all available samples for solving a primary target task by defining a secondary imputation problem. Methodologically, existing methods, whether deterministic or stochastic, have applied different assumptions to impute missing values. However, once the missing values are imputed, most existing methods do not consider the fidelity or confidence of the imputed values in the modeling of downstream tasks. Undoubtedly, an erroneous or improper imputation of missing variables can cause difficulties in modeling as well as degraded performance. In this study, we present a novel variational recurrent network that 1) estimates the distribution of missing variables (e.g., the mean and variance), allowing uncertainty in the imputed values to be represented; 2) updates hidden states by explicitly applying a fidelity weight based on the variance of the imputed values during a recurrence (i.e., uncertainty propagation over time); and 3) predicts the possibility of in-hospital mortality. Notably, our model can conduct these procedures in a single stream and learn all network parameters jointly in an end-to-end manner. We validated the effectiveness of our method on the public MIMIC-III and PhysioNet Challenge 2012 data sets, where it outperformed the other state-of-the-art mortality-prediction methods considered in our experiments. In addition, the model represented the uncertainties of the imputed estimates well, showing a high correlation between the uncertainties and the mean absolute error (MAE) scores for imputation.
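To make the fidelity idea concrete, below is a minimal sketch of one recurrent step, assuming PyTorch, a GRU cell, and an exp(−variance) fidelity weight; all of these are illustrative choices, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class UncertaintyAwareImputeCell(nn.Module):
    """One recurrent step: predict a mean and variance for every input
    feature from the previous hidden state, fill missing entries with the
    mean, and down-weight imputed features by a fidelity term exp(-var)
    so uncertain imputations influence the hidden state less."""

    def __init__(self, n_features: int, n_hidden: int):
        super().__init__()
        self.to_stats = nn.Linear(n_hidden, 2 * n_features)  # -> mean, log-var
        self.cell = nn.GRUCell(n_features, n_hidden)

    def forward(self, x, mask, h):
        # mask is 1 where x was observed and 0 where it is missing
        mean, log_var = self.to_stats(h).chunk(2, dim=-1)
        x_hat = mask * x + (1 - mask) * mean             # imputed input
        fidelity = mask + (1 - mask) * torch.exp(-log_var.exp())
        h_next = self.cell(fidelity * x_hat, h)          # uncertainty propagation
        return h_next, mean, log_var
```

Observed features pass through with fidelity 1, while a large predicted variance drives the fidelity of an imputed feature toward 0, which is one simple way to realize the uncertainty propagation the abstract describes.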
The performance of a classifier in a brain-computer interface (BCI) system is highly dependent on the quality and quantity of training data. Typically, the training data are collected in a laboratory where the users perform tasks in a controlled environment. However, users' attention may be diverted in real-life BCI applications, and this may decrease the performance of the classifier. To improve the robustness of the classifier, additional data can be acquired under such conditions, but it is not practical to record electroencephalogram (EEG) data over several long calibration sessions. A potentially time- and cost-efficient solution is artificial data generation. Hence, in this study, we propose a framework based on deep convolutional generative adversarial networks (DCGANs) for generating artificial EEG to augment the training set and thereby improve the performance of a BCI classifier. For a comparative investigation, we designed a motor task experiment with diverted and focused attention conditions. We used an end-to-end deep convolutional neural network for classification between movement intention and rest using data from 14 subjects. Leave-one-subject-out (LOO) classification yielded baseline accuracies of 73.04% for diverted attention and 80.09% for focused attention without data augmentation. Using the proposed DCGAN-based framework for augmentation, the results showed a significant improvement of 7.32% for diverted attention (p < 0.01) and 5.45% for focused attention (p < 0.01). In addition, we applied the method to data set IVa from BCI Competition III to distinguish different motor imagery tasks; the proposed method increased the accuracy by 3.57% (p < 0.02). This study shows that using GANs for EEG augmentation can significantly improve BCI performance, especially in real-life applications where users' attention may be diverted.

External memory-based neural networks, such as differentiable neural computers (DNCs), have recently gained importance and popularity for solving complex sequential learning tasks that pose challenges to conventional neural networks. However, a trained DNC usually has low memory utilization efficiency. This article introduces a variation of the DNC architecture with convertible short-term and long-term memory, named CSLM-DNC. Unlike the memory architecture of the original DNC, the new scheme of short-term and long-term memories assigns different importance to memory locations for reading and writing, and the locations can be converted over time. This is mainly motivated by the human brain, where short-term memory stores large amounts of noisy and unimportant information and decays rapidly, while long-term memory stores important information and lasts for a long time. Conversion between these two types of memory is allowed and can be learned according to reading and writing frequency. We quantitatively and qualitatively evaluate the proposed CSLM-DNC architecture on question answering, copy, and repeated copy tasks, showing that it can significantly improve memory efficiency and learning performance.

As a group of complex neurodevelopmental disorders, autism spectrum disorder (ASD) has been reported to have a high overall prevalence, showing an unprecedented increase since 2000. Because the pathomechanism of ASD is unclear, it is challenging to diagnose individuals with ASD merely on the basis of clinical observations. Without the additional support of biochemical markers, this diagnostic difficulty can affect therapeutic decisions and, therefore, lead to delayed treatment. Recently, accumulating evidence has shown that both genetic abnormalities and chemical toxicants play important roles in the onset of ASD. In this work, a new multilabel classification (MLC) model is proposed to identify autistic risk genes and toxic chemicals on a large-scale data set. We first construct the feature matrices and partially labeled networks for autistic risk genes and toxic chemicals from multiple heterogeneous biological databases. Based on both global and local measure metrics, simulation experiments demonstrate that the proposed model achieves superior classification performance in comparison with other state-of-the-art MLC methods. Through manual validation against existing studies, 60% and 50% of the top-20 predicted risk genes are confirmed to be associated with ASD and autistic disorder, respectively. To the best of our knowledge, this is the first computational tool to identify ASD-related risk genes and toxic chemicals, which could lead to better therapeutic decisions for ASD.

This article is concerned with the robust convergence analysis of iterative learning control (ILC) against nonrepetitive uncertainties, where the contradiction between convergence conditions for the output tracking error and the input signal (or error) is addressed. A system equivalence transformation (SET) is proposed for robust ILC such that, given any desired reference trajectory, the output tracking problems for general nonsquare multi-input, multi-output (MIMO) systems can be equivalently transformed into those for the specific class of square MIMO systems with equal numbers of inputs and outputs. As a benefit of SET, a single unified condition suffices to guarantee both the uniform boundedness of all system signals and the robust convergence of the output tracking error, which avoids the condition contradiction problem that arises when applying the double-dynamics analysis approach to ILC. Simulation examples are included to demonstrate the validity of our established robust ILC results.
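For readers unfamiliar with ILC, the sketch below shows a generic P-type update law, u_{k+1}(t) = u_k(t) + γ e_k(t). This is standard background rather than the SET-based scheme of the article; the scalar gain and the toy static plant are assumptions for illustration.

```python
import numpy as np

def p_type_ilc(plant, u0, y_ref, gamma=0.5, n_trials=50):
    """Generic P-type ILC: after each complete trial, correct the whole
    input trajectory with that trial's tracking error,
        u_{k+1}(t) = u_k(t) + gamma * e_k(t)."""
    u = u0.copy()
    for _ in range(n_trials):
        e = y_ref - plant(u)   # tracking error over the full trial
        u = u + gamma * e      # learn from the previous iteration
    return u

# usage on a toy static square plant y(t) = 0.8 * u(t)
t = np.linspace(0.0, 1.0, 100)
y_ref = np.sin(2 * np.pi * t)
u = p_type_ilc(lambda u: 0.8 * u, np.zeros_like(t), y_ref)
```

For this toy plant the error contracts by a factor of |1 − 0.8γ| = 0.6 per trial; the article's contribution is precisely that, for nonsquare MIMO systems with nonrepetitive uncertainties, SET lets one verify boundedness and convergence with a single such contraction-type condition.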
When doing image classification, the core task of convolutional neural network (CNN)-based methods is to learn a better feature representation. Our analysis shows that a better feature representation in the layer before the softmax operation (BSM-layer) corresponds to a better feature embedding location, one with a larger distance to the separating hyperplane. Defining this property as the "Location Property" of CNNs, the core task of CNN-based methods can be regarded as finding the optimal feature embedding location in the BSM-layer. To achieve this, we first propose two feature embedding directions: the principal embedding direction (PE-direction) and the secondary embedding direction (SE-direction). We then propose a loss-based optimization framework, the location property loss (LP-loss), which makes feature representations move in the PE-direction and the SE-direction simultaneously during the training phase. LP-loss consists of two parts, LPPE and LPSE, where LPPE focuses on the PE-direction and LPSE focuses on the SE-direction.
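The excerpt does not give the exact forms of LPPE and LPSE, but the quantity the paragraph reasons about, the distance from a BSM-layer feature to the separating hyperplane, can be sketched as follows (PyTorch assumed; the function name and the hardest-rival-class choice are illustrative, not the authors' definition).

```python
import torch

def bsm_margin(features, weight, bias, labels):
    """Signed distance of each BSM-layer feature to the separating
    hyperplane between its true class and its hardest rival class;
    a larger positive value means a 'better' embedding location."""
    logits = features @ weight.t() + bias               # (B, C)
    rival = logits.scatter(1, labels[:, None], float("-inf"))
    j = rival.argmax(dim=1)                             # hardest rival class
    w_diff = weight[labels] - weight[j]                 # (B, D)
    b_diff = bias[labels] - bias[j]                     # (B,)
    num = (w_diff * features).sum(dim=1) + b_diff
    return num / w_diff.norm(dim=1).clamp_min(1e-12)
```

A loss that pushes this margin to grow during training would move features away from the separating hyperplane, which is the intuition behind optimizing the embedding location in the BSM-layer.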
