Then we allocate a weight to the loss of each patch-label pair during weakly supervised training to enable discriminative learning of diseased parts. Finally, we extract patch features from the network trained with loss reweighting and use an LSTM network to encode the weighted patch-feature sequence into a comprehensive feature representation. Extensive evaluations on this dataset and another public dataset demonstrate the advantages of the proposed method. We expect this research to further the agenda of plant disease recognition in the image processing community.

mSOUND is an open-source toolbox written in MATLAB. The toolbox is intended for modeling linear and nonlinear acoustic wave propagation in media (primarily biological tissues) with arbitrary heterogeneities, in which the speed of sound, density, attenuation coefficient, power-law exponent, and nonlinearity coefficient are all spatially varying functions. The computational model is an iterative one-way model based on a mixed-domain method. In this article, a general guideline is given along with three representative examples illustrating how to set up simulations with mSOUND. The first example uses the transient mixed-domain method (TMDM) forward projection to compute the transient acoustic field for a given source defined on a plane. The second example uses the frequency-specific mixed-domain method (FSMDM) forward projection to rapidly obtain the pressure distribution directly at the frequencies of interest, assuming linear or weakly nonlinear wave propagation. The third example demonstrates how to use TMDM backward projection to reconstruct the initial acoustic pressure field to facilitate photoacoustic tomography (PAT).
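The forward-projection idea behind the TMDM and FSMDM examples above can be illustrated, in a much-simplified setting, with the classic angular spectrum method: a monochromatic pressure field on a source plane is propagated to a parallel plane by a phase filter in the spatial-frequency domain. The sketch below is generic Python/NumPy, not mSOUND code, and assumes a linear, lossless, homogeneous medium; the grid spacing, frequency, and sound speed are arbitrary illustrative values.

```python
import numpy as np

def angular_spectrum_project(p0, dx, dz, f, c0=1500.0):
    """Propagate a monochromatic complex pressure field p0 (2-D source plane)
    forward by a distance dz using the angular spectrum method.
    Simplified linear, homogeneous-medium illustration -- not mSOUND code."""
    k = 2 * np.pi * f / c0                       # wavenumber in the medium
    ny, nx = p0.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)    # spatial frequencies (rad/m)
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=dx)
    KX, KY = np.meshgrid(kx, ky)
    kz2 = k**2 - KX**2 - KY**2
    # propagating components acquire a phase shift; evanescent ones decay
    kz = np.sqrt(np.abs(kz2)) * np.where(kz2 >= 0, 1.0, 1j)
    H = np.exp(1j * kz * dz)                     # spectral propagator
    return np.fft.ifft2(np.fft.fft2(p0) * H)

# a uniform (plane-wave) source keeps unit magnitude after propagation
field = angular_spectrum_project(np.ones((64, 64), dtype=complex),
                                 dx=1e-4, dz=0.01, f=1e6)
```

Mixed-domain models such as mSOUND's extend this single FFT step into an iterative plane-by-plane marching scheme so that heterogeneity, attenuation, and nonlinearity can be applied between steps.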
mSOUND (https://m-sound.github.io/mSOUND/home) is designed to be complementary to existing ultrasound modeling toolboxes and is expected to be useful for a wide range of applications in medical ultrasound, including treatment planning, PAT, and transducer design and characterization.

2-D sparse arrays may drive the development of low-cost 3-D systems that do not need thousands of elements controlled by expensive application-specific integrated circuits (ASICs). However, there is still some concern about their suitability for applications, such as Doppler investigation, that inherently involve poor signal-to-noise ratios (SNRs). In this article, a novel real-time 3-D pulsed-wave (PW) Doppler system based on a 256-element 2-D spiral array is presented. Coded transmission (TX) and matched filtering were implemented to improve the system SNR. Standard sonograms, as well as multigate spectral Doppler (MSD) profiles along lines that can be arbitrarily located in different planes, are presented. The performance of the system was assessed quantitatively on experimental data from a straight-tube flow phantom. An SNR increase of 11.4 dB was measured when transmitting linear chirps instead of standard sinusoidal bursts. For a qualitative assessment of the system's performance in more realistic conditions, an anthropomorphic phantom of the carotid arteries was used. Finally, real-time B-mode and MSD images were obtained from healthy volunteers.

Deep learning can bring time savings and increased reproducibility to medical image analysis. However, acquiring training data is challenging due to the time-intensive nature of labeling and high inter-observer variability in annotations. Rather than labeling images, in this work we propose an alternative pipeline in which images are generated from existing high-quality annotations using generative adversarial networks (GANs).
Annotations are derived automatically from previously built anatomical models and are transformed into realistic synthetic ultrasound images with paired labels using a CycleGAN. We demonstrate the pipeline by generating synthetic 2-D echocardiography images for comparison with existing deep learning ultrasound segmentation datasets. A convolutional neural network is trained to segment the left ventricle and left atrium using only synthetic images. Networks trained with synthetic images were extensively tested on four different unseen datasets of real images, with median Dice scores of 91, 90, 88, and 87 for left ventricle segmentation. These results match or exceed inter-observer results measured on real ultrasound datasets and are comparable to those of a network trained on a separate set of real images. The results demonstrate that the generated images can effectively be used in place of real data for training. The proposed pipeline opens the door to automatic generation of training data for many tasks in medical imaging, as the same process can be applied to other segmentation or landmark detection tasks in any modality. The source code and anatomical models are available to other researchers at https://adgilbert.github.io/data-generation/.

Brain connectivity alterations associated with mental disorders have been widely reported in both functional MRI (fMRI) and diffusion MRI (dMRI). However, extracting useful information from the vast amount of data afforded by brain networks remains a great challenge. By capturing network topology, graph convolutional networks (GCNs) have been shown to be superior in learning network representations tailored to identifying specific brain disorders. Existing graph construction techniques generally rely on a specific brain parcellation to define regions of interest (ROIs) for network construction, often limiting the analysis to a single spatial scale.
In addition, most methods focus on the pairwise relationships between ROIs and ignore higher-order associations among subjects. In this letter, we propose a mutual multi-scale triplet graph convolutional network (MMTGCN) to analyze functional and structural connectivity for brain disorder diagnosis. We first employ several templates with different scales of ROI parcellation to construct coarse-to-fine brain connectivity networks for each subject. Then, a triplet GCN (TGCN) module is developed to learn functional/structural representations of the brain connectivity networks at each scale, with the triplet relationships among subjects explicitly incorporated into the learning process. Finally, we propose a template mutual learning strategy to train the TGCNs at different scales collaboratively for disease classification. Experimental results on 1,160 subjects from three datasets with fMRI or dMRI data demonstrate that our MMTGCN outperforms several state-of-the-art methods in identifying three types of brain disorders.
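Triplet relationships among subjects, as used in the TGCN module above, are typically enforced with a triplet margin loss: the embedding of an anchor subject is pulled toward a same-class (positive) subject and pushed away from a different-class (negative) one by at least a margin. The snippet below is a minimal NumPy sketch of that standard loss, not the authors' MMTGCN implementation; the Euclidean distance and the margin value are illustrative assumptions.

```python
import numpy as np

def triplet_margin_loss(anchor, positive, negative, margin=1.0):
    """Generic triplet margin loss: encourages
    d(anchor, positive) + margin <= d(anchor, negative).
    Inputs are (batch, dim) embedding arrays.
    Illustrative sketch only -- not the MMTGCN implementation."""
    d_pos = np.linalg.norm(anchor - positive, axis=1)  # same-class distance
    d_neg = np.linalg.norm(anchor - negative, axis=1)  # cross-class distance
    return np.maximum(d_pos - d_neg + margin, 0.0).mean()

# when the positive coincides with the anchor and the negative is far away,
# the constraint is satisfied and the loss vanishes
a = np.zeros((2, 3))
n = np.ones((2, 3)) * 10.0
loss = triplet_margin_loss(a, a, n)  # → 0.0
```

In the MMTGCN setting, the embeddings would be the per-subject representations produced by the GCN at a given parcellation scale, so the loss shapes those representations to separate diagnostic classes.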