Hebertlindberg7351

From Iurium Wiki

The proposed model is trained with an end-to-end approach and validated on a public dataset. Comparisons with state-of-the-art models and ablation studies demonstrated improved performance in terms of Root Mean Square Error (RMSE) and Pearson Linear Correlation Coefficient.

Deep learning methods, especially convolutional neural networks, have been successfully applied to lesion segmentation in breast ultrasound (BUS) images. However, pattern complexity and intensity similarity between the surrounding tissue (i.e., background) and lesion regions (i.e., foreground) make lesion segmentation challenging. Although rich texture information is contained in the background, very few methods have tried to explore and exploit background-salient representations to assist foreground segmentation. Additionally, other characteristics of BUS images, i.e., 1) low-contrast appearance and blurry boundaries, and 2) significant variation in lesion shape and position, further increase the difficulty of accurate lesion segmentation. In this paper, we present a saliency-guided morphology-aware U-Net (SMU-Net) for lesion segmentation in BUS images. SMU-Net is composed of a main network with an additional middle stream and an auxiliary network. Specifically, we first propose generating foreground and background saliency maps that incorporate both low-level and high-level image structure. These saliency maps are then employed to guide the main network and the auxiliary network in learning foreground-salient and background-salient representations, respectively. Furthermore, we devise an additional middle stream consisting of background-assisted fusion, shape-aware, edge-aware and position-aware units. This stream receives coarse-to-fine representations from the main and auxiliary networks, efficiently fusing the foreground-salient and background-salient features and enhancing the network's ability to learn morphological information. Extensive experiments on five datasets demonstrate higher performance and superior robustness to dataset scale compared with several state-of-the-art deep learning approaches to breast lesion segmentation in ultrasound images.
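The saliency-guided design described above can be illustrated with a minimal sketch: a foreground saliency map gates one branch, its complement gates an auxiliary branch, and the two feature streams are fused before prediction. This is a toy PyTorch illustration with assumed shapes and hypothetical module names (SaliencyGuidedFusion and its layers), not the authors' SMU-Net implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SaliencyGuidedFusion(nn.Module):
    """Toy illustration of saliency-guided feature learning (not the SMU-Net code).

    A main branch is modulated by a foreground saliency map, an auxiliary branch
    by the complementary background map, and the two streams are fused by a
    1x1 convolution before the lesion-probability head.
    """

    def __init__(self, in_ch=1, feat_ch=16):
        super().__init__()
        self.main_enc = nn.Conv2d(in_ch, feat_ch, 3, padding=1)   # foreground-salient branch
        self.aux_enc = nn.Conv2d(in_ch, feat_ch, 3, padding=1)    # background-salient branch
        self.fuse = nn.Conv2d(2 * feat_ch, feat_ch, 1)            # background-assisted fusion
        self.head = nn.Conv2d(feat_ch, 1, 1)                      # per-pixel lesion logit

    def forward(self, image, fg_saliency):
        bg_saliency = 1.0 - fg_saliency                  # complementary background map
        f_main = F.relu(self.main_enc(image)) * fg_saliency
        f_aux = F.relu(self.aux_enc(image)) * bg_saliency
        fused = F.relu(self.fuse(torch.cat([f_main, f_aux], dim=1)))
        return torch.sigmoid(self.head(fused))          # lesion probability map


# Random "ultrasound" image and a placeholder saliency map, just to run the module.
x = torch.rand(1, 1, 128, 128)
sal = torch.rand(1, 1, 128, 128)
mask = SaliencyGuidedFusion()(x, sal)
print(mask.shape)  # torch.Size([1, 1, 128, 128])
```

In the full SMU-Net the gated branches are complete encoder-decoder networks and the middle stream additionally contains shape-, edge- and position-aware units; the sketch only conveys the gating-and-fusion idea.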
In this paper, we report on our experiences of running visual design workshops within the context of a Master's-level data visualization course in a remote setting. These workshops aim to teach students to explore the visual design space for data by creating and discussing hand-drawn sketches. We describe the technical setup employed, the different parts of the workshop, how the actual sessions were run, and to what extent the remote version can substitute for in-person sessions. In general, the visual designs created by the students, as well as the feedback they provided, indicate that the setup described here can be a feasible replacement for in-person visual design workshops.

Motion blur in dynamic scenes is an important yet challenging research topic. Recently, deep learning methods have achieved impressive performance for dynamic scene deblurring. However, the motion information contained in a blurry image has yet to be fully explored and accurately formulated, because (i) the ground truth of dynamic motion is difficult to obtain, (ii) the temporal ordering is destroyed during the exposure, and (iii) motion estimation from a blurry image is highly ill-posed. By revisiting the principle of camera exposure, motion blur can be described by the relative motion of sharp content with respect to each exposed position. In this paper, we define exposure trajectories, which represent the motion information contained in a blurry image and explain the causes of motion blur. A novel motion offset estimation framework is proposed to model pixel-wise displacements of the latent sharp image at multiple timepoints. Under mild constraints, our method can recover dense, (non-)linear exposure trajectories, which significantly reduce temporal disorder and ill-posedness. Finally, experiments demonstrate that the recovered exposure trajectories not only capture accurate and interpretable motion information from a blurry image, but also benefit motion-aware image deblurring and warping-based video extraction tasks. Code is available at https://github.com/yjzhang96/Motion-ETR.
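The exposure-trajectory view of blur formation can be made concrete with a small forward-model sketch: if per-pixel offsets at several sampled timepoints are known, a blurry frame is approximately the average of the sharp image warped along those offsets. The PyTorch code below uses assumed tensor shapes and hypothetical helper names (warp, synthesize_blur); it is not the Motion-ETR code, which goes in the opposite direction and estimates such offsets from the blurry input.

```python
import torch
import torch.nn.functional as F

def warp(img, offsets):
    """Warp `img` (N,C,H,W) by per-pixel `offsets` (N,2,H,W) given in pixels."""
    n, _, h, w = img.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    base = torch.stack((xs, ys), dim=0).float().unsqueeze(0)   # (1,2,H,W) pixel grid
    coords = base + offsets                                    # sampling positions
    # Normalise to [-1, 1] for grid_sample (x channel first, then y).
    coords[:, 0] = 2 * coords[:, 0] / (w - 1) - 1
    coords[:, 1] = 2 * coords[:, 1] / (h - 1) - 1
    grid = coords.permute(0, 2, 3, 1)                          # (N,H,W,2)
    return F.grid_sample(img, grid, align_corners=True)

def synthesize_blur(sharp, trajectory):
    """Average the sharp image warped to each exposure timepoint.

    `trajectory` has shape (T, N, 2, H, W): one pixel-wise offset field per
    sampled timepoint, mimicking a discretised exposure trajectory.
    """
    warped = [warp(sharp, trajectory[t]) for t in range(trajectory.shape[0])]
    return torch.stack(warped, dim=0).mean(dim=0)

sharp = torch.rand(1, 3, 64, 64)
traj = torch.zeros(7, 1, 2, 64, 64)
traj[:, :, 0] = torch.linspace(-3, 3, 7).view(7, 1, 1, 1)   # linear horizontal motion
blurry = synthesize_blur(sharp, traj)
```

Inverting this forward model, i.e., recovering the trajectory tensor from the blurry image alone, is exactly the ill-posed estimation problem the abstract refers to.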
Conventional deformable registration methods aim to solve an optimization model carefully designed for each image pair, and their computational costs are exceptionally high. In contrast, recent deep learning-based approaches can provide fast deformation estimation. However, these heuristic network architectures are fully data-driven and thus lack the explicit geometric constraints that are indispensable for generating plausible, e.g., topology-preserving, deformations. Moreover, these learning-based approaches typically pose hyper-parameter learning as a black-box problem and require considerable computational and human effort to perform many training runs. To tackle the aforementioned problems, we propose a new learning-based framework to optimize a diffeomorphic model via multi-scale propagation. Specifically, we introduce a generic optimization model to formulate diffeomorphic registration and develop a series of learnable architectures to obtain propagative updating in the coarse-to-fine feature space. Further, we propose a new bilevel self-tuned training strategy that allows efficient search of task-specific hyper-parameters. This training strategy increases flexibility across various types of data while reducing computational and human burdens. We conduct two groups of image registration experiments on 3D volume datasets: image-to-atlas registration on brain MRI data and image-to-image registration on liver CT data. Extensive results demonstrate the state-of-the-art performance of the proposed method, with a diffeomorphic guarantee and extreme efficiency.

In this article, we model a set of pixel-wise object segmentation tasks, i.e., automatic video segmentation (AVS), image co-segmentation (ICS) and few-shot semantic segmentation (FSS), from a unified view of segmenting objects from relational visual data. To this end, an attentive graph neural network (AGNN) is proposed, which tackles these tasks in a holistic fashion. Specifically, AGNN formulates the tasks as a process of iterative information fusion over data graphs. It builds a fully connected graph to efficiently represent visual data as nodes and relations between data instances as edges. Through parametric message passing (a toy sketch of such message passing appears below, after the BCI abstract), AGNN is able to fully capture knowledge from the relational visual data, enabling more accurate object discovery and segmentation. Experiments show that AGNN can automatically highlight primary foreground objects in video sequences (i.e., AVS) and extract common objects from noisy collections of semantically related images (i.e., ICS). Remarkably, with proper modifications, AGNN can even generalize its segmentation ability to new categories with only a few annotated examples (i.e., FSS). Taken together, our results demonstrate that AGNN provides a powerful tool applicable to a wide range of pixel-wise object pattern understanding tasks, given large-scale, or even only a few, relational visual data.

Brain-computer interfaces (BCIs), which enable people with severe motor disabilities to use their brain signals for direct control of objects, have attracted increasing interest in rehabilitation. To date, no study has investigated the feasibility of a BCI framework incorporating both intracortical and scalp signals. Methods: Concurrent local field potential (LFP) signals from the hand-knob area and scalp EEG were recorded in a paraplegic patient undergoing spike-based closed-loop neurorehabilitation training. Based on multimodal spatio-spectral feature extraction and Naive Bayes classification, we developed, for the first time, a novel LFP-EEG-BCI for motor intention decoding. A transfer learning (TL) approach was employed to further improve feasibility. The performance of the proposed LFP-EEG-BCI for four-class upper-limb motor intention decoding was assessed. Results: Using a decision fusion strategy, we showed that the LFP-EEG-BCI significantly (p < 0.05) outperformed the single-modality BCIs (LFP-BCI and EEG-BCI) in terms of decoding accuracy, with the best performance achieved using regularized common spatial pattern features. Interrogation of feature characteristics revealed discriminative spatial and spectral patterns, which may lead to new insights into brain dynamics during different motor imagery tasks and promote the development of efficient decoding algorithms. Moreover, we showed that similar classification performance could be obtained with few training trials, highlighting the efficacy of TL. Conclusion: The present findings demonstrate the superiority of the novel LFP-EEG-BCI for motor intention decoding. Significance: This work introduces a novel LFP-EEG-BCI that may lead to new directions for developing practical neurorehabilitation systems with high detection accuracy and multi-paradigm feasibility in clinical applications.
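The decision-fusion step described in the BCI abstract above can be sketched with two modality-specific classifiers whose class posteriors are averaged. The snippet below uses scikit-learn Gaussian naive Bayes on synthetic stand-in features; the feature construction (e.g., regularized CSP band power), trial counts and equal weighting are assumptions for illustration, not the study's pipeline.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)

# Synthetic stand-ins for spatio-spectral features from each modality,
# with four motor-intention classes.
n_trials, n_lfp_feat, n_eeg_feat = 200, 12, 16
y = rng.integers(0, 4, size=n_trials)
X_lfp = rng.normal(size=(n_trials, n_lfp_feat)) + y[:, None] * 0.5
X_eeg = rng.normal(size=(n_trials, n_eeg_feat)) + y[:, None] * 0.3

idx_train, idx_test = train_test_split(np.arange(n_trials), test_size=0.3,
                                       random_state=0, stratify=y)

clf_lfp = GaussianNB().fit(X_lfp[idx_train], y[idx_train])
clf_eeg = GaussianNB().fit(X_eeg[idx_train], y[idx_train])

# Decision fusion: average the per-class posteriors of the two classifiers.
proba = 0.5 * clf_lfp.predict_proba(X_lfp[idx_test]) \
      + 0.5 * clf_eeg.predict_proba(X_eeg[idx_test])
y_pred = proba.argmax(axis=1)
print("fused accuracy:", (y_pred == y[idx_test]).mean())
```

Weighted rather than equal averaging, or fusing at the feature level, are common variants; the abstract reports that decision-level fusion of LFP and EEG outperformed either single-modality classifier.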
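Returning to the AGNN abstract above: its core operation, attention-weighted message passing over a fully connected graph, can be sketched as follows, assuming each image or frame has already been encoded into a single node feature vector. The module and layer names are illustrative; the published AGNN operates on convolutional feature maps with a richer update rule, whereas this sketch uses flat vectors and a simple residual update for brevity.

```python
import torch
import torch.nn as nn

class AttentiveMessagePassing(nn.Module):
    """One round of attention-weighted message passing on a fully connected graph.

    Each node attends to every node; messages are aggregated with the attention
    weights and used to update the node states via a linear residual update.
    """

    def __init__(self, dim):
        super().__init__()
        self.query = nn.Linear(dim, dim)
        self.key = nn.Linear(dim, dim)
        self.value = nn.Linear(dim, dim)
        self.update = nn.Linear(2 * dim, dim)

    def forward(self, nodes):                        # nodes: (N, dim)
        attn = self.query(nodes) @ self.key(nodes).t() / nodes.shape[1] ** 0.5
        attn = torch.softmax(attn, dim=-1)           # (N, N) edge weights
        messages = attn @ self.value(nodes)          # aggregated neighbour information
        return nodes + self.update(torch.cat([nodes, messages], dim=-1))

# Five "frames" encoded as 64-d node features; three rounds of iterative fusion.
nodes = torch.rand(5, 64)
layer = AttentiveMessagePassing(64)
for _ in range(3):
    nodes = layer(nodes)
```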

The anti-PD-1 immune checkpoint inhibitor nivolumab is currently approved for the treatment of patients with metastatic renal cell carcinoma (mRCC); approximately 25% of patients respond. We hypothesized that we could identify a biomarker of response using radiomics to train a machine learning classifier to predict nivolumab response outcomes.

Patients with mRCC of different histologies treated with nivolumab in a single institution between 2013 and 2017 were retrospectively identified. Patients were labelled as responders (complete response [CR]/partial response [PR]/durable stable disease [SD]) or non-responders based on investigator tumor assessment using RECIST 1.1 criteria. For each patient, lesions were contoured from pre-treatment and first post-treatment computed tomography (CT) scans. This information was used to train a radial basis function support vector machine classifier to learn a prediction rule distinguishing responders from non-responders. The classifier was internally validated by 10-fold cross-validation. The use of novel texture features (two-point correlation measure, two-point cluster measure, and minimum spanning tree measure) did not improve performance.
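A minimal sketch of this kind of classification pipeline, assuming a per-patient radiomic feature matrix has already been extracted from the contoured CT lesions, is shown below. It uses scikit-learn with an RBF-kernel SVM and stratified 10-fold cross-validation; the synthetic data, feature counts and hyper-parameters are placeholders, not the study's dataset or settings.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(42)

# Placeholder radiomic feature matrix: one row per patient,
# one column per texture/shape/intensity feature from the CT lesions.
n_patients, n_features = 80, 50
X = rng.normal(size=(n_patients, n_features))
y = rng.integers(0, 2, size=n_patients)          # 1 = responder, 0 = non-responder

clf = make_pipeline(StandardScaler(),
                    SVC(kernel="rbf", C=1.0, gamma="scale", class_weight="balanced"))
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
scores = cross_val_score(clf, X, y, cv=cv, scoring="roc_auc")
print(f"10-fold AUC: {scores.mean():.2f} +/- {scores.std():.2f}")
```

Placing the feature standardization inside the pipeline keeps the scaling step within each cross-validation fold, so no information leaks from the held-out patients into training.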

We aimed to describe the oncological outcomes after radical cystectomy and chemo-radiation for localized small cell bladder cancer (SCBC).

This population-based analysis of localized SCBC from 1985-2018 in British Columbia included an analysis (analysis 1) of cancer-specific survival (CSS) and overall survival (OS) in patients treated with curative-intent radical cystectomy (RC) or radiation therapy (RT), and an analysis (analysis 2) of CSS and OS in patients treated with RC or chemoRT consistent with the SCBC Canadian consensus guideline.

Seventy-seven patients treated with curative intent were identified: 33 patients had RC and 44 had RT. For analysis 1, five-year OS was 29% and 39% for RC and RT, respectively (p=0.51), and five-year CSS was 35% and 52% for RC and RT, respectively (p=0.29). On multivariable analysis, higher Charlson comorbidity index (CCI) and the lack of neoadjuvant chemotherapy (NACHT) were associated with worse OS, while higher CCI and Eastern Cooperative Oncology Group (ECOG) performance status were associated with worse CSS.
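For illustration, survival comparisons and a multivariable analysis like those reported above are typically computed with Kaplan-Meier estimates, a log-rank test and a Cox proportional hazards model. The sketch below uses the lifelines package on a small hypothetical table; the column names and values are assumptions, not the study's data.

```python
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

# Hypothetical per-patient table; columns and values are illustrative only.
df = pd.DataFrame({
    "months":    [12, 60, 34, 75, 8, 90, 22, 48, 15, 66],
    "death":     [1, 0, 1, 0, 1, 0, 1, 1, 1, 0],          # 1 = died, 0 = censored
    "treatment": ["RC", "RT", "RC", "RT", "RC", "RT", "RC", "RT", "RC", "RT"],
    "cci":       [2, 1, 1, 0, 4, 3, 2, 2, 3, 0],          # Charlson comorbidity index
    "nacht":     [0, 1, 1, 1, 0, 1, 1, 0, 0, 0],          # neoadjuvant chemo given
})

# Kaplan-Meier estimate for one arm and a log-rank comparison of OS by modality.
rc, rt = df[df.treatment == "RC"], df[df.treatment == "RT"]
km = KaplanMeierFitter()
km.fit(rc["months"], rc["death"], label="RC")
print("RC 5-year OS:", km.predict(60))
print("log-rank p:", logrank_test(rc["months"], rt["months"],
                                  rc["death"], rt["death"]).p_value)

# Multivariable Cox model for OS with CCI and neoadjuvant chemotherapy.
cox = CoxPHFitter()
cox.fit(df[["months", "death", "cci", "nacht"]],
        duration_col="months", event_col="death")
cox.print_summary()
```

For a cancer-specific survival endpoint, the event indicator would instead mark cancer deaths, with deaths from other causes typically censored or handled as competing risks.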

Article authors: Hebertlindberg7351 (Kaplan Vasquez)