High levels of CD207+ cells in OLP compared with OLL may help explain the differences in the immunopathogenesis of the two diseases. Additionally, CD1a+ and CD207+ cells appear to be more essential to the immunopathogenesis of OLL than to the pathogenesis of OLP.
The compound 4-[10-(4-(2,5-dioxo-2,5-dihydro-1H-pyrrol-1-yl)butanamido)decyl]-11-[10-(β-D-glucopyranos-1-yl)-1-oxodecyl]-1,4,8,11-tetraazacyclotetradecane-1,8-diacetic acid is a newly synthesised molecule capable of binding in vivo to albumin to form a bioconjugate. This compound was named GluCAB (glucose-chelator-albumin-binder)-maleimide-1. Radiolabelled GluCAB-maleimide-1 and the subsequent bioconjugate are proposed for prospective oncological applications and work on the theoretical dual-targeting principle of tumour localization through the "enhanced permeability and retention (EPR) effect" and glucose metabolism.
The precursor, GluCAB-amine-2, and the subsequent GluCAB-maleimide-1 were synthesised via sequential regioselective, distal N-functionalisation of a cyclam template with a tether containing a synthetically derived β-glucoside, followed by a second linker to incorporate a maleimide moiety for albumin binding. GluCAB-amine-2 was radiolabelled with [64Cu]CuCl2 in 0.1 M NH4OAc (pH 3.5, 9[...]tem but a higher hepatic presence of the albumin-bound compound was noted. CONCLUSIONS, ADVANCES IN KNOWLEDGE AND IMPLICATIONS FOR PATIENT CARE: This initial evaluation paves the way for further investigation into the tumour-targeting potential of [64Cu]Cu-GluCAB-maleimide-1. An efficient targeted radioligand will allow for further development of a prospective theranostic agent for more personalised patient treatment, potentially improving overall patient prognosis, outcome and health care.

Precise characterization and analysis of the anterior chamber angle (ACA) are of great importance in facilitating clinical examination and diagnosis of angle-closure disease. Currently, the gold standard for diagnostic angle assessment is observation of the ACA by gonioscopy. However, gonioscopy requires direct contact between the gonioscope and the patient's eye, which is uncomfortable for patients and may deform the ACA, leading to false results. To this end, in this paper we explore a potential way of grading ACAs into open, appositional and synechial angles by anterior segment optical coherence tomography (AS-OCT), rather than by conventional gonioscopic examination. The proposed classification schema can benefit clinicians who seek to better understand the progression of the spectrum of angle-closure disease types, so as to further assist assessment and treatment at different stages of angle-closure disease. More specifically, we first use an image alignment method to generate sequences of AS-OCT images. The ACA region is then localized automatically by segmenting an important biomarker, the iris, as this is a primary structural cue in identifying angle-closure disease. Finally, the AS-OCT images acquired under both dark and bright illumination conditions are fed into our Multi-Sequence Deep Network (MSDN) architecture, in which a convolutional neural network (CNN) module extracts feature representations, and a novel ConvLSTM-TC module studies the spatial state of these representations. In addition, a novel time-weighted cross-entropy loss (TC) is proposed to optimize the output of the ConvLSTM, and the extracted features are further aggregated for classification. The proposed method is evaluated on 66 eyes, comprising 1584 AS-OCT sequences and a total of 16,896 images. The experimental results show that the proposed method outperforms existing state-of-the-art methods in applicability, effectiveness, and accuracy.

Accurate segmentation of the pancreas from abdominal scans is crucial for the diagnosis and treatment of pancreatic diseases. However, the pancreas is a small, soft and elastic abdominal organ with high anatomical variability and low tissue contrast in computed tomography (CT) scans, which makes segmentation challenging. To address this challenge, we propose a dual-input v-mesh fully convolutional network (FCN) to segment the pancreas in abdominal CT images. Specifically, dual inputs, i.e., original CT scans and images processed by a contrast-specific graph-based visual saliency (GBVS) algorithm, are simultaneously fed to the network to improve the contrast of the pancreas and other soft tissues. To further enhance the ability to learn context information and extract distinct features, a v-mesh FCN with an attention mechanism is initially utilized.
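As a rough illustration of the dual-input idea just described, the sketch below (assuming PyTorch; all class names and layer sizes are hypothetical, and this is not the authors' v-mesh FCN) shows how a raw CT slice and its GBVS saliency map can be encoded by separate branches and fused before a per-pixel segmentation head:

import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Basic convolutional unit used by both input branches.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )

class DualInputSegNet(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        self.ct_branch = conv_block(1, 32)        # raw CT intensities
        self.saliency_branch = conv_block(1, 32)  # GBVS-enhanced input
        self.fuse = conv_block(64, 64)            # merge the two streams
        self.head = nn.Conv2d(64, num_classes, kernel_size=1)

    def forward(self, ct, saliency):
        # Encode each input separately, concatenate along channels, then fuse.
        f = torch.cat([self.ct_branch(ct), self.saliency_branch(saliency)], dim=1)
        return self.head(self.fuse(f))            # per-pixel class logits

logits = DualInputSegNet()(torch.randn(1, 1, 128, 128), torch.randn(1, 1, 128, 128))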
In addition, we propose a spatial transformation and fusion (SF) module to better capture the geometric information of the pancreas and facilitate feature map fusion. We compare the performance of our method with several baseline and state-of-the-art methods on the publicly available NIH dataset. The comparison results show that our proposed dual-input v-mesh FCN model outperforms previous methods in terms of the Dice similarity coefficient (DSC), positive predictive value (PPV), sensitivity (SEN), average surface distance (ASD) and Hausdorff distance (HD). Moreover, ablation studies show that our proposed modules/structures are critical for effective pancreas segmentation.

The use of MRI for prostate cancer diagnosis and treatment is increasing rapidly. However, identifying the presence and extent of cancer on MRI remains challenging, leading to high variability in detection even among expert radiologists. Improvement in cancer detection on MRI is essential to reducing this variability and maximizing the clinical utility of MRI. To date, such improvement has been limited by the lack of accurately labeled MRI datasets. Data from patients who underwent radical prostatectomy enables the spatial alignment of digitized histopathology images of the resected prostate with the corresponding pre-surgical MRI. This alignment facilitates the delineation of detailed cancer labels on MRI via the projection of cancer from histopathology images onto MRI. We introduce a framework that performs 3D registration of whole-mount histopathology images to pre-surgical MRI in three steps. First, we developed a novel multi-image super-resolution generative adversarial network (miSRGAN), which learns inform[...]ilitates the projection of accurate cancer labels on MRI, allowing for the development of improved MRI interpretation schemes and machine learning models to automatically detect cancer on MRI.

The outbreak of COVID-19 around the world has put great pressure on health care systems, and many efforts have been devoted to artificial intelligence (AI)-based analysis of CT and chest X-ray images to help alleviate the shortage of radiologists and improve diagnostic efficiency. However, only a few works focus on AI-based lung ultrasound (LUS) analysis, in spite of its significant role in COVID-19. In this work, we aim to propose a novel method for severity assessment of COVID-19 patients from LUS and clinical information. Great challenges exist regarding the heterogeneous data, multi-modality information, and highly nonlinear mapping. To overcome these challenges, we first propose a dual-level supervised multiple instance learning module (DSA-MIL) to effectively combine the zone-level representations into patient-level representations. Then a novel modality alignment contrastive learning module (MA-CLR) is presented to combine representations of the two modalities, LUS and clinical information, by matching the two spaces while keeping the discriminative features. To train the nonlinear mapping, a staged representation transfer (SRT) strategy is introduced to maximally leverage the semantic and discriminative information in the training data. We trained the model with LUS data from 233 patients and validated it with 80 patients. Our method can effectively combine the two modalities, achieving an accuracy of 75.0% for four-level patient severity assessment and 87.5% for binary severe/non-severe identification.
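For intuition about the modality-alignment step, the sketch below shows a generic InfoNCE-style contrastive loss that pulls together LUS and clinical embeddings of the same patient and pushes apart those of different patients; this is an illustrative stand-in (the function name, embedding dimension, and temperature are assumptions), not the authors' exact MA-CLR objective:

import torch
import torch.nn.functional as F

def modality_alignment_loss(lus_emb, clin_emb, temperature=0.1):
    # lus_emb, clin_emb: (batch, dim) embeddings; row i of each comes
    # from the same patient, so the diagonal holds the positive pairs.
    lus = F.normalize(lus_emb, dim=1)
    clin = F.normalize(clin_emb, dim=1)
    logits = lus @ clin.t() / temperature          # pairwise similarities
    targets = torch.arange(lus.size(0), device=lus.device)
    # Symmetric cross-entropy: match LUS->clinical and clinical->LUS.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

loss = modality_alignment_loss(torch.randn(8, 128), torch.randn(8, 128))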
Besides, our method also provides interpretation of the severity assessment by grading each lung zone (with an accuracy of 85.28%) and identifying the pathological patterns of each lung zone. Our method has great potential in real clinical practice for COVID-19 patients, especially for pregnant women and children, in terms of progress monitoring, prognosis stratification, and patient management.

Limb salvage surgery for malignant pelvic tumors is the most challenging procedure in musculoskeletal oncology, owing to the complex anatomy of the pelvic bones and soft tissues. It is crucial to accurately resect pelvic tumors with appropriate margins in this procedure. However, many hospitals still lack efficient and reproducible image planning methods for tumor identification and segmentation. In this paper, we present a novel deep learning-based method to accurately segment pelvic bone tumors in MRI. Our method uses a multi-view fusion network to extract pseudo-3D information from two scans in different directions and improves the feature representation by learning a relational context. In this way, it can fully utilize the spatial information in thick MRI scans and reduce over-fitting when learning from a small dataset. Our proposed method was evaluated on two independent datasets collected from 90 and 15 patients, respectively. The segmentation accuracy of our method was superior to that of several competing methods and comparable to expert annotation, while the average time consumed decreased about 100-fold, from 1820.3 seconds to 19.2 seconds. In addition, we incorporated our method into an efficient workflow to improve the surgical planning process. Our workflow took only 15 minutes to complete surgical planning in a phantom study, a dramatic acceleration compared with the 2-day time span of a traditional workflow.

Deep learning models (with neural networks) have been widely used in challenging tasks such as computer-aided disease diagnosis based on medical images. Recent studies have shown that deep diagnostic models may not be robust in the inference process and may pose severe security concerns in clinical practice. Among all the factors that make such models non-robust, the most serious is adversarial examples. A so-called "adversarial example" is a well-designed perturbation that is not easily perceived by humans but causes a false output from deep diagnostic models with high confidence. In this paper, we evaluate the robustness of deep diagnostic models by adversarial attack. Specifically, we performed two types of adversarial attacks on three deep diagnostic models in both single-label and multi-label classification tasks, and found that these models are not reliable when attacked by adversarial examples. We further explored how adversarial examples attack the models by analyzing their quantitative classification results, intermediate features, the discriminability of features, and the correlation of estimated labels for both original/clean images and adversarial ones.
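To make the attack concrete, the sketch below implements the fast gradient sign method (FGSM), a standard way of crafting such adversarial perturbations; the abstract does not name the specific attacks used, so this is a representative example (in PyTorch, with an assumed toy classifier) rather than the paper's exact setup:

import torch
import torch.nn.functional as F

def fgsm_attack(model, images, labels, epsilon=0.01):
    # Perturb inputs by epsilon * sign(gradient of the loss w.r.t. the input).
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    adv = images + epsilon * images.grad.sign()  # small, hard-to-perceive step
    return adv.clamp(0.0, 1.0).detach()          # keep pixels in a valid range

# Toy usage with an assumed stand-in classifier (28x28 grayscale, 10 classes).
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(28 * 28, 10))
adv = fgsm_attack(model, torch.rand(4, 1, 28, 28), torch.randint(0, 10, (4,)))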