This article compares measurements of particle shape parameters from three-dimensional (3D) X-ray micro-computed tomography (μCT) and two-dimensional (2D) dynamic image analysis (DIA) from optical microscopy of a coastal bioclastic calcareous sand from Western Australia. This biogenic sand, from a high-energy environment, consists largely of the shells and tests of marine organisms and their clasts. A significant difference was observed between the two imaging techniques for measurements of aspect ratio, convexity, and sphericity: measured values of all three parameters are larger in 2D than in 3D. Correlation analysis indicates that sphericity is correlated with convexity in both 2D and 3D. These results are attributed to inherent limitations of DIA when applied to platy sand grains, and to the shape being, in part, dependent on the biology of the grain rather than on a purely random clastic process, as in typical siliceous sands. The statistical data have also been fitted to the Johnson bounded distribution for ease of future use. Overall, this research demonstrates the need for high-quality 3D microscopy when conducting a micromechanical analysis of biogenic calcareous sands.

Annotating microscopy images for nuclei segmentation by medical experts is laborious and time-consuming. To leverage the few existing annotations, also across multiple modalities, we propose a novel microscopy-style augmentation technique based on a generative adversarial network (GAN). Unlike other style-transfer methods, it can deal not only with different cell assay types and lighting conditions, but also with different imaging modalities, such as bright-field and fluorescence microscopy. Using disentangled representations for content and style, we can preserve the structure of the original image while altering its style during augmentation. We evaluate our data augmentation on the 2018 Data Science Bowl dataset, which consists of various cell assays, lighting conditions, and imaging modalities. With our style augmentation, the segmentation accuracy of the two top-ranked Mask R-CNN-based nuclei segmentation algorithms in the competition increases significantly. Thus, our augmentation technique renders the downstream task more robust to test-data heterogeneity and helps counteract class imbalance without resampling of minority classes.

Cardiovascular diseases (CVDs) are the primary cause of death; every year, many people die of heart attacks. The electrocardiogram (ECG) signal plays a vital role in diagnosing CVDs: it provides information about the heartbeat and can reveal cardiac arrhythmia. In this article, a novel deep-learning-based approach is proposed to classify ECG signals as normal or as one of sixteen arrhythmia classes. The ECG signal is preprocessed and converted into a 2D representation using the continuous wavelet transform (CWT). The time-frequency representation produced by the CWT is given to a deep convolutional neural network (D-CNN) with an attention block to extract a spatial feature vector (SFV); the attention block is proposed to capture global features. For dimensionality reduction of the SFV, a novel clump-of-features (CoF) framework is proposed. K-fold cross-validation is applied to obtain the reduced feature vector (RFV), which is given to the classifier to identify the arrhythmia class. The proposed framework achieves 99.84% accuracy with 100% sensitivity and 99.6% specificity, outperforming state-of-the-art techniques in accuracy, F1-score, and sensitivity.
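
The CWT front end described above can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: the Morlet wavelet, the 1–128 scale range, and the synthetic test beat are assumptions made here for demonstration.

```python
# Minimal sketch: convert a 1D ECG segment into a 2D time-frequency image
# with the continuous wavelet transform (CWT). The wavelet choice ('morl')
# and scale range are illustrative assumptions, not the paper's settings.
import numpy as np
import pywt

def ecg_to_scalogram(beat: np.ndarray, n_scales: int = 128) -> np.ndarray:
    """Return a 2D |CWT| scalogram of shape (n_scales, len(beat))."""
    scales = np.arange(1, n_scales + 1)
    coeffs, _freqs = pywt.cwt(beat, scales, wavelet="morl")
    scalogram = np.abs(coeffs)
    # Normalize to [0, 1] so the result can be fed to a CNN like an image.
    return (scalogram - scalogram.min()) / (np.ptp(scalogram) + 1e-12)

# Usage with a synthetic beat (a real pipeline would use ECG segments,
# e.g. individual heartbeats extracted from a recording):
beat = np.sin(2 * np.pi * 5 * np.linspace(0, 1, 256))
img = ecg_to_scalogram(beat)
print(img.shape)  # (128, 256)
```
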
The importance and relevance of digital-image forensics have attracted researchers to establish techniques for both creating and detecting forgeries. The core category of passive image forgery is copy-move forgery, which compromises the originality of an image by applying transformations to copied regions. In this paper, a frequency-domain image-manipulation method is presented. The method exploits the localized nature of the discrete wavelet transform (DWT) to locate the region of the host image to be manipulated. Both the patch and the host image are subjected to the DWT at the same level l to obtain 3l+1 sub-bands, and each sub-band of the patch is pasted into the identified region of the corresponding sub-band of the host image. The resulting manipulated host sub-bands are then subjected to the inverse DWT to obtain the final manipulated host image. The proposed method shows good resistance against detection by two frequency-domain forgery detection methods from the literature. The purpose of this research is to create a forgery and thereby highlight the need for detection methods that are robust against malicious copy-move forgery.
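
The sub-band splicing idea lends itself to a short sketch using PyWavelets' 2D DWT. This is a simplified illustration under stated assumptions (Haar wavelet, level 2, power-of-two image sizes, patch smaller than host); the paper's wavelet, level, and region-selection step are not reproduced here.

```python
# Minimal sketch of DWT-domain copy-move splicing: decompose patch and host
# at the same level (yielding 3*level + 1 sub-bands each), paste every patch
# sub-band into the corresponding host sub-band, then invert the transform.
# Wavelet, level, and paste offset are assumptions for illustration.
import numpy as np
import pywt

def _paste(band: np.ndarray, sub: np.ndarray, r: int, c: int) -> np.ndarray:
    band = band.copy()
    band[r:r + sub.shape[0], c:c + sub.shape[1]] = sub
    return band

def dwt_splice(host: np.ndarray, patch: np.ndarray, row: int, col: int,
               level: int = 2, wavelet: str = "haar") -> np.ndarray:
    """Paste `patch` into `host` at (row, col), sub-band by sub-band."""
    host_c = pywt.wavedec2(host, wavelet, level=level)
    patch_c = pywt.wavedec2(patch, wavelet, level=level)
    # Approximation band at the coarsest level (downsampled by 2**level).
    s = 2 ** level
    host_c[0] = _paste(host_c[0], patch_c[0], row // s, col // s)
    # Detail bands (LH, HL, HH) at each level, coarse to fine.
    for lvl in range(1, level + 1):
        s = 2 ** (level - lvl + 1)
        host_c[lvl] = tuple(
            _paste(hb, pb, row // s, col // s)
            for hb, pb in zip(host_c[lvl], patch_c[lvl])
        )
    return pywt.waverec2(host_c, wavelet)
```
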
Applying machine learning (ML), and especially deep learning, to understand visual content is becoming common practice in many application areas. However, little attention has been given to its use within the multimedia creative domain. ML is already popular for content creation, but the progress achieved so far addresses essentially textual content or the identification and selection of specific types of content. A wealth of possibilities is yet to be explored by bringing ML into the multimedia creative process, allowing the knowledge it infers to influence automatically how new multimedia content is created. The work presented in this article contributes towards this goal in three distinct ways: firstly, it proposes a methodology to re-train popular neural network models to identify new thematic concepts in static visual content and attach meaningful annotations to the detected regions of interest; secondly, it presents varied visual digital effects and corresponding tools that can be called upon automatically to apply those effects to a previously analyzed photo; thirdly, it defines a complete automated creative workflow, from the acquisition of a photograph and corresponding contextual data, through the ML region-based annotation, to the automatic application of digital effects and the generation of a semantically aware multimedia story driven by the previously derived situational and visual contextual data. Additionally, it presents a variant of this automated workflow that offers the user the possibility of manipulating the automatic annotations in an assisted manner. The final aim is to transform a static digital photo into a short video clip that takes the acquired information into account. The result contrasts strongly with current standard approaches, which create random movements, by producing an intelligent, content- and context-aware video.

The four bands of fully polarimetric SAR data convey scattering characteristics of the Earth's background but are not perceptually easy for an observer to use. In this work, the four channels of fully polarimetric SAR images, namely HH, HV, VH, and VV, are combined to derive a color image of the Earth's background that is perceptually excellent for the human eye and at the same time provides accurate information about the scattering mechanisms in each pixel. Most of the elementary scattering mechanisms are related to specific colors and land cover types. The innovative nature of the proposed approach lies in two consecutive coloring procedures. The first is a fusion procedure that moves all the information contained in the four polarimetric channels into three derived RGB bands; this is achieved by means of the Cholesky decomposition and brings to the RGB output the correlation properties of a natural color image. The second procedure moves the color information of the RGB image into the CIELab color space, which is perceptually uniform; the color information is then evenly distributed by means of color equalization in CIELab, after which the inverse transformation yields the final RGB image. Together, these two procedures map the PolSAR information about scattering mechanisms on the Earth's surface onto a meaningful color image whose appearance is close to Google Earth maps, while giving better color correspondence to the various land cover types than existing SAR color representation methods.

Malaria is a globally widespread disease caused by parasitic protozoa transmitted to humans by infected female Anopheles mosquitoes. In humans it is caused only by the parasite Plasmodium, further classified into four different species. Malaria parasites can be identified by analysing digital microscopic blood smears, which is tedious, time-consuming, and error-prone, so automating the process has assumed great importance as it relieves the laborious manual review and diagnosis. This work focuses on deep-learning-based models, comparing off-the-shelf architectures for classifying healthy and parasite-affected cells, investigating the four-class classification of the Plasmodium falciparum stages of life and, finally, evaluating the robustness of the models with cross-dataset experiments on two different datasets. The main contributions to research in this field can be summarized as follows: (i) a comparison of off-the-shelf architectures in the task of classifying healthy and parasite-affected cells, (ii) an investigation of the four-class classification of P. falciparum life stages, and (iii) an evaluation of model robustness through cross-dataset experiments, as a basis for further developments and modifications. Moreover, the mobile-oriented architectures showed promising and satisfactory performance in the classification of malaria parasites; a minimal fine-tuning sketch for such an architecture follows below. The obtained results enable extensive improvements, specifically oriented to the application of object detectors for type and stage-of-life recognition, even in mobile environments.
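
As one concrete instance of the off-the-shelf comparison described above, the following is a minimal transfer-learning sketch in PyTorch. The choice of MobileNetV2 (as an example of a mobile-oriented architecture) and the two-class head are assumptions for illustration, not the paper's exact setup.

```python
# Minimal sketch: fine-tune an off-the-shelf, mobile-oriented network
# (here MobileNetV2) to classify blood-smear cells as healthy vs. parasitized.
# Architecture and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision import models

def build_malaria_classifier(num_classes: int = 2) -> nn.Module:
    model = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.DEFAULT)
    # Replace the ImageNet head with a task-specific classification layer.
    in_features = model.classifier[1].in_features
    model.classifier[1] = nn.Linear(in_features, num_classes)
    return model

# For the four-class P. falciparum life-stage task, set num_classes=4.
model = build_malaria_classifier()
x = torch.randn(8, 3, 224, 224)                  # a batch of cell images
logits = model(x)                                # shape: (8, 2)
loss = nn.CrossEntropyLoss()(logits, torch.randint(0, 2, (8,)))
loss.backward()                                  # standard fine-tuning step
```
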
Ultrasound imaging of the lung has played an important role in managing patients with COVID-19-associated pneumonia and acute respiratory distress syndrome (ARDS). During the COVID-19 pandemic, lung ultrasound (LUS), or point-of-care ultrasound (POCUS), has been a popular diagnostic tool due to its unique imaging capability and logistical advantages over chest X-ray and CT. Pneumonia/ARDS is associated with the sonographic appearance of pleural-line irregularities and B-line artefacts, which are caused by interstitial thickening and inflammation and increase in number with severity. Artificial intelligence (AI), particularly machine learning, is increasingly used as a critical tool to assist clinicians in LUS image reading and COVID-19 decision making. We conducted a systematic review, drawing on academic databases (PubMed and Google Scholar) and preprints on arXiv or TechRxiv, of the state-of-the-art machine learning technologies for LUS images in COVID-19 diagnosis. Openly accessible LUS datasets are listed. Various machine learning architectures have been employed to evaluate LUS and have shown high performance. This paper summarizes the current development of AI for COVID-19 management and the outlook for emerging trends in combining AI-based LUS with robotics, telehealth, and other techniques.

Introduced in the late 1980s for generalization purposes, pruning has now become a staple for compressing deep neural networks. Despite many innovations in recent decades, pruning approaches still face core issues that hinder their performance or scalability. Drawing inspiration from early work in the field, especially the use of weight decay to achieve sparsity, we introduce Selective Weight Decay (SWD), which carries out efficient, continuous pruning throughout training. Our approach, theoretically grounded in Lagrangian smoothing, is versatile and can be applied to multiple tasks, networks, and pruning structures. We show that SWD compares favorably to state-of-the-art approaches, in terms of the performance-to-parameters ratio, on the CIFAR-10, Cora, and ImageNet ILSVRC2012 datasets.
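
The core idea of penalizing only the weights that the pruning criterion currently targets can be sketched as follows. This is a simplified illustration, not the paper's exact formulation: the global magnitude threshold and the ramped coefficient `a` are assumptions, and structured pruning variants are not shown.

```python
# Minimal sketch of selective weight decay: at each step, an extra L2 decay
# term is applied only to the fraction of weights that magnitude pruning
# would currently remove, pushing them continuously toward zero during
# training. The schedule for `a` is a simplified assumption.
import torch

def swd_penalty(model: torch.nn.Module, target_sparsity: float,
                a: float) -> torch.Tensor:
    """Extra L2 penalty on the smallest-magnitude weights (pruning targets)."""
    with torch.no_grad():
        all_w = torch.cat([p.abs().view(-1) for p in model.parameters()
                           if p.dim() > 1])       # weight matrices/kernels only
        k = max(1, int(target_sparsity * all_w.numel()))
        threshold = all_w.kthvalue(k).values      # magnitude cutoff
    penalty = torch.zeros((), device=threshold.device)
    for p in model.parameters():
        if p.dim() > 1:
            mask = (p.abs() <= threshold).float() # weights slated for removal
            penalty = penalty + (mask * p).pow(2).sum()
    return a * penalty

# In the training loop, add the penalty to the task loss:
#   loss = criterion(model(x), y) + swd_penalty(model, 0.9, a_t)
# where a_t grows over training (e.g. exponentially from a_min to a_max),
# so the targeted weights are effectively pruned by the end of training.
```
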
