Marcussenlindgren8463

From Iurium Wiki

Therefore, this technique may be applicable in clinical environments as an economical, non-contact, and easily deployable monitoring system, and it also has potential applications in home health monitoring.

Cultural heritage images are among the primary media for communicating and preserving the cultural values of a society. The images represent concrete and abstract content and symbolise the social, economic, political, and cultural values of the society. However, much of the value embedded in these images remains unexploited, partly due to the absence of methodological and technical solutions to capture, represent, and exploit the latent information. With the emergence of new technologies and the availability of cultural heritage images in digital formats, the methodology followed to semantically enrich and utilise such resources becomes a vital factor in supporting users' needs. This paper presents a methodology proposed to unearth the cultural information communicated via cultural digital images by applying Artificial Intelligence (AI) technologies (such as Computer Vision (CV) and semantic web technologies). To this end, the paper presents a methodology that enables efficient analysis and enrichment of a large collection of cultural images, covering all the major phases and tasks. The proposed method is applied and tested in a case study on cultural image collections from the Europeana platform. The paper further presents the analysis of the case study, the challenges, the lessons learned, and promising future research areas on the topic.

Quantitative phase imaging (QPI) techniques are widely used for the label-free examination of transparent biological samples. QPI techniques can be broadly classified into interference-based and interferenceless methods. The interferometric methods, which record the complex amplitude, are usually bulky, with many optical components, and use coherent illumination.
The interferenceless approaches, which need only the intensity distribution and work using phase-retrieval algorithms, have gained attention as they require fewer resources, lower cost, and less space, and can work with incoherent illumination. With rapid developments in computational optical techniques and deep learning, QPI has reached new levels of application. In this tutorial, we discuss one of the basic optical configurations of a lensless QPI technique based on the phase-retrieval algorithm. Simulative studies on QPI of thin, thick, and greyscale phase objects, with assistive pseudo-codes and computational codes in Octave, are provided. Binary phase samples with positive and negative resist profiles were fabricated using lithography, and single-plane and two-plane phase objects were constructed. Light diffracted from a point object is modulated by the phase samples, and the corresponding intensity patterns are recorded. The phase-retrieval approach is applied for 2D and 3D phase reconstructions. Commented codes in Octave for image acquisition and automation using a web camera in an open-source operating system are provided.

Collisionless media devoid of intrinsic stresses, for example, a dispersed phase in a multiphase medium, exhibit a much wider variety of space-time structures and features than collisional media, for example, a carrier gas or liquid phase. This is a consequence of the fact that evolution in such media occurs in phase space, i.e., in a space of greater dimension than the usual coordinate space. As a consequence, the formation of features in collisionless media (clustering or, vice versa, a loss of continuity) can occur primarily in velocity space, which, in contrast to features in coordinate space (folds, caustics, or voids), is poorly observed directly.
To identify such features, it is necessary to use visualization methods that allow us to consider, in detail, the evolution of the medium in velocity space. This article is devoted to the development of techniques for visualizing the degree of anisotropy of the velocity fields of collisionless interpenetrating media for the entire set of beams (vector-tensor fields).

Deep learning (DL) convolutional neural networks (CNNs) have been rapidly adopted in very high spatial resolution (VHSR) satellite image analysis. DLCNN-based computer vision (CV) applications primarily aim at everyday object detection from standard red, green, blue (RGB) imagery, while earth-science remote sensing applications focus on geo-object detection and classification from multispectral (MS) imagery. MS imagery includes RGB and narrow spectral channels from the near- and/or middle-infrared regions of the reflectance spectrum. The central objective of this exploratory study is to understand to what degree MS band statistics govern DLCNN model predictions. We scaffold our analysis on a case study that uses Arctic tundra permafrost landform features called ice-wedge polygons (IWPs) as candidate geo-objects. We chose Mask R-CNN as the DLCNN architecture to detect IWPs from eight-band WorldView-2 VHSR satellite imagery. A systematic experiment was designed to understand the impact of choosing the optimal three-band combination on model prediction. We tested five cohorts of three-band combinations, coupled with statistical measures to gauge the spectral variability of the input MS bands. The candidate scenes produced high model detection accuracies, with F1 scores ranging from 0.89 to 0.95, for two different band combinations (coastal blue, blue, green (1,2,3) and green, yellow, red (3,4,5)). The mapping workflow discerned the IWPs with low random and systematic error, on the order of 0.17-0.19 and 0.20-0.21, respectively, for band combination (1,2,3).
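For reference, the F1 scores quoted for these detections combine precision and recall; a minimal sketch of the computation, with hypothetical true/false positive counts rather than the study's data:

```python
def f1_score(tp, fp, fn):
    """Harmonic mean of precision and recall for a detection task."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical counts for one scene (not from the study):
# 90 ice-wedge polygons correctly detected, 10 missed, 10 false alarms.
print(round(f1_score(tp=90, fp=10, fn=10), 2))  # → 0.9
```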
Results suggest that the prediction accuracy of the Mask R-CNN model is significantly influenced by the input MS bands. Overall, our findings accentuate the importance of considering the image statistics of the input MS bands and of carefully selecting optimal bands for DLCNN predictions when DLCNN architectures are restricted to three spectral channels.

With the increased use of lightweight materials with low factors of safety, non-destructive testing becomes increasingly important. Thanks to advances in infrared camera technology, pulse thermography is a cost-efficient way to detect subsurface defects non-destructively. However, currently available evaluation algorithms either have a high computational cost or show poor performance if any geometry other than the simplest kind is surveyed. We present an extension of the thermographic signal reconstruction technique which can automatically segment and image defects against sound areas, while also estimating the defect depth, all at low computational cost. We verified our algorithm using real-world measurements and compared our results to standard active thermography algorithms of similar computational complexity. We found that our algorithm can detect defects more accurately, especially when more complex geometries are examined.

Recently, our world witnessed major events that attracted a lot of attention to the importance of automatic crowd scene analysis. For example, the COVID-19 outbreak and public events require an automatic system to manage, count, secure, and track a crowd sharing the same area. However, analyzing crowd scenes is very challenging due to heavy occlusion, complex behaviors, and posture changes. This paper surveys deep learning-based methods for analyzing crowded scenes. The reviewed methods are categorized as (1) crowd counting and (2) crowd action recognition. Moreover, crowd scene datasets are surveyed.
In addition to the above surveys, this paper proposes an evaluation metric for crowd scene analysis methods. This metric estimates the difference between the calculated crowd count and the actual count in crowd scene videos.

Current point cloud extraction methods based on photogrammetry generate large numbers of spurious detections that hamper useful 3D mesh reconstruction or, even worse, the possibility of adequate measurements. Moreover, noise removal methods for point clouds are complex, slow, and incapable of coping with semantic noise. In this work, we present body2vec, a model-based body segmentation tool that uses a specifically trained neural network architecture. Body2vec performs human body point cloud reconstruction from videos taken on hand-held devices (smartphones or tablets), achieving high-quality anthropometric measurements. The main contribution of the proposed workflow is a background removal step, which avoids the spurious point generation that is usual in photogrammetric reconstruction. A group of 60 persons was filmed with a smartphone, and the corresponding point clouds were obtained automatically with standard photogrammetric methods. As a 3D silver standard, we used the clean meshes obtained at the same time with LiDAR sensors, post-processed and noise-filtered by expert anthropological biologists. As a gold standard, we used anthropometric measurements of the waist and hip of the same people, taken by expert anthropometrists. Applying our method to the raw videos significantly enhanced the quality of the point cloud compared with the LiDAR-based mesh, and of the anthropometric measurements compared with the actual hip and waist perimeters measured by the anthropometrists.
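A comparison of this kind against gold-standard tape measurements is commonly summarized by an aggregate error; a minimal sketch using mean absolute percentage error, with made-up waist perimeters (cm) rather than the study's data:

```python
def mape(predicted, reference):
    """Mean absolute percentage error between paired measurements."""
    assert len(predicted) == len(reference)
    errors = [abs(p - r) / r for p, r in zip(predicted, reference)]
    return 100.0 * sum(errors) / len(errors)

# Hypothetical waist perimeters in cm (reconstruction vs. anthropometrist).
predicted = [81.2, 95.0, 70.4]
reference = [80.0, 96.0, 71.0]
print(round(mape(predicted, reference), 2))
```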
In both contexts, the resulting quality of body2vec is equivalent to the LiDAR reconstruction.

In this work, a novel algorithmic scheme is developed that processes echocardiogram videos, tracks the movement of the mitral valve leaflets, and thereby estimates whether the movement is symptomatic of a healthy or diseased heart. This algorithm uses automatic Otsu thresholding to find a closed boundary around the left atrium, with the basic presumption that it is situated in the bottom-right corner of the apical four-chamber view. A centroid is calculated, and protruding prongs are taken within a 40-degree cone above the centroid, where the mitral valve is located. Binary images are obtained from the videos in which the mitral valve leaflets have different pixel values from the cavity of the left atrium. Thus, the points where the prongs touch the valve show where the mitral valve leaflets are located. The standard deviation of these points is used to calculate the closeness of the leaflets. The estimated valve movement across subsequent frames is used to determine whether the movement is regular or affected by heart disease. Tests conducted with numerous videos containing both healthy and diseased hearts attest to our method's efficacy, a key novelty being that it is fully unsupervised and computationally efficient.

The classification of histopathology images requires a physician with years of experience to classify them accurately. In this study, an algorithm was developed to assist physicians in classifying histopathology images; the algorithm receives a histopathology image as input and produces the percentage of cancer presence. The primary classifier used in this algorithm is the convolutional neural network, a state-of-the-art image classifier that can classify images without relying on the manual selection of features from each image.
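A minimal sketch of the convolution step at the core of such a classifier, in plain NumPy rather than a deep learning framework (the kernel values here are illustrative, hand-picked ones; a CNN learns many such kernels from the training data instead):

```python
import numpy as np

def conv2d_valid(image, kernel):
    """2D 'valid' cross-correlation: slide the kernel over the image."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Illustrative 3x3 Laplacian-style kernel applied to a tiny 'image'.
image = np.arange(25, dtype=float).reshape(5, 5)
kernel = np.array([[0.0,  1.0, 0.0],
                   [1.0, -4.0, 1.0],
                   [0.0,  1.0, 0.0]])
feature_map = np.maximum(conv2d_valid(image, kernel), 0.0)  # ReLU
print(feature_map.shape)  # (3, 3)
```

On a linear intensity ramp like the toy image above, this Laplacian kernel responds with zeros everywhere, which is exactly why such kernels highlight edges and flat gradients are suppressed.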
The main aim of this research is to improve the robustness of the classifier by comparing six different first-order stochastic gradient-based optimizers to select the best one for this particular dataset. The dataset used to train the classifier is the public PatchCamelyon dataset, which provides 220,025 images (60% positive and 40% negative) to train the classifier and 57,458 images to test its performance.
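An optimizer comparison of this kind can be illustrated on a toy problem; a minimal sketch contrasting two common first-order methods, plain SGD and Adam (update rules only; the learning rates and the quadratic objective are illustrative, not the study's setup):

```python
import math

def sgd_step(w, grad, lr=0.1):
    """Plain stochastic gradient descent update."""
    return w - lr * grad

def adam_step(w, grad, state, lr=0.1, b1=0.9, b2=0.999, eps=1e-8):
    """Adam update: momentum plus per-parameter adaptive scaling."""
    state["t"] += 1
    state["m"] = b1 * state["m"] + (1 - b1) * grad
    state["v"] = b2 * state["v"] + (1 - b2) * grad ** 2
    m_hat = state["m"] / (1 - b1 ** state["t"])   # bias-corrected momentum
    v_hat = state["v"] / (1 - b2 ** state["t"])   # bias-corrected variance
    return w - lr * m_hat / (math.sqrt(v_hat) + eps)

# Minimize f(w) = (w - 3)^2 with each optimizer; the gradient is 2 * (w - 3).
w_sgd, w_adam = 0.0, 0.0
state = {"t": 0, "m": 0.0, "v": 0.0}
for _ in range(200):
    w_sgd = sgd_step(w_sgd, 2 * (w_sgd - 3))
    w_adam = adam_step(w_adam, 2 * (w_adam - 3), state)
print(round(w_sgd, 3), round(w_adam, 3))
```

Both converge toward the minimum at w = 3; on real training data the ranking between such optimizers depends on the dataset, which is exactly what the study's comparison measures.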

Article authors: Marcussenlindgren8463 (Graves Spence)