Holbrookhelms0760

From Iurium Wiki

A majority of foodborne illnesses result from inappropriate food handling practices. One proven practice for reducing pathogens is to perform effective hand hygiene before all stages of food handling. In this paper, we design a multi-camera system that uses video analytics to recognize hand-hygiene actions, with the goal of improving hand-hygiene effectiveness. Our proposed two-stage system processes untrimmed video from both egocentric and third-person cameras. In the first stage, a low-cost coarse classifier efficiently localizes the hand-hygiene period; in the second stage, more complex refinement classifiers recognize seven specific actions within that period. We demonstrate that our two-stage system has significantly lower computational requirements without a loss of recognition accuracy. Specifically, the computationally complex refinement classifiers process less than 68% of the untrimmed videos, and we anticipate further computational gains for videos that contain a larger fraction of non-hygiene actions. Our results demonstrate that a carefully designed video action recognition system can play an important role in improving hand hygiene for food safety.

Artworks have a layered structure that is subject to alterations caused by various factors. Defects at the sub-millimeter scale may be monitored using laser interferometric techniques. The aim of this work was to develop a compact system for performing laser speckle imaging in situ, enabling effective mapping of subsurface defects in paintings. The device was designed to be versatile, with the possibility of optimizing its performance through easy parameter adjustment. The system exploits a laser speckle pattern generated through an optical diffuser and projected onto the artwork, together with image correlation techniques for analysing the speckle intensity pattern. A protocol for optimal measurement was suggested, based on calibration curves for tuning the mean speckle size in the acquired intensity pattern. The system was validated in the analysis of detachments in an ancient painting model, using a short-pulse thermal stimulus to induce a surface deformation field and standard decorrelation algorithms for speckle pattern matching. The device is equipped with a compact thermal camera to prevent overheating during the stimulus phase. The developed system represents a valuable non-destructive tool for artwork diagnostics, allowing subsurface defects in paintings to be monitored in out-of-laboratory environments.

The moisture content of screed samples is an essential parameter in the construction industry, since the screed must dry to a certain level of moisture content before it is ready for covering. The current methods for measuring moisture content comprise the 'calcium carbide method', the 'Darr method', and electrical sensor systems. This paper introduces neutron radiography (NR) and neutron tomography (NT) as new, non-destructive techniques for analysing the drying characteristics of screed. Our NR analyses evaluate the results of the established methods while offering a much higher spatial resolution of 200 μm, thereby facilitating a two- and three-dimensional understanding of screed's drying behaviour. Because of NR's exceptionally high sensitivity to the total cross section of hydrogen, the precise moisture content of screed samples can be obtained, leading to new observations.
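As a rough illustration of how neutron transmission relates to moisture, the sketch below applies the Beer-Lambert law to a pair of radiographs; the cross-section value, image sizes, and counts are placeholder assumptions for illustration, not data or calibration from the study.

```python
import numpy as np

def water_thickness_from_transmission(moist, dry, sigma_water=3.5):
    """Estimate the equivalent water thickness (cm) along each beam path.

    Beer-Lambert: I_moist = I_dry * exp(-sigma_water * t_water), so
    t_water = -ln(I_moist / I_dry) / sigma_water.  sigma_water (1/cm) is a
    placeholder macroscopic cross section for water at thermal energies; a
    real analysis would calibrate against reference samples and correct for
    scattering and beam hardening.
    """
    transmission = np.clip(moist / dry, 1e-6, 1.0)  # avoid log of zero
    return -np.log(transmission) / sigma_water

# Synthetic example: a dry reference radiograph and a moist one in which the
# screed holds the equivalent of 0.2 mm of water along every beam path.
dry = np.full((200, 200), 1000.0)
moist = dry * np.exp(-3.5 * 0.02)
water_map = water_thickness_from_transmission(moist, dry)
print(round(float(water_map.mean()), 4))  # -> 0.02 (cm of water)
```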
This paper proposes a performance model for estimating the user time needed to transcribe small collections of handwritten documents using a keyword spotting system (KWS) that provides a number of possible transcriptions for each word image. The model assumes that only information obtained from a small training set is available, and it establishes the constraints that the performance measures must satisfy in order to reduce the transcription time with respect to the time required by human experts. The model is complemented with a procedure for computing its parameters and for estimating the reduction in the time needed to achieve a complete and error-free transcription of the documents.

Daltonisation refers to the recolouring of images so that details normally lost to colour-vision-deficient observers become visible. This comes at the cost of introducing artificial colours. In a previous work, we presented a gradient-domain colour image daltonisation method that outperformed previously known methods in both behavioural and psychometric experiments. In the present paper, we improve the method by (i) finding a good first estimate of the daltonised image, thus reducing the computational time significantly, and (ii) introducing local linear anisotropic diffusion, thus effectively removing the halo artefacts. The method uses a colour vision deficiency simulation algorithm as an ingredient, can therefore be applied to any colour vision deficiency, and can even be individualised if the exact individual colour vision is known.

Face-morphing operations allow the generation of digital faces that simultaneously carry the characteristics of two different subjects. It has been demonstrated that morphed faces strongly challenge face-verification systems, as they typically match two different identities. This poses serious security issues in machine-assisted border control applications and calls for techniques to automatically detect whether morphing operations have previously been applied to passport photos. While many proposed approaches analyze only the suspect passport photo, our work operates in a differential scenario, i.e., the passport photo is analyzed in conjunction with the probe image of the subject acquired at border control to verify that they correspond to the same identity. To this purpose, we analyze the locations of biologically meaningful facial landmarks identified in the two images, with the goal of capturing inconsistencies in the facial geometry introduced by the morphing process. We report the results of extensive experiments performed on images from various sources and under different experimental settings, showing that landmark locations detected through automated algorithms contain discriminative information for identifying pairs with morphed passport photos. The sensitivity of supervised classifiers to different compositions of the training and testing sets is also explored, together with the performance of different derived feature transformations.
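A minimal sketch of the differential landmark analysis described above might look like the following; the dlib 68-point detector, the normalisation by inter-ocular distance, and the feature definition are assumptions for illustration, not the authors' exact pipeline.

```python
import numpy as np
import dlib

# Standard dlib face detector and 68-point landmark model; the .dat path is a
# placeholder for the publicly available shape_predictor_68_face_landmarks.dat.
detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def landmarks(gray):
    """Return a (68, 2) array of landmark coordinates for the first detected face."""
    face = detector(gray, 1)[0]
    shape = predictor(gray, face)
    return np.array([[p.x, p.y] for p in shape.parts()], dtype=float)

def differential_features(passport_img, probe_img):
    """Per-landmark displacement between passport and probe landmarks,
    normalised by the passport inter-ocular distance.  A real pipeline would
    first align the two shapes (e.g. with a Procrustes fit)."""
    a, b = landmarks(passport_img), landmarks(probe_img)
    iod = np.linalg.norm(a[36] - a[45])  # outer eye corners in the 68-point scheme
    return np.linalg.norm(a - b, axis=1) / iod  # 68-dimensional feature vector

# The resulting vectors can then be fed to any supervised classifier (for
# example an SVM) trained on genuine versus morphed passport/probe pairs.
```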
In spectral-spatial classification of hyperspectral images, the performance of conventional morphological profiles (MPs), which use a sequence of structural elements (SEs) with predefined sizes and shapes, can be limited by the mismatch between those SEs and the sizes and shapes of real-world objects in the image. To overcome this limitation, this paper proposes object-guided morphological profiles (OMPs), which adopt multiresolution segmentation (MRS)-based objects as SEs for morphological closing and opening by geodesic reconstruction. Additionally, the ExtraTrees, bagging, adaptive boosting (AdaBoost), and MultiBoost ensemble versions of extremely randomized decision trees (ERDTs) are introduced and comparatively investigated for spectral-spatial classification of hyperspectral images. Two hyperspectral benchmark images are used to validate the proposed approaches in terms of classification accuracy. The experimental results confirm the effectiveness of the proposed spatial feature extractors and ensemble classifiers.

Most existing algorithms for three-dimensional tumor volume reconstruction from Positron Emission Tomography (PET) images rely on the segmentation of independent PET slices. To exploit cross-slice information, typically overlooked in these 2D implementations, I present an algorithm that performs the volume reconstruction directly in 3D by leveraging an active surface. The evolution of this surface segments the whole stack of slices simultaneously and can handle changes in topology. Furthermore, no artificial stop condition is required, as the active surface naturally converges to a stable topology. In addition, I include a machine learning component to enhance the accuracy of the segmentation process. The latter consists of a forcing term, based on classification results from a discriminant analysis algorithm, which is included directly in the mathematical formulation of the energy function driving surface evolution. It is worth noting that the t system for PET imaging segmentation.

This paper presents a unique approach to the dichotomy between useful and adverse variations of key-point descriptors, namely the identity and expression variations in the descriptor (feature) space. The descriptor variations are learned from training examples. Based on the labels of the training data, equivalence relations among the descriptors are established. Both types of descriptor variation are represented by a graph embedded in the descriptor manifold. Invariant recognition is then conducted as a graph search problem, using a heuristic graph search algorithm devised for recognition under this setup. The proposed approach was tested on the FRGC v2.0, Bosphorus, and 3D TEC datasets, and was shown to enhance recognition performance under expression variations by considerable margins.
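To make the graph-search formulation in the last abstract concrete, here is a toy sketch in which training descriptors are graph nodes, edges encode learned identity/expression equivalences, and recognition votes over the nodes reached within a small search budget; the breadth-first strategy and the majority vote are simplifications for illustration, not the authors' heuristic search.

```python
from collections import deque
import numpy as np

def recognise(probe, descriptors, labels, edges, budget=3):
    """Toy recognition-as-graph-search.

    descriptors : (N, D) array of training descriptors (graph nodes)
    labels      : length-N sequence of identity labels
    edges       : dict {node_index: iterable of neighbour indices} encoding
                  the learned identity / expression equivalence relations
    budget      : maximum number of equivalence edges to traverse

    The probe attaches to its nearest training descriptor, the graph is
    explored breadth-first up to `budget` hops, and the visited nodes vote
    for an identity label.
    """
    start = int(np.argmin(np.linalg.norm(descriptors - probe, axis=1)))
    queue, visited, votes = deque([(start, 0)]), {start}, {}
    while queue:
        node, depth = queue.popleft()
        votes[labels[node]] = votes.get(labels[node], 0) + 1
        if depth < budget:
            for nb in edges.get(node, ()):
                if nb not in visited:
                    visited.add(nb)
                    queue.append((nb, depth + 1))
    return max(votes, key=votes.get)

# Tiny example: four descriptors, two identities, expression edges within each.
desc = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
ids = ["alice", "alice", "bob", "bob"]
graph = {0: [1], 1: [0], 2: [3], 3: [2]}
print(recognise(np.array([0.05, 0.02]), desc, ids, graph))  # -> "alice"
```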

The purpose of this study was to develop an automated method for performing quality control (QC) tests on magnetic resonance imaging (MRI) systems, to investigate the effect of different definitions of the QC parameters and their sensitivity to variations in region of interest (ROI) positioning, and to validate the reliability of the automated method by comparison with results from manual evaluations.

Magnetic resonance images acquired for acceptance and routine QC tests on five MRI systems were selected. All QC tests were performed using the American College of Radiology (ACR) MRI accreditation phantom. The only selection criterion was that, within the same QC test, images from two identical sequential sequences should be available. The study focused on four QC parameters: percent signal ghosting (PSG), percent image uniformity (PIU), signal-to-noise ratio (SNR), and SNR uniformity (SNRU), whose values are calculated from the mean signal and the standard deviation of ROIs defined within the phantom image or in the background. The variability of manual ROI placement was emulated by the software using random variables that follow appropriate normal distributions.
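The four parameters are simple functions of ROI statistics. The sketch below uses one common set of ACR-style definitions and a two-acquisition difference-image SNR, with a Gaussian jitter of the ROI centre to emulate manual placement; the exact formulas, ROI geometry, and jitter magnitude used in the study may differ.

```python
import numpy as np

def roi_stats(image, center, size):
    """Mean and standard deviation of a square ROI centred at (row, col)."""
    r, c, h = int(round(center[0])), int(round(center[1])), size // 2
    patch = image[r - h:r + h, c - h:c + h]
    return float(patch.mean()), float(patch.std())

def piu(high_mean, low_mean):
    """Percent image uniformity: 100 * (1 - (high - low) / (high + low))."""
    return 100.0 * (1.0 - (high_mean - low_mean) / (high_mean + low_mean))

def psg(top, bottom, left, right, phantom_mean):
    """Percent signal ghosting from four background ROIs around the phantom."""
    return 100.0 * abs(((top + bottom) - (left + right)) / (2.0 * phantom_mean))

def snr_from_pair(img1, img2, center, size):
    """SNR from two identical sequential acquisitions: mean signal of the first
    image divided by the standard deviation of the difference image / sqrt(2)."""
    mean1, _ = roi_stats(img1, center, size)
    _, sd_diff = roi_stats(img1 - img2, center, size)
    return mean1 * np.sqrt(2.0) / sd_diff

# Emulated manual placement: jitter the nominal ROI centre with a normal
# distribution (the 2-pixel standard deviation here is only a placeholder).
nominal = np.array([128.0, 128.0])
jittered = nominal + np.random.normal(scale=2.0, size=2)
```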

Twenty-one paired sequences were employed. The automated test results for PIU were in good agreement with the manual results. However, the PSG values were found to vary depending on the selection of ROIs with respect to the phantom. The values of SNR and SNRU also varied significantly, depending on which two of the four standard rectangular ROIs were combined. Furthermore, the methodology used for the SNR and SNRU calculation also had a significant effect on the results.

The automated method standardizes the position of ROIs with respect to the ACR phantom image and allows for reproducible QC results.


Article authors: Holbrookhelms0760 (Aggerholm Terry)