Dehnlamont2502

From Iurium Wiki

Revision as of 21:45, 25 October 2024 by Dehnlamont2502 (talk | contribs) (New page created with the text „Phase aberration is widely considered a major source of image degradation in medical pulse-echo ultrasound. Traditionally, near-field phase aberration corr…“)

Phase aberration is widely considered a major source of image degradation in medical pulse-echo ultrasound. Traditionally, near-field phase aberration correction techniques are unable to account for distributed aberrations due to a spatially varying speed of sound in the medium, while most distributed aberration correction techniques require point-like sources and are impractical for clinical applications where diffuse scattering is dominant. Here, we present two distributed aberration correction techniques that utilize sound speed estimates from a tomographic sound speed estimator building on our previous work with diffuse scattering in layered media. We first characterize the performance of our sound speed estimator and distributed aberration correction techniques in simulations where the scattering in the medium is known a priori. Phantom and in vivo experiments further demonstrate the capabilities of the sound speed estimator and the aberration correction techniques. In phantom experiments, point target resolution improves from 0.58 to 0.26 and 0.27 mm, and lesion contrast improves from 17.7 to 23.5 and 25.9 dB, as a result of distributed aberration correction using the eikonal and wavefield correlation techniques, respectively.

In the field of ultrasonic nondestructive testing (NDT), the total focusing method (TFM) and its derivatives are now commercially available on portable devices and are becoming more popular within the NDT community. However, their implementation requires the collection of a very large amount of data, with full matrix capture (FMC) as the worst-case scenario. Analyzing all the data also requires significant processing power; consequently, there is interest in 1) reducing the storage capacity required by imaging algorithms, such as delay-and-sum (DAS) imaging, and 2) allowing the transmission and postprocessing of inspection data remotely.
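As a concrete reference point for the DAS imaging mentioned above, the following is a minimal NumPy sketch of TFM-style delay-and-sum focusing over FMC data. The function name and array layout are illustrative, and a constant speed of sound is assumed, which is precisely the assumption the distributed aberration correction work above relaxes:

```python
import numpy as np

def das_tfm(fmc, elems, pixels, c, fs):
    """Total focusing method: delay-and-sum over full-matrix-capture data.

    fmc    : (n_el, n_el, n_samp) A-scans, fmc[i, j] = transmit i, receive j
    elems  : (n_el, 2) element (x, z) coordinates in metres
    pixels : (n_pix, 2) image point (x, z) coordinates in metres
    c      : assumed homogeneous speed of sound, m/s
    fs     : sampling frequency, Hz
    """
    n_el, _, n_samp = fmc.shape
    image = np.zeros(len(pixels))
    for p, (px, pz) in enumerate(pixels):
        # one-way travel time from every element to this pixel
        t = np.hypot(elems[:, 0] - px, elems[:, 1] - pz) / c        # (n_el,)
        # round-trip delay for every tx/rx pair, as a sample index
        idx = np.round((t[:, None] + t[None, :]) * fs).astype(int)  # (n_el, n_el)
        valid = idx < n_samp
        tx, rx = np.nonzero(valid)
        image[p] = fmc[tx, rx, idx[valid]].sum()
    return image
```

Each pixel sums every tx/rx pair's waveform at the pair's round-trip delay; pairs whose delay falls outside the recorded window are simply skipped.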
In this study, a different implementation of the TFM algorithm is used, based on the vector coherence factor (VCF), which is used as an image itself. This method, also generally known as phase coherence imaging, presents certain advantages, such as better sensitivity to diffracting geometries, consistency of defect restitution among different views, and an amplitude-independent rendering. It is shown that the proposed lightweight acquisition process, which relies on binary signals, allows a reduction of the data throughput by a factor of up to 47. This throughput reduction is achieved while still producing results very similar to phase coherence imaging based on the instantaneous phase derived from the Hilbert transform of the full waveform. In an era of increasing wireless network speed and cloud computing, these results open interesting perspectives for reducing inspection hardware costs and for remote postprocessing.

The excitation of surface acoustic waves (SAWs) on the surface of a ferroelectric film [barium strontium titanate (BST)] located on a dielectric substrate (silicon) was studied theoretically. We found that the most effective SAW excitation takes place when spontaneous polarization occurs in the film plane and the wave propagates along a direction close to that of the spontaneous polarization vector. Based on a nonlinear model of phase transitions in solid BST solutions, the dependence of the material constants of the piezo-effect equations on the misfit strain at a fixed concentration was obtained numerically. The effect of various misfit strains on SAW characteristics was studied for a film located on single-crystal silicon. It was shown that the effectiveness of SAW excitation increases as the misfit strain approaches the phase-transition boundary.

We consider the general problem known as job shop scheduling, in which multiple jobs consist of sequential operations that need to be executed or served by appropriate machines having limited capacities.
For example, train journeys (jobs) consist of moves and stops (operations) to be served by rail tracks and stations (machines). A schedule is an assignment of the job operations to machines and to the times when they will be executed. The developers of computational methods for job scheduling need tools enabling them to explore how their methods work. At a high level of generality, we define the system of pertinent exploration tasks and a combination of visualizations capable of supporting the tasks. We provide general descriptions of the purposes, contents, visual encoding, properties, and interactive facilities of the visualizations and illustrate them with images from an example implementation in air traffic management. We justify the design of the visualizations based on the tasks, on principles of creating visualizations for pattern discovery, and on scalability requirements. The outcomes of our research are sufficiently general to be of use in a variety of applications.

Semi-supervised video object segmentation (VOS) is the task of predicting the segment of a target object in a video when a ground-truth segmentation mask for the target is given in the first frame. Recently, space-time memory networks (STM) have received significant attention as a promising approach for semi-supervised VOS. However, an important point has been overlooked in applying STM to VOS: the solution (= STM) is non-local, but the problem (= VOS) is predominantly local. To solve this mismatch between STM and VOS, we propose new VOS networks called kernelized memory network (KMN) and KMN with multiple kernels (KMN-MK). The runtimes of KMN and KMN-MK on the DAVIS 2016 validation set are 0.12 and 0.13 seconds per frame, respectively, and the two networks have similar computation times to STM.
This paper is an extended version of our preliminary work, which was presented at ECCV 2020.

Most unsupervised person re-identification (Re-ID) works produce pseudo-labels by measuring feature similarity without considering the domain discrepancy among cameras, leading to degraded accuracy in pseudo-label computation. This paper addresses this challenge by decomposing the similarity computation into two stages, namely intra-domain and inter-domain computation. The intra-domain similarity directly leverages CNN features learned within each camera and hence generates pseudo-labels on different cameras to train the Re-ID model in a multi-branch network. The inter-domain similarity considers the classification scores of each sample on different cameras as a new feature vector. This new feature effectively alleviates the domain discrepancy among cameras and generates more reliable pseudo-labels. We further propose Instance and Camera Style Normalization (ICSN) to enhance robustness to domain discrepancy. ICSN alleviates intra-camera variations by adaptively learning a combination of instance and batch normalization. ICSN also boosts robustness to inter-camera variations through transform normalization, which effectively converts the original style of features into target styles. The proposed method achieves competitive performance on multiple datasets under fully unsupervised, intra-camera supervised, and domain generalization settings; e.g., it achieves a rank-1 accuracy of 64.4% on the MSMT17 dataset, outperforming recent unsupervised methods by more than 20%.

Compositional Zero-Shot Learning (CZSL) aims to recognize unseen compositions of state and object visual primitives seen during training. A problem with standard CZSL is the assumption of knowing which unseen compositions will be available at test time.
In this work, we overcome this assumption by operating in the open-world setting, where no limit is imposed on the compositional space at test time and the search space contains a large number of unseen compositions. To address this problem, we propose a new approach, Compositional Cosine Graph Embedding (Co-CGE), based on two principles. First, Co-CGE models the dependency between states, objects, and their compositions through a graph convolutional neural network. The graph propagates information from seen to unseen concepts, improving their representations. Second, since not all unseen compositions are equally feasible, and less feasible ones may damage the learned representations, Co-CGE estimates a feasibility score for each unseen composition, using the scores as margins in a cosine similarity-based loss and as weights in the adjacency matrix of the graphs. Experiments show that our approach achieves state-of-the-art performance in standard CZSL while outperforming previous methods in the open-world scenario.
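The idea of using feasibility scores as margins in a cosine similarity-based loss can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name, the temperature value, and the mapping from feasibility to margin (`1 - feasibility`) are all assumptions made for the example.

```python
import numpy as np

def feasibility_margin_scores(img_emb, comp_embs, feasibility, temperature=0.05):
    """Cosine scores between an image and all (state, object) compositions,
    with per-composition feasibility acting as a margin.

    img_emb     : (d,) image feature
    comp_embs   : (n_comp, d) composition embeddings (e.g. GCN outputs)
    feasibility : (n_comp,) scores in [0, 1]; low-feasibility compositions
                  receive a larger margin and are thus harder to predict
    """
    img = img_emb / np.linalg.norm(img_emb)
    comps = comp_embs / np.linalg.norm(comp_embs, axis=1, keepdims=True)
    cos = comps @ img                         # cosine similarity per composition
    margins = 1.0 - np.asarray(feasibility)   # hypothetical feasibility-to-margin map
    return (cos - margins) / temperature
```

The returned scores would feed a standard cross-entropy loss; penalizing low-feasibility compositions at this stage is what keeps implausible pairs from dominating the open-world search space.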

Energy expenditure (EE) estimation plays an important role in objectively evaluating physical activity and its impact on human health. EE during activity can be affected by many factors, including activity intensity, individual physical and physiological characteristics, and the environment. However, current studies use only very limited information, such as heart rate and step count, to estimate EE, which leads to low estimation accuracy.

In this study, we proposed a deep multi-branch two-stage regression network (DMTRN) to effectively fuse a variety of related information, including motion information, physiological characteristics, and human physical information, which significantly improved the EE estimation accuracy. The proposed DMTRN consists of two main modules: a multi-branch convolutional neural network module, which extracts multi-scale context features from inertial measurement unit (IMU) and electrocardiogram (ECG) data, and a two-stage regression module, which aggregates the extracted multi-scale context features containing the physiological and motion information with the anthropometric features to accurately estimate EE.

Experiments performed on 33 participants show that our proposed method is more accurate: the average root mean square error (RMSE) is reduced by 22.8% compared with previous works.
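For reference, the RMSE metric used in this evaluation is the square root of the mean squared difference between measured and estimated EE, a minimal sketch:

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root mean square error between measured and estimated values."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))
```

A 22.8% reduction here is the relative change (rmse_old - rmse_new) / rmse_old, i.e. the new model's error is about 0.772 times the baseline's.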

The EE estimation accuracy was improved by the proposed DMTRN model, which combines a well-designed network structure with a new input signal, ECG.

This study verified that ECG was much more effective than HR for EE estimation and cast light on EE estimation using the deep learning method.

Computational fluid dynamics (CFD) is used to assist in designing artificial valves and planning procedures, focusing on local flow features. However, assessing the impact on overall cardiovascular function or predicting longer-term outcomes may require more comprehensive whole-heart CFD models. Fitting such models to patient data requires numerous computationally expensive simulations and depends on specific clinical measurements to constrain model parameters, hampering clinical adoption. Surrogate models can help to accelerate the fitting process while accounting for the added uncertainty. We create a validated patient-specific four-chamber heart CFD model based on the Navier-Stokes-Brinkman (NSB) equations and test Gaussian process emulators (GPEs) as a surrogate model for performing a variance-based global sensitivity analysis (GSA). The GSA identified preload as the dominant driver of flow on both the right and left sides of the heart. Left-right differences were seen in the vascular outflow resistances, with pulmonary artery resistance having a much larger impact on flow than aortic resistance.
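The variance-based GSA used to rank parameters such as preload and outflow resistance can be illustrated with a generic pick-freeze estimator of a first-order Sobol index. This is a textbook Monte Carlo sketch on independent U(0, 1) inputs, not the paper's emulator-based analysis; the function name and sampling scheme are assumptions for the example.

```python
import numpy as np

def first_order_sobol(model, n_inputs, i, n_samples=100_000, seed=0):
    """Pick-freeze Monte Carlo estimate of the first-order Sobol index of
    input i, assuming independent U(0, 1) inputs.

    model : vectorized function mapping an (n, n_inputs) array to (n,) outputs
    """
    rng = np.random.default_rng(seed)
    a = rng.random((n_samples, n_inputs))
    b = rng.random((n_samples, n_inputs))
    b[:, i] = a[:, i]                 # freeze input i, resample all the others
    ya, yb = model(a), model(b)
    # For independent inputs, Cov(ya, yb) = Var(E[y | x_i]),
    # so the ratio below is the fraction of output variance explained by x_i.
    return float(np.cov(ya, yb)[0, 1] / np.var(ya, ddof=1))
```

A GPE-based workflow would substitute the trained emulator for `model`, making the thousands of evaluations needed here affordable despite the cost of the underlying CFD solver.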

Article authors: Dehnlamont2502 (Schack Hopper)