Nedergaardstiles5609



This enables us to use simple uniform sampling instead of costly importance sampling based on the BSSRDF. The experimental results show that our method can accurately estimate the error. In addition, our sampling method achieves better estimation accuracy in equal-time comparisons with previous methods.

Multi-source domain adaptation (MSDA) aims to transfer knowledge from multiple source domains to one target domain. Inspired by single-source domain adaptation, existing methods solve MSDA by aligning the data distributions between the target domain and each source domain. However, aligning the target domain with a dissimilar source domain harms representation learning. An intuitive remedy is to use an attention mechanism to enhance the positive effects of similar domains and suppress the negative effects of dissimilar ones. We therefore propose Attention-Based Multi-Source Domain Adaptation (ABMSDA), which exploits domain correlations to alleviate the effects of dissimilar domains. To obtain the correlations between source and target domains, ABMSDA first trains a domain recognition model that estimates the probability that a target image belongs to each source domain. Based on these correlations, a Weighted Moment Distance (WMD) is proposed to pay more attention to source domains with higher similarity. Furthermore, an Attentive Classification Loss (ACL) is developed to encourage the feature extractor to generate aligned and discriminative visual representations. Evaluations on two benchmarks demonstrate the effectiveness of the proposed model, e.g., an average improvement of 6.1% on the challenging DomainNet dataset.

Wavelet denoising is a classical and effective approach for reducing noise in images and signals. Proposed in 1994, the approach rectifies the coefficients of a noisy image, in the transform domain, using a set of shrinkage functions (SFs). A plethora of papers deals with the optimal shape of the SFs and the transform used. For example, it is widely known that applying SFs in a redundant basis improves the results. It is far less known, however, that the shape of the SFs should change when the transform used is redundant. In this paper, we present a complete picture of the interrelations between the transform used, the optimal shrinkage functions, and the domains in which they are optimized. We suggest three schemes for optimizing the SFs and provide bounds on the remaining noise, in each scheme, with respect to the other alternatives. In particular, we show that for subband optimization, where each SF is optimized independently for a particular band, optimizing the SFs in the spatial domain is always at least as good as optimizing them in the transform domain. Furthermore, for redundant bases, we provide the expected denoising gain that can be achieved, relative to the unitary basis, as a function of the redundancy rate.
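
A minimal sketch of subband shrinkage, assuming an orthogonal DWT, i.i.d. Gaussian noise, and the universal soft threshold as a stand-in for the paper's optimized SFs (the function name and parameter choices below are ours, not the authors'):

```python
# Subband soft-thresholding sketch with PyWavelets (illustrative only,
# not the paper's optimized SFs): each detail subband is shrunk
# independently, i.e. "subband optimization" with a fixed SF shape.
import numpy as np
import pywt

def denoise_soft(img, wavelet="db4", levels=3):
    coeffs = pywt.wavedec2(img, wavelet, level=levels)
    # Robust noise estimate from the finest diagonal subband
    # (assumption: i.i.d. Gaussian noise), sigma ~ MAD / 0.6745.
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    t = sigma * np.sqrt(2.0 * np.log(img.size))  # universal threshold
    out = [coeffs[0]]  # keep the approximation band untouched
    for (cH, cV, cD) in coeffs[1:]:
        out.append(tuple(pywt.threshold(c, t, mode="soft")
                         for c in (cH, cV, cD)))
    return pywt.waverec2(out, wavelet)

noisy = np.random.randn(128, 128)  # stand-in noisy image
denoised = denoise_soft(noisy)
```

In the paper's terms this is transform-domain shrinkage with a fixed SF shape under a unitary basis; the redundant case (e.g., an undecimated transform) is exactly where the paper argues the SF shape must change.
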
Defocus blur detection (DBD), which has been widely applied in various fields, aims to detect the out-of-focus or in-focus pixels in a single image. Although deep learning based methods for DBD have outperformed hand-crafted-feature based methods, their performance still falls short of requirements. In this paper, a novel network is established for DBD. Unlike existing methods, which learn only the projection from the in-focus part to the ground truth, both in-focus and out-of-focus pixels, which are completely and symmetrically complementary, are taken into account. Specifically, two symmetric branches are designed to jointly estimate the probability of focus and defocus pixels, respectively. Owing to their complementary constraint, each layer in one branch is modulated by an attention map obtained from the other branch, effectively learning detailed information that might be ignored by a single branch. The feature maps from the two branches are then passed through a fusion block to produce a two-channel output measured by a complementary loss. Additionally, instead of estimating only one binary map from a specific layer, each layer is encouraged to estimate the ground truth to guide the binary-map estimation in its linked shallower layer, followed by a top-to-bottom combination strategy that gradually exploits global and local information. Experimental results on publicly released datasets demonstrate that the proposed method remarkably outperforms state-of-the-art algorithms.

Despite the great progress made on single-image deraining, it is still challenging for existing models to produce satisfactory results directly, and a single or multiple refinement stages are often required to gradually improve quality. In this paper, however, we demonstrate that existing image-level refinement with a stage-independent learning design is problematic, with the side effect of over- or under-deraining. To resolve this issue, we are the first to propose carrying out refinement on the unsatisfactory features themselves, and we introduce a novel attentive feature refinement (AFR) module. Specifically, AFR is designed as a two-branch network for simultaneous rain-distribution-aware attention-map learning and attention-guided, hierarchy-preserving feature refinement. Guided by task-specific attention, coarse features are progressively refined to better model diversified rainy effects. By using a separable convolution as the basic component, the AFR module introduces little computational overhead and can be readily integrated into most rainy-to-clean image translation networks to achieve better deraining results. By incorporating a series of AFR modules into a general encoder-decoder network, AFR-Net is constructed for deraining, and it achieves new state-of-the-art results on both synthetic and real images. Furthermore, using AFR-Net as a teacher model, we explore knowledge distillation to learn a student model that also achieves state-of-the-art results but with a much faster inference speed (it takes only 0.08 seconds to process a 512×512 rainy image). Code and pre-trained models are available at https://github.com/RobinCSIRO/AFR-Net.
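
A minimal PyTorch sketch of the two-branch idea as described above, with depthwise-separable convolutions as the basic component; the module name, channel sizes, and the residual gating form are our assumptions, not the released implementation (see the repository linked above):

```python
# Two-branch feature-refinement sketch (our reading of AFR, not the
# authors' code): an attention branch predicts where rain corrupts the
# features, and gates a lightweight refinement branch.
import torch
import torch.nn as nn

class SeparableConv(nn.Module):
    """Depthwise 3x3 followed by pointwise 1x1: cheap basic component."""
    def __init__(self, ch):
        super().__init__()
        self.depthwise = nn.Conv2d(ch, ch, 3, padding=1, groups=ch)
        self.pointwise = nn.Conv2d(ch, ch, 1)
    def forward(self, x):
        return self.pointwise(self.depthwise(x))

class AFRSketch(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.attn = nn.Sequential(SeparableConv(ch), nn.Sigmoid())        # rain-aware map
        self.refine = nn.Sequential(SeparableConv(ch), nn.ReLU(inplace=True))
    def forward(self, feat):
        a = self.attn(feat)                  # where to refine
        return feat + a * self.refine(feat)  # residual, hierarchy-preserving update

x = torch.randn(1, 32, 64, 64)
print(AFRSketch(32)(x).shape)  # torch.Size([1, 32, 64, 64])
```

The residual form keeps the input feature hierarchy intact, which is one plausible way to read "hierarchy-preserving" refinement.
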
The modeling of source distributions of finite spatial extent in ultrasound and medical imaging applications is a problem of longstanding interest. In time-domain methods, such as finite-difference time-domain or pseudospectral approaches, one requirement is the representation of such distributions over a grid, normally Cartesian. Various artefacts, including staircasing errors, can arise. In this short contribution, the problem of representing a distribution over a grid is framed as an optimisation problem in the Fourier domain over a preselected set of grid points, thus maintaining control over computational cost and allowing the optimisation to be fine-tuned to the wavenumber range of interest for a particular numerical method. Numerical results are presented for the important special case of the spherical cap, or bowl, source.
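
A hedged 1-D illustration of the stated approach, assuming a uniform line source as a stand-in for the spherical cap; the grid, band limit, and source parameters are illustrative only:

```python
# 1-D sketch of the grid-representation idea (illustrative, not the
# paper's scheme): choose weights w_j on preselected grid points x_j so
# that sum_j w_j * exp(-i k x_j) matches the exact Fourier response of
# a continuous source over the wavenumber band of interest.
import numpy as np

h = 0.01                                    # grid spacing (assumed)
x = np.arange(-10, 11) * h                  # preselected grid points near the source
a, x0 = 0.035, 0.0                          # uniform line source: half-width, centre
k = np.linspace(0.0, 0.5 * np.pi / h, 200)  # band limit: half the grid Nyquist

S = 2 * a * np.sinc(k * a / np.pi) * np.exp(-1j * k * x0)  # exact response
A = np.exp(-1j * np.outer(k, x))                           # grid response matrix

# Least squares over the band; stack real/imag parts to force real weights.
M = np.vstack([A.real, A.imag])
b = np.concatenate([S.real, S.imag])
w, *_ = np.linalg.lstsq(M, b, rcond=None)

print(f"max band-limited mismatch: {np.abs(A @ w - S).max():.2e}")
```

The band limit caps the wavenumbers the fit must honour, which is what keeps the preselected-grid-point optimisation cheap and lets it be tuned to a particular numerical method.
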
A solidly mounted resonator on a flexible polyimide (PI) substrate with a high effective coupling coefficient (Kt2) of 14.06% is reported in this paper. The high Kt2 results from the single-crystalline LiNbO3 (LN) film and the [SiO2/Mo]3 Bragg reflector. The quality of the LN film, fabricated by the crystal-ion-slicing (CIS) technique with a benzocyclobutene (BCB) bonding layer, is close to that of bulk crystalline LN. The interfaces of the Al/LN/Al/[SiO2/Mo]3 Bragg reflector/BCB/PI multilayer are sharp, and the thickness of each layer is consistent with its design value. The resonant frequency and the Kt2 remain stable when the device is bent to different radii. These results demonstrate a feasible approach to realizing RF filters on flexible polymer substrates, indispensable components for building integrated, multi-functional wireless flexible electronic systems.

Superharmonic imaging with dual-frequency systems uses conventional low-frequency ultrasound transducers on transmit and high-frequency transducers on receive to detect higher-order harmonic signals from microbubble contrast agents, enabling high-contrast imaging while suppressing clutter from background tissue. Current dual-frequency systems for superharmonic imaging have been used to visualize tumor microvasculature, with single-element transducers for each of the low- and high-frequency components. However, the useful field of view is limited by the fixed focus of single-element transducers, while image frame rates are limited by the mechanical translation of the transducers. In this paper, we introduce an array-based dual-frequency transducer, with low-frequency and high-frequency arrays integrated within the probe head, to overcome the limitations of single-channel dual-frequency probes. The purpose of this study is to evaluate the line-by-line high-frequency imaging and superharmonic imaging capabilities of the array-based dual-frequency probe for acoustic angiography applications in vitro and in vivo. We report center frequencies of 1.86 MHz and 20.3 MHz with -6 dB bandwidths of 1.2 MHz (1.2 to 2.4 MHz) and 14.5 MHz (13.3 to 27.8 MHz) for the low- and high-frequency arrays, respectively. With the proposed beamforming schemes, the excitation pressure ranged from 336 kPa to 458 kPa at the azimuthal foci, sufficient to induce nonlinear scattering from microbubble contrast agents. Specifically, in vitro contrast channel phantom imaging and in vivo xenograft mouse tumor imaging with superharmonic imaging showed contrast-to-tissue ratio improvements of 17.7 dB and 16.2 dB, respectively, compared with line-by-line micro-ultrasound B-mode imaging.

Digital breast tomosynthesis (DBT) is a quasi-three-dimensional imaging modality that can reduce the false negatives and false positives in mass lesion detection caused by overlapping breast tissue in conventional two-dimensional (2D) mammography. The patient dose of a DBT scan is similar to that of a single 2D mammogram, while the acquisition of each projection view adds detector readout noise. The noise propagates into the reconstructed DBT volume, possibly obscuring subtle signs of breast cancer such as microcalcifications (MCs). This study developed a deep convolutional neural network (DCNN) framework for denoising DBT images, with a focus on improving the conspicuity of MCs while preserving the ill-defined margins of spiculated masses and normal tissue textures. We trained the DCNN using a weighted combination of mean-squared-error (MSE) loss and adversarial loss. We configured a dedicated x-ray imaging simulator in combination with digital breast phantoms to generate realistic in silico DBT data for training, and compared DCNN training on digital phantoms with training on real physical phantoms. The proposed denoising method improved the contrast-to-noise ratio (CNR) and detectability index (d') of the simulated MCs in the validation phantom DBTs, and these performance measures improved with increasing training target dose and training sample size. Promising denoising results were also observed when transferring the digital-phantom-trained denoiser to DBT volumes reconstructed with different techniques, and on a small independent test set of human-subject DBT images.
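
A minimal PyTorch sketch of the stated training objective, a weighted combination of MSE loss and adversarial loss; the weight value, discriminator interface, and non-saturating GAN form are placeholder assumptions, not the paper's configuration:

```python
# Sketch of the generator-side objective (placeholders, not the paper's
# setup):  L_total = L_MSE(denoised, target) + lambda_adv * L_adv(denoised)
import torch
import torch.nn.functional as F

def denoiser_loss(denoised, target, disc, lambda_adv=1e-3):
    mse = F.mse_loss(denoised, target)
    # Non-saturating adversarial term: push the discriminator `disc`
    # (any network mapping an image to a logit) to call the denoised
    # DBT slice "real" (label 1).
    logits = disc(denoised)
    adv = F.binary_cross_entropy_with_logits(logits, torch.ones_like(logits))
    return mse + lambda_adv * adv
```

The MSE term anchors the output to the low-noise target while the adversarial term discourages the over-smoothing that MSE alone tends to produce, consistent with the stated goal of preserving mass margins and tissue texture.
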

Article authors: Nedergaardstiles5609 (Lindsey Burt)