Templefeddersen8767



Our vectorization approach reduces memory consumption by orders of magnitude, enabling real-time visualization performance. Different types of interactive visualizations are demonstrated to show the effectiveness of our technique, which could help further research on quantum turbulence.

Human-in-the-loop topic modeling allows users to explore and steer the process to produce better-quality topics that align with their needs. When integrated into visual analytic systems, many existing automated topic modeling algorithms are given interactive parameters that allow users to tune or adjust them. However, this approach has limitations when the algorithms cannot easily adapt to changes, and it is difficult to realize interactivity closely supported by the underlying algorithms. Instead, we emphasize the concept of tight integration, which advocates co-developing interactive algorithms and interactive visual analytic systems in parallel to allow flexibility and scalability. In this paper, we describe design goals for efficiently and effectively executing this concept of tight integration among computation, visualization, and interaction for hierarchical topic modeling of text data. We propose computational base operations for interactive tasks to achieve the design goals. To instantiate our concept, we present ArchiText, a prototype system for interactive hierarchical topic modeling, which offers fast, flexible, and algorithmically valid analysis via tight integration. Utilizing interactive hierarchical topic modeling, our technique lets users generate, explore, and flexibly steer hierarchical topics to discover more informed topics and their document memberships.

In this paper, we investigate the importance of phase for texture discrimination and similarity estimation tasks. We first use two psychophysical experiments to investigate the relative importance of the phase and magnitude spectra for human texture discrimination and similarity estimation. The results show that phase is more important to humans for both tasks. We further examine the ability of 51 computational feature sets to perform these two tasks. In contrast with the psychophysical experiments, the magnitude data are observed to be more important to these computational feature sets than the phase data. We hypothesise that this inconsistency is due to the difference between the abilities of humans and of the computational feature sets to utilise phase data. This motivates us to investigate the application of the 51 feature sets to phase-only images in addition to their use on the original data set. This investigation is extended to exploit Convolutional Neural Network (CNN) features. The results show that our feature fusion scheme improves the average performance of those feature sets for estimating humans' perceptual texture similarity. The superior performance should be attributed to the importance of phase to texture similarity.
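As a rough illustration of the phase-only stimuli mentioned above, the minimal sketch below keeps the Fourier phase of a texture while flattening its magnitude spectrum (and, for comparison, builds a magnitude-only counterpart). It is a generic construction, not taken from the paper: the function names, the constant magnitude value, and the magnitude-only variant are all illustrative assumptions.

```python
import numpy as np

def phase_only(image, magnitude_value=1.0):
    """Keep the Fourier phase of a grayscale image and replace its magnitude
    spectrum with a constant, then transform back to the spatial domain."""
    spectrum = np.fft.fft2(image.astype(float))
    phase = np.angle(spectrum)                         # phase spectrum
    flat_magnitude = np.full_like(phase, magnitude_value)
    return np.real(np.fft.ifft2(flat_magnitude * np.exp(1j * phase)))

def magnitude_only(image):
    """Complementary stimulus: keep the magnitude spectrum, discard the phase."""
    spectrum = np.fft.fft2(image.astype(float))
    return np.real(np.fft.ifft2(np.abs(spectrum)))

# Illustrative usage with a random placeholder texture:
# texture = np.random.rand(256, 256)
# phase_image = phase_only(texture)
# magnitude_image = magnitude_only(texture)
```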
Edge detection is one of the most fundamental operations in image analysis and computer vision, serving as a critical preprocessing step for high-level tasks. It is difficult to give a generic threshold that works well on all images, as image contents differ entirely. This paper presents an adaptive, robust, and effective edge detector for real-time applications. According to the two-dimensional entropy, images can be classified into three groups, each assigned a reference percentage value based on edge proportion statistics. By comparison with adjacent points along the gradient direction, anchor points, which have a high probability of being edge pixels, were extracted. Taking the segment direction into account, these points were then joined into different edge segments, each of which was a clean, contiguous, one-pixel-wide chain of pixels. Experimental results indicate that the proposed edge detector outperforms traditional edge-following methods in terms of detection accuracy. Moreover, the detection results can be used as input for post-processing applications in real time.

Obtained by a wide-band radar system, the high-resolution range profile (HRRP) is the projection of the scatterers of a target onto the radar line of sight (LOS). HRRP reconstruction is unavoidable for inverse synthetic aperture radar (ISAR) imaging and is of particular use for target recognition, especially in cases where an ISAR image of the target cannot be obtained. For a high-speed moving target, however, the HRRP is stretched by a high-order phase error. To obtain a well-focused HRRP, the phase error induced by the target velocity should be compensated, using either a measured or an estimated target velocity. Noting that traditional velocity estimation and HRRP reconstruction algorithms become invalid in the case of under-sampled data, a novel HRRP reconstruction method for high-speed targets with under-sampled data is proposed. The Laplacian scale mixture (LSM) is used as the sparse prior of the HRRP, and variational Bayesian inference is utilized to derive its posterior, so as to reconstruct it with high resolution from the under-sampled data. Additionally, during the reconstruction of the HRRP, the target velocity is estimated via a joint constraint of entropy minimization and HRRP sparseness, so that the high-order phase error introduced by the target velocity can be compensated and the HRRP concentrated. Experimental results based on both simulated and measured data validate the effectiveness of the proposed Bayesian HRRP reconstruction algorithm.

Semantic segmentation is a key step in scene understanding for autonomous driving. Although deep learning has significantly improved segmentation accuracy, current high-quality models such as PSPNet and DeepLabV3 are inefficient given their complex architectures and reliance on multi-scale inputs, which makes them difficult to apply to real-time or practical applications. On the other hand, existing real-time methods cannot yet produce satisfactory results on small objects such as traffic lights, which are imperative to safe autonomous driving. In this paper, we improve the performance of real-time semantic segmentation from two perspectives, methodology and data. Specifically, we propose a real-time segmentation model coined Narrow Deep Network (NDNet) and build a synthetic dataset by inserting additional small objects into the training images. The proposed method achieves 65.7% mean intersection over union (mIoU) on the Cityscapes test set with only 8.4G floating-point operations (FLOPs) on 1024×2048 inputs. Furthermore, by re-training the existing PSPNet and DeepLabV3 models on our synthetic dataset, we obtained an average 2% mIoU improvement on small objects.
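The synthetic-data idea above amounts to copy-pasting small object instances into existing training images and updating their label maps. The snippet below is a minimal sketch of that step under assumed inputs (an object crop with a binary mask and a dense segmentation label map); the actual dataset construction rules for placement, blending, and class balance are not given here, so every detail is illustrative.

```python
import numpy as np

def paste_small_object(image, labels, obj_patch, obj_mask, obj_class, rng=None):
    """Paste one small object crop into a training image at a random location
    and update the segmentation label map accordingly.

    image:     H x W x 3 uint8 array        labels:   H x W int array
    obj_patch: h x w x 3 crop of the object obj_mask: h x w boolean mask
    Assumes the crop is smaller than the image."""
    rng = rng or np.random.default_rng()
    H, W = labels.shape
    h, w = obj_mask.shape
    top = rng.integers(0, H - h + 1)        # random placement
    left = rng.integers(0, W - w + 1)

    region_img = image[top:top + h, left:left + w]
    region_lab = labels[top:top + h, left:left + w]
    region_img[obj_mask] = obj_patch[obj_mask]   # copy object pixels
    region_lab[obj_mask] = obj_class             # update the ground truth
    return image, labels
```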
In recent years, hashing methods have proved to be effective and efficient for large-scale Web media search. However, existing general hashing methods have limited discriminative power for describing fine-grained objects that share a similar overall appearance but differ in subtle details. To solve this problem, we introduce, for the first time, the attention mechanism into the learning of fine-grained hashing codes. Specifically, we propose a novel deep hashing model, named deep saliency hashing (DSaH), which automatically mines salient regions and learns semantic-preserving hashing codes simultaneously. DSaH is a two-step end-to-end model consisting of an attention network and a hashing network. Our loss function contains three basic components: the semantic loss, the saliency loss, and the quantization loss. As the core of DSaH, the saliency loss guides the attention network to mine discriminative regions from pairs of images. We conduct extensive experiments on both fine-grained and general retrieval datasets for performance evaluation. Experimental results on fine-grained datasets, including Oxford Flowers, Stanford Dogs, and CUB Birds, demonstrate that DSaH performs best for the fine-grained retrieval task and beats the strongest competitor (DTQ) by approximately 10% on both Stanford Dogs and CUB Birds. DSaH is also comparable to several state-of-the-art hashing methods on CIFAR-10 and NUS-WIDE.

Mode-coupled vibrations in a UHF ZnO thin-film bulk acoustic resonator (FBAR) operating in the thickness-extensional (TE) mode are studied by employing weak boundary conditions (WBCs) constructed from Saint-Venant's principle and the mixed variational principle of piezoelectric theory. The frequency spectra describing the lateral-size dependence of the mode couplings between the main (TE) mode and undesirable eigenmodes for clamped lateral edges are compared with the existing frequency spectra for free lateral edges to illustrate the boundary influence. The displacement and stress variations over the FBAR volume are also presented to give an intuitive understanding of the differences between the frequency spectra of these two kinds of lateral edges, and we then discuss how to select favorable lateral sizes to weaken the mounting effect. The frequency spectra predicted by our approximate weak boundary conditions also agree well with those predicted by the finite element method (FEM) in COMSOL, which supports the correctness and accuracy of our theoretical method. These results indicate that the WBCs have potential for valid prediction of the lateral-size dependence of mode couplings in piezoelectric acoustic wave devices.

Iterative model-based algorithms are known to enable more accurate and quantitative optoacoustic (photoacoustic) tomographic reconstructions than standard back-projection methods. However, three-dimensional (3D) model-based inversion is often hampered by high computational complexity and memory overhead. Parallel implementations on a graphics processing unit (GPU) have been shown to efficiently reduce the memory requirements by on-the-fly calculation of the actions of the optoacoustic model matrix, but the high complexity still makes these approaches impractical for large 3D optoacoustic datasets. Herein, we show that the computational complexity of 3D model-based iterative inversion can be significantly reduced by splitting the model matrix into two parts: a maximally sparse matrix containing only one entry per voxel-transducer pair, and a second matrix corresponding to a cyclic convolution. We further suggest reconstructing the images by multiplying the transpose of the model matrix calculated in this manner with the acquired signals, which is equivalent to using a very large regularization parameter in the iterative inversion method. The performance of these two approaches is compared to that of standard back-projection and a recently introduced GPU-based model-based method using datasets from in vivo experiments. The reconstruction time is accelerated by approximately an order of magnitude with the new iterative method, while multiplication with the transpose of the matrix is shown to be as fast as standard back-projection.
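To make the matrix-splitting idea concrete, the toy sketch below applies a forward model of the assumed form A x ≈ S · (k ⊛ x), where S is a maximally sparse matrix and ⊛ is a cyclic convolution evaluated with the FFT, together with the corresponding transpose used for the fast back-projection-like reconstruction. The one-dimensional shapes, the kernel, and the way S is generated here are placeholders, not the actual optoacoustic model.

```python
import numpy as np
from scipy import sparse

def apply_model(x, S, kernel_fft):
    """Forward action A @ x: cyclic convolution (via FFT) followed by a
    maximally sparse matrix with one entry per voxel-transducer pair."""
    conv = np.real(np.fft.ifft(kernel_fft * np.fft.fft(x)))
    return S @ conv

def apply_model_transpose(y, S, kernel_fft):
    """Adjoint action A^T @ y: sparse transpose first, then the adjoint of the
    cyclic convolution (conjugate kernel in the Fourier domain)."""
    back = S.T @ y
    return np.real(np.fft.ifft(np.conj(kernel_fft) * np.fft.fft(back)))

# Illustrative usage with random placeholders:
# n = 4096                                            # grid / signal length
# S = sparse.random(n, n, density=1.0 / n, format="csr")
# kernel_fft = np.fft.fft(np.random.rand(n))
# y = apply_model(np.random.rand(n), S, kernel_fft)   # simulated signals
# x_bp = apply_model_transpose(y, S, kernel_fft)      # transpose "reconstruction"
```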
In this paper, we present a comprehensive review of the imbalance problems in object detection. To analyze the problems in a systematic manner, we introduce a problem-based taxonomy. Following this taxonomy, we discuss each problem in depth and present a unifying yet critical perspective on the solutions in the literature. In addition, we identify major open issues regarding the existing imbalance problems as well as imbalance problems that have not been discussed before. Moreover, in order to keep our review up to date, we provide an accompanying webpage that catalogs papers addressing imbalance problems according to our problem-based taxonomy. Researchers can track newer studies on this webpage, available at https://github.com/kemaloksuz/ObjectDetectionImbalance.

Article authors: Templefeddersen8767 (Andreassen Young)