Finchehlers5395

From Iurium Wiki

Extensive experimental results on two large-scale public datasets show that the performance of our HSIDHN is competitive with state-of-the-art deep cross-modal hashing methods.

The goal of this research is to develop and implement a highly effective deep learning model for detecting COVID-19. To achieve this goal, we propose an ensemble of Convolutional Neural Networks (CNNs) based on EfficientNet, named ECOVNet, to detect COVID-19 from chest X-rays. To make the proposed model more robust, we use one of the largest open-access chest X-ray datasets, named COVIDx, containing three classes: COVID-19, normal, and pneumonia. For feature extraction, we apply an effective CNN structure, EfficientNet, with ImageNet pre-trained weights. The generated features are passed into custom fine-tuned top layers, followed by a set of model snapshots. The predictions of the model snapshots (created during a single training run) are consolidated through two ensemble strategies, i.e., hard ensemble and soft ensemble, to enhance classification performance. In addition, a visualization technique is incorporated to highlight the areas that distinguish the classes, thereby improving the understanding of the principal components related to COVID-19. The results of our empirical evaluations show that the proposed ECOVNet model outperforms state-of-the-art approaches and significantly improves detection performance, with 100% recall for COVID-19 and an overall accuracy of 96.07%. We believe that ECOVNet can enhance the detection of COVID-19 and thus underpin a fully automated and efficacious COVID-19 detection system.

Performance problems in applications should ideally be detected as soon as they occur, i.e., directly when the causing code modification is added to the code repository.
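The two snapshot-combination strategies described for ECOVNet, hard ensemble (majority vote over per-snapshot predicted labels) and soft ensemble (averaging predicted class probabilities), can be sketched as follows. This is a minimal illustration under assumed data layouts, not the authors' implementation; the function names are hypothetical.

```python
from collections import Counter

def hard_ensemble(snapshot_probs):
    """Majority vote over the argmax prediction of each model snapshot.

    snapshot_probs: one entry per snapshot; each entry is a list of
    per-sample class-probability lists, i.e. snapshot_probs[s][i][c].
    """
    n_samples = len(snapshot_probs[0])
    labels = []
    for i in range(n_samples):
        # each snapshot casts one vote: its most probable class for sample i
        votes = Counter(max(range(len(p[i])), key=p[i].__getitem__)
                        for p in snapshot_probs)
        labels.append(votes.most_common(1)[0][0])
    return labels

def soft_ensemble(snapshot_probs):
    """Sum (equivalently, average) class probabilities across snapshots,
    then take the argmax per sample."""
    n_samples = len(snapshot_probs[0])
    n_classes = len(snapshot_probs[0][0])
    labels = []
    for i in range(n_samples):
        avg = [sum(p[i][c] for p in snapshot_probs) for c in range(n_classes)]
        labels.append(max(range(n_classes), key=avg.__getitem__))
    return labels
```

Soft ensembling keeps the confidence information that hard voting discards, which is why the two strategies can disagree on borderline samples.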
To this end, complex and cost-intensive application benchmarks or lightweight but less relevant microbenchmarks can be added to existing build pipelines to ensure performance goals. In this paper, we show how the practical relevance of microbenchmark suites can be improved and verified based on the application flow during an application benchmark run. We propose an approach to determine the overlap of common function calls between application benchmarks and microbenchmarks, describe a method that identifies redundant microbenchmarks, and present a recommendation algorithm that reveals relevant functions not yet covered by microbenchmarks. A microbenchmark suite optimized in this way can easily test all functions determined to be relevant by application benchmarks after every code change, thus significantly reducing the risk of undetected performance problems and complementing performance assurance with performance tests of multiple granularities.

Virtual reality (VR) technology is an emerging tool that supports the connection between conservation research and public engagement with environmental issues. One use of VR in ecology is interviewing diverse groups of people while they are immersed in a virtual ecosystem, which can produce better information than more traditional surveys. However, at present, the relatively high level of expertise in specific programming languages and the disjoint pathways required to run VR experiments hinder their wider application in ecology and other sciences. We present R2VR, a package for implementing and performing VR experiments in R, with the aim of easing the learning curve for applied scientists, including ecologists. The package provides functions for rendering VR scenes in web browsers with A-Frame that can be viewed by multiple users on smartphones, laptops, and VR headsets. It also provides instructions on how to retrieve answers from an online database in R.
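The overlap, redundancy, and recommendation steps described for microbenchmark suites could be sketched along these lines. A hedged illustration only: `coverage_report` and its inputs (sets of profiled function names) are assumptions, not the paper's actual algorithm.

```python
def coverage_report(app_calls, micro_calls):
    """Compare the functions reached by an application benchmark with the
    functions each microbenchmark exercises.

    app_calls:   set of function names hit during the application benchmark
    micro_calls: dict mapping microbenchmark name -> set of function names
    Returns (covered, uncovered, redundant):
      covered    -- relevant functions already exercised by some microbenchmark
      uncovered  -- relevant functions no microbenchmark touches (candidates
                    for the recommendation step)
      redundant  -- microbenchmarks whose relevant functions are all covered
                    by the remaining suite
    """
    covered = set().union(*micro_calls.values()) & app_calls
    uncovered = app_calls - covered
    redundant = []
    for name, funcs in micro_calls.items():
        others = set().union(*(f for n, f in micro_calls.items() if n != name))
        if funcs & app_calls <= others:
            redundant.append(name)
    return covered, uncovered, sorted(redundant)
```

In practice the call sets would come from profiling traces, and relevance would likely be weighted by call frequency rather than treated as a plain set.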
Three published ecological case studies are used to illustrate the R2VR workflow and to show how to run VR experiments and collect the resulting datasets. By tapping into the popularity of R among ecologists, the R2VR package creates new opportunities to address the complex challenges associated with conservation, improve scientific knowledge, and promote new ways to share a better understanding of environmental issues. The package could also be used in fields outside of ecology.

Considering the impact of the Internet of Things (IoT) in today's world, uninterrupted service is essential, and recovery has received more attention than ever before. Fault tolerance (FT) is an essential aspect of network resilience, and fault-tolerance mechanisms are required to ensure high availability and high reliability in systems. The advent of software-defined networking (SDN) in the IoT plays a significant role in providing a reliable communication platform. This paper proposes a data-plane fault-tolerant architecture using the concepts of software-defined networks for IoT environments. In this work, a mathematical model called Shared Risk Link Group (SRLG) calculates redundant paths as primary and backup non-overlapping paths between network equipment. In addition to fault tolerance, service quality is considered in the proposed schemes. Incorporating the percentage of link bandwidth usage and the link delay into the link costs makes it possible to calculate two completely non-overlapping paths under the best conditions. We compare our two proposed dynamic schemes with the hybrid disjoint paths (Hybrid_DP) method and our previous work. With IoT developments, wireless and wired equipment are now used in many industrial and commercial applications, so the proposed hybrid dynamic method supports both wired and wireless devices; furthermore, multiple link failures are supported in the two proposed dynamic schemes.
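The primary/backup non-overlapping path computation described in the SDN abstract can be approximated with a naive two-pass shortest-path search: find a cheapest path, remove its links, and search again. This is only a sketch (the paper's SRLG model and exact cost formula are not reproduced here); the weighting of bandwidth utilization and delay in `link_cost` is an assumption.

```python
import heapq

def dijkstra(adj, src, dst):
    """Cheapest path by link cost; adj: {node: {neighbor: cost}}."""
    dist, prev = {src: 0.0}, {}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in adj.get(u, {}).items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    if dst not in dist:
        return None
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return path[::-1]

def link_cost(utilization, delay_ms, alpha=0.5):
    # assumed blend of bandwidth usage and delay; the paper's formula may differ
    return alpha * utilization + (1 - alpha) * delay_ms

def two_disjoint_paths(adj, src, dst):
    """Primary path, then a backup sharing no links with it (link-disjoint)."""
    primary = dijkstra(adj, src, dst)
    if primary is None:
        return None, None
    pruned = {u: dict(nbrs) for u, nbrs in adj.items()}
    for u, v in zip(primary, primary[1:]):  # drop primary links, both directions
        pruned[u].pop(v, None)
        pruned.get(v, {}).pop(u, None)
    return primary, dijkstra(pruned, src, dst)
```

Note that this greedy removal is not optimal in general (Suurballe's algorithm handles cases where the cheapest first path blocks all second paths), and it guarantees only link-disjointness, not node-disjointness.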
Simulation results indicate that, while reducing the error recovery time, the two proposed dynamic designs improve service quality parameters such as packet loss and delay compared to the Hybrid_DP method. The results show that, in case of a link failure in the network, the proposed hybrid dynamic scheme's recovery time is approximately 12 ms. Furthermore, in the proposed hybrid dynamic scheme, the recovery time, packet loss, and delay improved on average by 22.39%, 8.2%, and 5.66%, respectively, compared to the Hybrid_DP method.

Earthquakes are a natural phenomenon that may cause significant loss of life and infrastructure. Researchers have applied multiple artificial-intelligence-based techniques to predict earthquakes, but high accuracies could not be achieved due to the huge size of multidimensional data, communication delays, transmission latency, limited processing capacity, and data privacy issues. Federated learning (FL) is a machine learning (ML) technique that provides an opportunity to collect and process data on site without compromising data privacy, preventing data transmission to the central server. The federated concept of obtaining a global data model by aggregating local data models inherently ensures data security, data privacy, and data heterogeneity. In this article, a novel earthquake prediction framework using FL is proposed. The proposed FL framework outperforms existing ML-based earthquake prediction models in terms of efficiency, reliability, and precision. We analyzed three different local datasets to generate multiple ML-based local data models. These local data models are aggregated into a global data model on the central FL server using the FedQuake algorithm. A meta-classifier is then trained at the FL server on the global data model to generate more accurate earthquake predictions.
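The aggregation of local data models into a global model on the FL server can be illustrated with a FedAvg-style weighted average of model parameters; the actual FedQuake algorithm may differ in detail, and `fed_average` is a hypothetical name.

```python
def fed_average(local_weights, sample_counts):
    """Aggregate local model weights into a global model, weighting each
    client by its number of training samples (FedAvg-style sketch).

    local_weights: list of flattened parameter vectors (lists of floats),
                   one per client
    sample_counts: list of ints, one per client
    """
    total = sum(sample_counts)
    n_params = len(local_weights[0])
    return [
        sum(w[i] * n for w, n in zip(local_weights, sample_counts)) / total
        for i in range(n_params)
    ]
```

Because only parameter vectors (never raw seismic records) leave the clients, the privacy property described in the abstract follows directly from this communication pattern.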
We tested the proposed framework by analyzing multidimensional seismic data within a 100 km radius of 34.708° N, 72.5478° E in the Western Himalayas. The results of the proposed framework were validated against instrumentally recorded regional seismic data from the last thirty-five years, and a prediction accuracy of 88.87% was recorded. These results can serve as a useful component in the development of earthquake early warning systems.

Crop classification in early phenological stages has been a difficult task due to the spectral similarity of different crops. For this purpose, low-altitude platforms such as drones have great potential to provide high-resolution optical imagery to which Machine Learning (ML) can be applied to classify different types of crops. In this research work, crop classification is performed at different phenological stages using optical images obtained from a drone. Gray level co-occurrence matrix (GLCM) based features are extracted from the underlying grayscale images collected by the drone. To classify the different types of crops, several ML algorithms, including Random Forest (RF), Naive Bayes (NB), Neural Network (NN), and Support Vector Machine (SVM), are applied. The results showed that the ML algorithms performed much better on GLCM features than on raw grayscale images, with a margin of 13.65% in overall accuracy.

In an interactive online learning system (OLS), it is crucial for learners to form their questions correctly in order to be provided or recommended appropriate learning materials. An incorrectly formed question may confuse the OLS, resulting in inappropriate study materials being provided or recommended, which in turn affects learning quality, learning experience, and learner satisfaction. In this paper, we propose a novel method to assess the correctness of a learner's question in terms of syntax and semantics.
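The GLCM-based features used in the crop-classification study can be illustrated with a minimal pure-Python sketch for a single horizontal offset; real pipelines typically aggregate several offsets and more Haralick features, and these function names are assumptions.

```python
def glcm(img, levels):
    """Gray level co-occurrence matrix for the horizontal (0, 1) offset.

    img: 2D list of integer gray levels in [0, levels).
    Counts how often gray level a appears immediately left of gray level b.
    """
    m = [[0] * levels for _ in range(levels)]
    for row in img:
        for a, b in zip(row, row[1:]):
            m[a][b] += 1
    return m

def glcm_features(m):
    """Two classic Haralick features from a co-occurrence matrix:
    contrast (local intensity variation) and homogeneity (closeness to
    the diagonal)."""
    n = len(m)
    total = sum(sum(r) for r in m) or 1
    contrast = sum(m[i][j] * (i - j) ** 2
                   for i in range(n) for j in range(n)) / total
    homogeneity = sum(m[i][j] / (1 + abs(i - j))
                      for i in range(n) for j in range(n)) / total
    return contrast, homogeneity
```

These scalar texture features (rather than raw pixel intensities) are what the classifiers in the study would consume, which is consistent with the reported accuracy gain over grayscale inputs.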
Assessing the learner's query precisely improves the performance of the recommendation. A tri-gram language model is built, trained, and tested on corpora of 2,533 and 634 questions on Java, respectively, collected from books, blogs, websites, and university exam papers. The proposed method exhibits 92% accuracy in identifying a question as correct or incorrect. Furthermore, if the learner's input question is incorrect, we propose an additional framework to guide the learner toward a correct question that closely matches her intended question. For recommending correct questions, soft-cosine-based similarity is used. The proposed framework was tested on a group of learners' real-time questions and observed to achieve 85% accuracy.
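A tri-gram language model of the kind described can be sketched as follows; add-one smoothing is an assumed detail (the paper's exact smoothing and correct/incorrect decision threshold are not specified here).

```python
import math
from collections import Counter

START, END = "<s>", "</s>"

def train_trigrams(questions):
    """Count trigrams and their bigram contexts over tokenized questions."""
    tri, bi = Counter(), Counter()
    for q in questions:
        toks = [START, START] + q.lower().split() + [END]
        for a, b, c in zip(toks, toks[1:], toks[2:]):
            tri[(a, b, c)] += 1
            bi[(a, b)] += 1
    return tri, bi

def log_prob(question, tri, bi, vocab_size):
    """Add-one smoothed trigram log-probability; higher scores indicate
    a question that looks more like the well-formed training questions."""
    toks = [START, START] + question.lower().split() + [END]
    return sum(
        math.log((tri[(a, b, c)] + 1) / (bi[(a, b)] + vocab_size))
        for a, b, c in zip(toks, toks[1:], toks[2:]))
```

In a full system the score would be thresholded (tuned on held-out correct and incorrect questions) to produce the binary correct/incorrect decision.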

Enriched electronic health records (EHRs) contain crucial information related to disease progression, and this information can help with decision-making in the health care field. Data analytics in health care is deemed one of the essential processes that help accelerate the progress of clinical research. However, processing and analyzing EHR data are common bottlenecks in health care data analytics.

The

R package provides mechanisms for integration, wrangling, and visualization of clinical data, including diagnosis and procedure records. First, the

package helps users transform International Classification of Diseases (ICD) codes to a uniform format. After code format transformation, the

package supports four strategies for grouping clinical diagnostic data. For clinical procedure data, two grouping methods can be chosen. After EHRs are integrated, users can employ a set of flexible built-in querying functions for dividing data into case and control groups using specified criteria, and for splitting the data into periods before and after an event based on the record date.
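The querying operations described, dividing records into case and control groups and splitting them before/after an event date, can be sketched in Python as follows (the package itself is an R package whose name is omitted above; these function names and the record layout are assumptions):

```python
from datetime import date

def split_case_control(records, is_case):
    """Divide patients into case and control groups.

    records: list of dicts with 'patient', 'code', and 'date' keys
    is_case: predicate over one patient's full list of records
             (e.g. "has any record with a given ICD code")
    """
    by_patient = {}
    for r in records:
        by_patient.setdefault(r["patient"], []).append(r)
    cases = {p for p, rs in by_patient.items() if is_case(rs)}
    return cases, set(by_patient) - cases

def split_before_after(records, index_dates):
    """Split records into those dated before vs. on/after each patient's
    index event date (index_dates: patient -> date)."""
    before, after = [], []
    for r in records:
        (before if r["date"] < index_dates[r["patient"]] else after).append(r)
    return before, after
```

The case/control predicate is where the grouping strategies for ICD diagnosis or procedure codes mentioned above would plug in.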

Article authors: Finchehlers5395 (Lester Serup)