Hewittglass2794

From Iurium Wiki

Online reviews of products and services have become a primary source of public opinion. Consequently, manufacturers and sellers are deeply concerned with customer reviews, as these have a direct impact on their businesses. Unfortunately, to gain profit or fame, spam reviews are written to promote or demote targeted products or services, a practice known as review spamming. In recent years the Spam Review Detection (SRD) problem has gained much attention from researchers, but there is still a need to identify review spammers, who often work collaboratively to promote or demote targeted products and can severely harm the review system. This work presents the Spammer Group Detection (SGD) method, which identifies suspicious spammer groups based on the similarity of reviewers' activities, considering their review times and review ratings. After the identified spammer groups and spam reviews are removed, the remaining non-spam reviews are displayed using a diversification technique. For the diversification, this study proposes the Diversified Set of Reviews (DSR) method, which selects a diversified set of top-k reviews containing positive, negative, and neutral feedback and covering all possible product features. Experimental evaluations are conducted on real-world Roman Urdu and English review datasets. The results show that the proposed methods outperform existing approaches in terms of accuracy.

Arabic is a challenging language for automatic processing, owing to several intrinsic characteristics such as its many dialects, ambiguous syntax, syntactic flexibility, and diacritics. Machine learning and deep learning frameworks require large datasets for training to ensure accurate predictions. This poses a further challenge for researchers working with Arabic text, as high-quality Arabic textual datasets are still scarce.
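The core of the SGD idea described above, grouping reviewers whose review times and ratings on co-reviewed products are suspiciously similar, can be sketched as follows. This is a simplified illustration, not the authors' implementation; the data layout, day-based timestamps, and thresholds are all assumptions.

```python
from collections import defaultdict

def reviewer_similarity(a, b, max_days=3, max_rating_gap=1):
    """Fraction of co-reviewed products on which two reviewers posted
    within max_days of each other with near-identical ratings."""
    common = set(a) & set(b)
    if not common:
        return 0.0
    close = sum(1 for p in common
                if abs(a[p][0] - b[p][0]) <= max_days
                and abs(a[p][1] - b[p][1]) <= max_rating_gap)
    return close / len(common)

def spammer_groups(reviews, threshold=0.8):
    """reviews: {reviewer: {product: (day, rating)}} -> suspicious groups.
    Reviewers are linked when similar enough; groups are the connected
    components of the resulting similarity graph."""
    names = list(reviews)
    adj = defaultdict(set)
    for i, u in enumerate(names):
        for v in names[i + 1:]:
            if reviewer_similarity(reviews[u], reviews[v]) >= threshold:
                adj[u].add(v)
                adj[v].add(u)
    seen, groups = set(), []
    for u in names:
        if u in seen or not adj[u]:
            continue
        stack, comp = [u], set()
        while stack:
            x = stack.pop()
            if x in comp:
                continue
            comp.add(x)
            stack.extend(adj[x] - comp)
        seen |= comp
        groups.append(sorted(comp))
    return groups
```

Reviewers with no suspicious links are left out entirely, so the output contains only groups of two or more coordinated accounts.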
In this paper, an intelligent framework for expanding or augmenting Arabic sentences is presented. The sentences were initially labelled by human annotators for sentiment analysis. The novel approach presented in this work relies on the rich morphology of Arabic, synonymy lists, syntactical or grammatical rules, and negation rules to generate new sentences from the seed sentences together with their proper labels. Most augmentation techniques target image or video data; this study is the first work to target text augmentation for the Arabic language. Using this framework, we were able to increase the size of the initial seed datasets tenfold. Experiments assessing the impact of this augmentation on sentiment analysis showed a 42% average increase in accuracy, owing to the reliability and high quality of the rules used to build the framework.
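The rule-based generation idea, deriving new labelled sentences from a labelled seed via synonym substitution and negation, can be sketched in miniature as below. The toy English word lists and the two rules are illustrative assumptions; the paper's actual resources are Arabic morphology, synonymy lists, and grammar rules.

```python
# Toy synonym list standing in for the paper's Arabic synonymy resources.
SYNONYMS = {"good": ["great", "fine"], "bad": ["poor", "awful"]}

def augment(sentence, label):
    """Yield (new_sentence, new_label) pairs from one labelled seed.

    Rule 1: synonym substitution preserves the sentiment label.
    Rule 2: negation insertion flips the sentiment label.
    """
    words = sentence.split()
    out = []
    for i, w in enumerate(words):
        for syn in SYNONYMS.get(w, []):
            out.append((" ".join(words[:i] + [syn] + words[i + 1:]), label))
    negated = " ".join("not " + w if w in SYNONYMS else w for w in words)
    if negated != sentence:
        flipped = {"pos": "neg", "neg": "pos"}.get(label, label)
        out.append((negated, flipped))
    return out
```

Each seed sentence thus yields several correctly labelled variants, which is how a small annotated corpus can be expanded many times over without further manual labelling.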

The endeavour to offer complex functions beyond those of individual systems gave rise to what is known as a System of Systems (SoS). An SoS co-integrates systems while allowing more systems to be absorbed in the future. As an integrated system, an SoS simplifies operations, reduces costs, and ensures efficiency. However, conflicts may arise while co-integrating systems, undermining the main benefits of the SoS. This paper is concerned with reducing the time required to detect and resolve such conflicts.

We adopted the k-means clustering technique to enhance the detection and resolution of conflicts arising while co-integrating new systems into an existing SoS. Instead of dealing with the SoS as a single entity, we partition it into clusters, each containing nearby systems according to pre-specified criteria; each cluster can be considered a Sub-SoS (S-SoS). By doing so, conflicts that may arise while co-integrating new systems can be detected and resolved in a shorter time. We propose the Smart Semantic Belief Framework for Clustering SoS (SSBFCSoS), which also showed the ability to accommodate more systems, thereby achieving the objectives of SoS. To test the applicability of the SSBFCSoS and compare its performance with other approaches, two datasets were employed (Glest and StarCraft: Brood War), with 15 test cases examined per dataset. We achieved, on average, 89% in solving conflicts, compared to 77% for other approaches. Moreover, the framework solved conflicts about 16% faster than previous approaches, and it reduced the recurrence of the same conflicts by approximately 23% relative to the other methods, not only within the same cluster but also when combining different clusters.

The evolution of electronic media is a mixed blessing. Owing to easy access, low cost, and the fast reach of information, people seek out and devour news from online social networks. In contrast, the increasing acceptance of social media reporting leads to the spread of fake news. This is a menacing problem that causes disputes and endangers societal stability and harmony. The spread of fake news has gained attention from researchers due to its vicious nature. The proliferation of misinformation in all media, from the internet to cable news, paid advertising, and local news outlets, has made it essential for people to identify misinformation and sort through the facts.
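The clustering step behind the S-SoS partitioning above, grouping nearby systems so that conflicts can be checked locally rather than across the whole SoS, can be sketched with a plain k-means pass. This is a minimal illustration; the 2-D "nearness" coordinates, the deterministic initialization, and the choice of k are assumptions, not the SSBFCSoS implementation.

```python
def kmeans(points, k, iters=20):
    """Plain k-means: returns a cluster index per point.
    Each Sub-SoS is then the set of systems sharing a cluster index."""
    centers = points[:k]  # deterministic init, purely for illustration
    assign = [0] * len(points)
    for _ in range(iters):
        # Assignment step: nearest center by squared Euclidean distance.
        for i, p in enumerate(points):
            assign[i] = min(
                range(k),
                key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])),
            )
        # Update step: recompute each center as the mean of its members.
        for c in range(k):
            members = [p for i, p in enumerate(points) if assign[i] == c]
            if members:
                centers[c] = [sum(d) / len(members) for d in zip(*members)]
    return assign
```

Conflict checks then run per cluster, which is where the reported speed-up comes from: a newly integrated system is compared against its own Sub-SoS first instead of against every system in the SoS.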
Researchers are trying to analyze the credibility of information and curtail false information on such platforms. Credibility is the believability of the piece of information at hand. Analyzing the credibility of fake news is challenging due to the intent of its creation and the polychromatic nature of the news. In this work, we propose a model for detecting fake news. Our method investigates the content of the news at an early stage, i.e., when the news is published but has yet to be disseminated through social media. Our work interprets the content with automatic feature extraction and the relevance of the text pieces. In summary, we introduce stance as a feature alongside the content of the article and employ the pre-trained contextualized word embeddings of BERT to obtain state-of-the-art results for fake news detection. Experiments conducted on a real-world dataset indicate that our model outperforms previous work and enables fake news detection with an accuracy of 95.32%.

Using prototype methods to reduce the size of training datasets can drastically reduce the computational cost of classification with instance-based learning algorithms such as the k-Nearest Neighbour classifier. The number and distribution of prototypes required for the classifier to match its original performance are intimately related to the geometry of the training data. As a result, it is often difficult to find the optimal prototypes for a given dataset, and heuristic algorithms are used instead. However, we consider a particularly challenging setting in which commonly used heuristic algorithms fail to find suitable prototypes, and we show that the optimal number of prototypes can instead be found analytically. We also propose an algorithm for finding nearly optimal prototypes in this setting and use it to empirically validate the theoretical results.
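The general prototype idea discussed above, replacing the full training set with a few representatives so that nearest-neighbour classification stays cheap, can be illustrated with the simplest prototype-generation baseline: one class-mean prototype per class. This is a generic sketch of the technique, not the paper's analytical construction.

```python
def class_centroids(X, y):
    """One prototype per class: the class mean (a common baseline
    prototype-generation method, not the paper's analytical optimum)."""
    protos = {}
    for xi, yi in zip(X, y):
        s, n = protos.get(yi, ([0.0] * len(xi), 0))
        protos[yi] = ([a + b for a, b in zip(s, xi)], n + 1)
    return {c: [v / n for v in s] for c, (s, n) in protos.items()}

def nearest_prototype(protos, x):
    """Classify by 1-NN over the prototypes instead of the full
    training set, cutting the per-query cost from O(|X|) to O(#classes)."""
    return min(protos,
               key=lambda c: sum((a - b) ** 2 for a, b in zip(protos[c], x)))
```

The paper's point is precisely that in pathological geometries such simple heuristics fail, and the right number and placement of prototypes must be derived analytically; the sketch only shows what "classifying against prototypes" means.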
Finally, we show that a parametric prototype generation method that normally cannot solve this pathological setting can in fact find optimal prototypes when combined with the results of our theoretical analysis.

The data acquisition problem in large-scale distributed Wireless Sensor Networks (WSNs) is one of the main issues hindering the evolution of Internet of Things (IoT) technology. Recently, the combination of Compressive Sensing (CS) and routing protocols has attracted much attention. An open question in this approach is how to integrate these techniques effectively for specific tasks. In this paper, we introduce an effective deterministic clustering-based CS scheme (DCCS) for fog-supported heterogeneous WSNs to handle the data acquisition problem. DCCS employs the concept of fog computing, reduces the total overhead and computational cost needed for the sensor network to self-organize by using a simple approach, and then uses CS at each sensor node to minimize the overall energy expenditure and prolong the IoT network lifetime. Additionally, the proposed scheme includes an effective CS reconstruction algorithm called Random Selection Matching Pursuit (RSMP) to enhance the recovery process at the base station (BS) in a complete CS scenario. RSMP adds a random selection process during the forward step to give more columns the opportunity to be selected as part of the estimated solution in each iteration. Simulation results show that the proposed technique succeeds in minimizing the overall network power expenditure, prolonging the network lifetime, and providing better performance in CS data reconstruction.

This paper addresses the resource allocation problem in the multi-sharing uplink for device-to-device (D2D) communication, one aspect of 5G communication networks.
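The RSMP recovery idea described above, a matching-pursuit forward step in which randomly selected columns compete with the best-correlated one, can be sketched as follows. This loosely mirrors the description in the abstract only; the candidate-scoring rule, the number of random columns, and all parameter names are assumptions, not the authors' algorithm.

```python
import numpy as np

def rsmp_sketch(Phi, y, sparsity, extra=2, seed=0):
    """Matching-pursuit sketch with a randomized forward step: besides
    the best-correlated column, a few random columns are also admitted
    as candidates, and the one that most reduces the residual wins."""
    rng = np.random.default_rng(seed)
    n = Phi.shape[1]
    support, r = [], y.astype(float)
    while len(support) < sparsity:
        corr = np.abs(Phi.T @ r)
        best = int(np.argmax(corr))
        cands = ({best} | set(rng.integers(0, n, size=extra).tolist())) - set(support)
        if not cands:  # all candidates already chosen: fall back to the rest
            cands = set(range(n)) - set(support)
        pick, pick_res = None, None
        for j in cands:
            cols = support + [j]
            coef, *_ = np.linalg.lstsq(Phi[:, cols], y, rcond=None)
            res = y - Phi[:, cols] @ coef
            if pick is None or np.linalg.norm(res) < np.linalg.norm(pick_res):
                pick, pick_res = j, res
        support.append(pick)
        r = pick_res
    coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
    x = np.zeros(n)
    x[support] = coef
    return x
```

With a well-conditioned sensing matrix the randomized candidates rarely beat the greedy choice, but they give otherwise-ignored columns a chance to enter the support, which is the behaviour the abstract attributes to RSMP.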
The main advantage of, and motivation for, D2D communication is the significant improvement in the spectral efficiency of the system obtained by exploiting the proximity of communication pairs and reusing idle network resources, mainly in the uplink mode, where more idle resources are available. An approach is proposed for allocating resources to D2D and cellular user equipment (CUE) users in the uplink of a 5G-based network that considers the estimation of a delay bound value. The proposed algorithm minimizes the total delay of the uplink users and solves the problem by forming a conflict graph and finding the maximal-weight independent set. For user delay estimation, an approach is proposed that considers the multifractal traffic envelope process and the service curve for the uplink. The performance of the algorithm is evaluated through computer simulations, in comparison with other algorithms from the literature, in terms of throughput, delay, fairness, and computational complexity, in a scenario with channel modeling that describes the propagation of millimeter waves at frequencies above 6 GHz. Simulation results show that the proposed allocation algorithm outperforms the other algorithms and is highly efficient for 5G systems.

The design of an observer-based robust tracking controller is investigated and successfully applied to the control of an Activated Sludge Process (ASP) in this study. To this end, Takagi-Sugeno (TS) fuzzy modeling is used to describe the dynamics of a nonlinear system with disturbance. Since the states of the system are not fully available, a fuzzy observer is designed. Based on the observed states and a reference state model, a reduced fuzzy controller for trajectory tracking is then proposed. The controller and observer are designed to achieve convergence with a guaranteed H∞ performance.
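The conflict-graph step in the D2D allocation abstract above reduces to a classic combinatorial routine: build a graph whose vertices are candidate allocations, connect those that interfere, and pick a maximal-weight independent set. A common greedy heuristic for that routine is sketched below; the paper's own formulation of weights and conflicts is not specified here, so this is only the generic building block.

```python
def max_weight_independent_set(weights, edges):
    """Greedy maximal-weight independent set on a conflict graph:
    repeatedly take the heaviest remaining vertex and ban its
    conflicting neighbours (a standard heuristic, not an exact solver)."""
    adj = {v: set() for v in weights}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    chosen, banned = [], set()
    for v in sorted(weights, key=weights.get, reverse=True):
        if v not in banned:
            chosen.append(v)
            banned |= adj[v] | {v}
    return chosen
```

In the allocation setting, each vertex would be a (user, resource) pairing weighted by its delay benefit, and the independent set gives a mutually conflict-free allocation.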
By using Lyapunov and H∞ theories, sufficient conditions for the synthesis of a fuzzy observer and a fuzzy controller for TS fuzzy systems are derived. Using some special manipulations, these conditions are reformulated as a linear matrix inequality (LMI) problem. Finally, the robust and effective tracking performance of the proposed controller is tested through simulations controlling the dissolved oxygen and substrate concentrations in an activated sludge process.

Access control is a critical aspect of improving the privacy and security of IoT systems. A consortium is a public or private association, or a group of two or more institutes, businesses, or companies, that collaborates to achieve common goals or to form a resource pool enabling the sharing economy. However, most access control methods are based on centralized solutions, which may lead to problems such as data leakage and single points of failure. Blockchain technology is intrinsically distributed, which can be used to tackle the centralization of traditional access control schemes. Nevertheless, blockchain itself comes with certain limitations, such as a lack of scalability and poor performance. To bridge these gaps, we present a decentralized capability-based access control architecture designed for IoT consortium networks, named IoT-CCAC. A blockchain-based database is utilized in our solution for better performance, since it exhibits the favorable features of both blockchain and conventional databases.
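The capability-based model underlying the architecture above can be reduced to a simple check: a subject holds capability tokens granting (resource, action) rights, and a request is allowed only if a matching, unexpired token exists. The sketch below illustrates that check only; the field names and token shape are assumptions for illustration, not IoT-CCAC's actual token format or API.

```python
# Minimal capability-check sketch. In a deployment the token store would
# live in the blockchain-based database; here it is a plain list.
def is_allowed(tokens, subject, resource, action, now):
    """Grant access iff some token matches subject, resource, and action,
    and has not yet expired."""
    return any(
        t["subject"] == subject
        and t["resource"] == resource
        and action in t["actions"]
        and t["expires"] > now
        for t in tokens
    )
```

The appeal of the capability model for consortium networks is that the decision needs only the token itself, so verification can be replicated across members without consulting a central policy server.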

Article authors: Hewittglass2794 (Fox Frederick)