Frenchmoon3547

From Iurium Wiki

This article is concerned with the problem of recursive state estimation for a class of multirate multisensor systems with distributed time delays under the round-robin (R-R) protocol. The state updating period of the system and the sampling period of the sensors are allowed to be different so as to reflect engineering practice. An iterative method is presented to transform the multirate system into a single-rate one, thereby facilitating the system analysis. The R-R protocol is introduced to determine the transmission sequence of the sensors with the aim of alleviating undesirable data collisions. Under the R-R protocol, only one sensor is granted access to transmit its measurement at each sampling instant. The main purpose of this article is to develop a recursive state estimation scheme such that an upper bound on the estimation error covariance is guaranteed and then locally minimized through an adequate design of the estimator parameter. Finally, simulation examples are provided to show the effectiveness of the proposed estimator design scheme.

In this article, a new outlier-resistant recursive filtering (RF) problem is studied for a class of multisensor multirate networked systems under the weighted try-once-discard (WTOD) protocol. The sensors are sampled with a period that differs from the state updating period of the system. To lighten the communication burden and alleviate network congestion, the WTOD protocol is implemented in the sensor-to-filter channel to schedule the order of the sensors' data transmissions. To cope with measurement outliers, a saturation function is employed in the filter structure to constrain the innovations contaminated by the outliers, thereby maintaining satisfactory filtering performance.
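The outlier-constraining role of a saturation function in the filter structure can be illustrated with a small numerical sketch (the threshold values here are hypothetical; the actual filter structure and saturation levels are specified by the article's derivation):

```python
import numpy as np

def saturate(innovation, bound):
    """Elementwise saturation: clips each innovation component to [-bound, bound],
    limiting the influence of outlier-contaminated measurements on the state update."""
    return np.clip(innovation, -bound, bound)

# Hypothetical example: the third measurement channel is corrupted by an outlier.
innovation = np.array([0.4, -0.2, 25.0])
bound = np.array([1.0, 1.0, 1.0])
constrained = saturate(innovation, bound)
# The outlier channel is limited to the saturation level instead of
# propagating a huge correction into the state estimate.
```

The healthy channels pass through unchanged, while the corrupted innovation is bounded before it enters the state-update step.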
By resorting to the solution of a matrix difference equation, an upper bound on the covariance of the filtering error is first obtained, and the gain matrix of the filter is then characterized to minimize the derived upper bound. Furthermore, the exponential boundedness of the filtering error dynamics is analyzed in the mean-square sense. Finally, the usefulness of the proposed outlier-resistant RF scheme is verified by simulation examples.

This article develops an adaptive neural-network (NN) boundary control scheme for a flexible manipulator subject to input constraints, model uncertainties, and external disturbances. First, a radial basis function NN method is utilized to tackle the unknown input saturations, dead zones, and model uncertainties. Then, based on the backstepping approach, two adaptive NN boundary controllers with update laws are employed to stabilize the position-like loop subsystem and the posture-like loop subsystem, respectively. With the introduced control laws, the uniform ultimate boundedness of the deflection and angle tracking errors of the flexible manipulator is guaranteed. Finally, the control performance of the developed control technique is examined by a numerical example.

In this article, direct adaptive actuator failure compensation control is investigated for a class of noncanonical neural-network nonlinear systems whose relative degrees are implicit and whose parameters are unknown. Both the state tracking and output tracking control problems are considered, and adaptive solutions are developed with specific mechanisms to accommodate both actuator failures and parameter uncertainties, ensuring closed-loop stability and asymptotic state or output tracking.
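A radial basis function NN approximator of the kind invoked above can be sketched as follows. This is a generic Gaussian-RBF sketch, not the article's tuned network; the centers, width, and the offline least-squares fit are illustrative stand-ins for the online adaptation laws:

```python
import numpy as np

def rbf_features(x, centers, width):
    """Gaussian basis functions phi_j(x) = exp(-||x - c_j||^2 / width^2)."""
    diffs = x[None, :] - centers               # (num_centers, dim)
    return np.exp(-np.sum(diffs**2, axis=1) / width**2)

def rbf_approximator(x, centers, width, weights):
    """f_hat(x) = W^T phi(x): a linear combination of basis outputs, used to
    approximate unknown nonlinearities such as dead zones or model
    uncertainties; in adaptive control the weights W are updated online."""
    return weights @ rbf_features(x, centers, width)

# Illustrative setup: 5 centers on a 1-D grid approximating f(x) = sin(x).
centers = np.linspace(-2.0, 2.0, 5)[:, None]
width = 1.0
xs = np.linspace(-2.0, 2.0, 50)[:, None]
Phi = np.stack([rbf_features(x, centers, width) for x in xs])
# Offline least-squares fit, standing in for an online adaptation law.
weights = np.linalg.lstsq(Phi, np.sin(xs[:, 0]), rcond=None)[0]
```

With only a handful of basis functions, the approximator reproduces the smooth nonlinearity closely over the covered interval.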
The adaptive actuator failure compensation control schemes are derived for noncanonical nonlinear systems with neural-network approximation, and are also applicable to general parametrizable noncanonical nonlinear systems with both unknown actuator failures and unknown parameters, solving some key technical issues, in particular, handling the system zero dynamics under uncertain actuator failures. The effectiveness of the developed adaptive control schemes is confirmed by simulation results from an application example of speed control of DC motors.

Most reference vector-based decomposition algorithms for solving multiobjective optimization problems may not be well suited for problems with irregular Pareto fronts (PFs), because the distribution of the predefined reference vectors may not match the distribution of the Pareto-optimal solutions. Adapting the reference vectors is thus an intuitive way for decomposition-based algorithms to deal with irregular PFs. However, most existing methods frequently change the reference vectors based on their activeness within specific generations, slowing down the convergence of the search process. To address this issue, we propose a new method that learns the distribution of the reference vectors using a growing neural gas (GNG) network to achieve automatic yet stable adaptation. To this end, an improved GNG is designed to learn the topology of the PFs, with the solutions generated during a period of the search process serving as training data. We use the individuals in the current population as well as those in previous generations to train the GNG, striking a balance between exploration and exploitation.
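The kind of topology-learning update a growing neural gas network performs can be sketched minimally as follows. Only the basic winner/runner-up adaptation step is shown; the improved GNG described above additionally handles node insertion, edge aging, and removal, and the learning rates here are illustrative:

```python
import numpy as np

def gng_step(nodes, edges, sample, eps_winner=0.2, eps_neighbor=0.01):
    """One GNG adaptation step: move the node closest to the sample (winner)
    toward it, nudge the winner's topological neighbors, and connect the
    winner to the second-closest node (refreshing the learned topology)."""
    dists = np.linalg.norm(nodes - sample, axis=1)
    s1, s2 = np.argsort(dists)[:2]                 # winner and runner-up
    nodes[s1] += eps_winner * (sample - nodes[s1])
    for j in range(len(nodes)):                    # nudge connected neighbors
        if j != s1 and (min(s1, j), max(s1, j)) in edges:
            nodes[j] += eps_neighbor * (sample - nodes[j])
    edges.add((min(s1, s2), max(s1, s2)))
    return nodes, edges

# Illustrative run: nodes gradually learn the distribution of 2-D samples.
rng = np.random.default_rng(0)
nodes = rng.random((4, 2))
edges = set()
for _ in range(200):
    t = rng.random()
    nodes, edges = gng_step(nodes, edges, np.array([t, t]))  # samples on y = x
```

Repeated over the solutions collected during the search, such updates let the node set (and hence the reference vectors) track the shape of the Pareto front.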
Comparative studies conducted on popular benchmark problems and a real-world hybrid vehicle controller design problem with complex and irregular PFs show that the proposed method is very competitive.

The scheduling and control of wireless cloud control systems involving multiple independent control systems and a centralized cloud computing platform are investigated. For such systems, the scheduling of the data transmission and the particular design of the controller can be equally important. From this observation, we propose a dual channel-aware scheduling strategy under the packet-based model predictive control framework, which integrates a decentralized channel-aware access strategy for each sensor, a centralized access strategy for the controllers, and a packet-based predictive controller to stabilize each control system. First, the decentralized scheduling strategy for each sensor is cast in a noncooperative game framework and is then designed to converge asymptotically. Then, the central scheduler for the controllers adopts a prioritized threshold strategy, which outperforms a random strategy that neglects the information of the channel gains. Finally, we prove the stability of each system by constructing a new Lyapunov function, and further reveal the dependence of the control system stability on the prediction horizon and on the successful access probabilities of each sensor and controller. These theoretical results are verified by numerical simulation.

A dynamic multiobjective optimization problem (DMOP) is a multiobjective optimization problem whose objectives may vary over time. Owing to the widespread real-world applications of DMOPs, they have attracted much research attention in the last decade. In this article, we propose to solve DMOPs via an autoencoding evolutionary search.
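The prioritized threshold strategy for the controller channel can be sketched as a single-slot decision rule (a simplified stand-alone version with hypothetical gains and threshold; in the article this scheduler is coupled with the packet-based predictive controllers):

```python
import numpy as np

def prioritized_threshold_schedule(channel_gains, threshold):
    """Grant the slot to the controller with the largest channel gain, but
    only if that gain exceeds the threshold; otherwise stay idle. A random
    scheduler that ignores gains would waste slots on deeply faded channels."""
    best = int(np.argmax(channel_gains))
    return best if channel_gains[best] >= threshold else None

# Hypothetical slot: controller 2 has the best channel and clears the threshold.
gains = np.array([0.3, 0.8, 1.5, 0.1])
assert prioritized_threshold_schedule(gains, threshold=0.5) == 2
# All channels in deep fade: no transmission is scheduled this slot.
assert prioritized_threshold_schedule(np.array([0.2, 0.1]), threshold=0.5) is None
```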
In particular, to track the dynamic changes of a given DMOP, an autoencoder is derived to predict the movement of the Pareto-optimal solutions based on the nondominated solutions obtained before the dynamic change occurs. This autoencoder can easily be integrated into existing multiobjective evolutionary algorithms (EAs), for example, NSGA-II or MOEA/D, for solving DMOPs. In contrast to existing approaches, the proposed prediction method has a closed-form solution and thus adds little computational burden to the iterative evolutionary search. Furthermore, the proposed prediction of dynamic change is learned automatically from the nondominated solutions found along the dynamic optimization process, which can provide a more accurate prediction of the Pareto-optimal solutions. To investigate the performance of the proposed autoencoding evolutionary search for solving DMOPs, comprehensive empirical studies have been conducted against three state-of-the-art prediction-based dynamic multiobjective EAs. The results obtained on commonly used DMOP benchmarks confirm the efficacy of the proposed method.

Stroke is an acute cerebrovascular disease that is likely to cause long-term disability and death. Immediate emergency care with an accurate diagnosis from computed tomographic (CT) images is crucial for dealing with a hemorrhagic stroke. However, due to the high variability of a stroke's location, contrast, and shape, locating lesions is challenging and time-consuming even for experienced radiologists. In this paper, we propose a U-Net-based deep learning framework to automatically detect and segment hemorrhagic strokes in CT brain images. The input of the network is built by concatenating the flipped image with the original CT slice, which introduces symmetry constraints of the brain images into the proposed model and enhances the contrast between hemorrhagic areas and normal brain tissue.
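The symmetry-based input construction described above amounts to stacking each slice with its left-right mirror along a channel axis, for example (a toy array stands in for a CT slice):

```python
import numpy as np

def build_symmetric_input(ct_slice):
    """Concatenate a CT slice with its horizontally flipped copy along a new
    channel axis, so the network can compare each location with its
    contralateral counterpart (exploiting the brain's left-right symmetry)."""
    flipped = ct_slice[:, ::-1]
    return np.stack([ct_slice, flipped], axis=0)  # shape: (2, H, W)

# Toy 2x3 "slice": channel 1 is the mirror of channel 0.
toy = np.array([[1.0, 2.0, 3.0],
                [4.0, 5.0, 6.0]])
inp = build_symmetric_input(toy)
```

An asymmetric bright region then appears in only one of the two channels, which is exactly the contrast cue the network exploits.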
Various deep learning topologies are compared by varying the layers, batch normalization, dilation rates, and pretrained models, which increases the receptive field and preserves more information on lesion characteristics. In addition, adversarial training is adopted in the proposed network to improve the accuracy of the segmentation. The proposed model is trained and evaluated on two different datasets and achieves performance competitive with human experts, with a highest location accuracy of 0.9859 for detection, and a Dice score of 0.8033 and an IoU of 0.6919 for segmentation. The results demonstrate the effectiveness, robustness, and advantages of the proposed deep learning model in automatic hemorrhage lesion diagnosis, making it a candidate clinical decision support tool in stroke diagnosis.

Automatic retinal vessel segmentation is important for the diagnosis and prevention of ophthalmic diseases. Existing deep learning retinal vessel segmentation models typically treat each pixel equally. However, the multi-scale vessel structure is a vital factor affecting the segmentation results, especially for thin vessels. To address this gap, we propose a novel fully attention-based network (FANet) that uses attention mechanisms to adaptively learn rich feature representations and aggregate multi-scale information. Specifically, the framework consists of an image pre-processing procedure and the semantic segmentation network. Green channel extraction (GE) and contrast limited adaptive histogram equalization (CLAHE) are employed as pre-processing to enhance the texture and contrast of the retinal images. The network then combines two types of attention modules with the U-Net. We propose a lightweight dual-direction attention block to model global dependencies and reduce intra-class inconsistencies, in which the weights of the feature maps are updated based on the semantic correlation between pixels.
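The pre-processing stage described above can be sketched in numpy alone. Note this is a simplification: real CLAHE (e.g. OpenCV's `createCLAHE`) works on local tiles and clips the histogram to limit noise amplification; plain global histogram equalization on the green channel stands in for it here:

```python
import numpy as np

def green_channel_equalize(rgb_image):
    """Extract the green channel (which carries the strongest vessel/background
    contrast in retinal images) and apply global histogram equalization.
    CLAHE, used in the actual pipeline, additionally operates on local tiles
    with a clipped histogram; this sketch uses plain equalization instead."""
    green = rgb_image[:, :, 1].astype(np.uint8)
    hist = np.bincount(green.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_masked = np.ma.masked_equal(cdf, 0)        # ignore absent gray levels
    lut = ((cdf_masked - cdf_masked.min()) * 255 /
           (cdf_masked.max() - cdf_masked.min())).filled(0).astype(np.uint8)
    return lut[green]                              # map pixels through the LUT

# Toy low-contrast image: values are stretched over the full 0..255 range.
rng = np.random.default_rng(1)
img = rng.integers(100, 140, size=(32, 32, 3), dtype=np.uint8)
enhanced = green_channel_equalize(img)
```

The narrow intensity band of the input is spread across the full dynamic range, which is what makes thin vessels easier to separate from the background.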
The dual-direction attention block utilizes horizontal and vertical pooling operations to produce the attention map. In this way, the network aggregates global contextual information from semantically closer regions or from a series of pixels belonging to the same object category. Meanwhile, we adopt the selective kernel (SK) unit in place of the standard convolution to obtain multi-scale features over different receptive field sizes, generated by soft attention. Furthermore, we demonstrate that the proposed model can effectively identify irregular, noisy, and multi-scale retinal vessels. Extensive experiments on the DRIVE, STARE, and CHASE_DB1 datasets show that our method achieves state-of-the-art performance.
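The horizontal/vertical pooling step of such a dual-direction attention block can be sketched as follows (a simplified, parameter-free version; the actual block also contains learned weights and operates per feature channel):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def dual_direction_attention(feature_map):
    """Pool the feature map along rows (horizontal) and columns (vertical),
    combine the two 1-D context profiles into a 2-D attention map by
    broadcasting, and reweight the features with it. Pixels lying in rows or
    columns with strong average activation are emphasized, propagating
    context along both spatial axes."""
    h_pool = feature_map.mean(axis=1, keepdims=True)  # (H, 1): per-row context
    v_pool = feature_map.mean(axis=0, keepdims=True)  # (1, W): per-column context
    attention = sigmoid(h_pool + v_pool)              # (H, W) via broadcasting
    return feature_map * attention

feat = np.array([[0.0, 2.0],
                 [4.0, 6.0]])
out = dual_direction_attention(feat)
```

Because the attention map is built from whole-row and whole-column statistics, each output pixel is modulated by context far outside its local receptive field.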

Article authors: Frenchmoon3547 (Larson Caldwell)