We present an end-to-end smart harvesting solution for precision agriculture. Our proposed pipeline begins with yield estimation, performed by using object detection and tracking to count fruit in a video. We train a You Only Look Once (YOLO) model on video clips of apples, oranges and pumpkins. The bounding boxes obtained through object detection are used as input to our selected tracking model, DeepSORT. The original version of DeepSORT is unusable with fruit data, as its appearance feature extractor works only with people. We replace it with a ResNet-based feature extractor that is lightweight, accurate and generalizes across different fruits. Our yield estimation module achieves 91-95% accuracy on real footage of apple trees. The modified tracker also counts oranges and pumpkins, with accuracies of 79% and 93.9% respectively, without any retraining. Our framework additionally visualizes the yield by incorporating geospatial data: we propose a mechanism that annotates each set of frames with a GPS coordinate, records the count within that set alongside the matching coordinate during counting, and visualizes the result on a map. We leverage this information to propose an optimal container placement scheme that minimizes the number of containers placed across the field before harvest, subject to a set of constraints. This acts as a decision support system that helps the farmer plan pre-harvest logistics such as labor, equipment and gathering paths. Our work serves as a blueprint for future agricultural decision support systems that can aid in many other aspects of farming.

Lung cancer is the leading cause of cancer death and morbidity worldwide. Many studies have shown machine learning models to be effective in detecting lung nodules from chest X-ray images. However, these techniques have yet to be embraced by the medical community due to several practical, ethical and regulatory constraints stemming from the "black-box" nature of deep learning models. Additionally, most lung nodules visible on chest X-rays are benign; the narrow task of computer-vision-based lung nodule detection therefore cannot be equated to automated lung cancer detection. Addressing both concerns, this study introduces a novel hybrid deep learning and decision-tree-based computer vision model, which presents lung cancer malignancy predictions as interpretable decision trees. The deep learning component is trained on a large publicly available dataset of pathological biomarkers associated with lung cancer. These models are then used to infer biomarker scores for chest X-ray images from two independent datasets for which malignancy metadata is available. Next, multivariate predictive models are mined by fitting shallow decision trees to the malignancy-stratified datasets and interrogating a range of metrics to determine the best model. The best decision tree achieved a sensitivity of 86.7% and a specificity of 80.0%, with a positive predictive value of 92.9%. Decision trees mined with this method may serve as a starting point for refinement into clinically useful multivariate lung cancer malignancy models, implemented as a workflow augmentation tool to improve the efficiency of human radiologists.
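
The decision-tree mining step described in the second abstract can be approximated in a few lines of scikit-learn. The sketch below is illustrative only: the biomarker columns, the synthetic labels and the depth limit are assumptions standing in for the study's inferred biomarker scores and its model-selection procedure.

```python
# Minimal sketch: fit a shallow, interpretable decision tree on biomarker scores.
# The biomarker columns and data are synthetic placeholders, not the study's dataset.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.metrics import recall_score, precision_score

rng = np.random.default_rng(0)
n = 500
# Hypothetical per-image biomarker scores inferred by an upstream deep model.
X = rng.random((n, 3))                       # columns: nodule, opacity, effusion scores
y = (0.6 * X[:, 0] + 0.3 * X[:, 1] + 0.1 * rng.random(n) > 0.5).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# A shallow depth keeps the tree readable for clinical review.
tree = DecisionTreeClassifier(max_depth=3, class_weight="balanced", random_state=0)
tree.fit(X_tr, y_tr)

pred = tree.predict(X_te)
print("sensitivity:", recall_score(y_te, pred))    # true positive rate
print("PPV:", precision_score(y_te, pred))         # positive predictive value
print(export_text(tree, feature_names=["nodule", "opacity", "effusion"]))
```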

Biometric sensing is a security method for protecting information and property. State-of-the-art biometric traits are behavioral and physiological in nature. However, they are vulnerable to tampering and forgery.

The proposed approach uses blood flow sounds in the carotid artery as a source of biometric information. A handheld sensing device and an associated desktop application were built. Between 80 and 160 carotid recordings, each 11 s long, were acquired from each of seven individuals. Wavelet-based signal analysis was performed to assess the potential for biometric applications.
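
As a rough illustration of that wavelet analysis step, the sketch below computes a continuous wavelet transform of a synthetic carotid-sound-like signal and averages the magnitude spectra over fixed-length segments standing in for cardiac cycles. The sampling rate, wavelet choice and cycle segmentation are assumptions, not the authors' actual processing chain.

```python
# Minimal sketch: average CWT spectra over segments of a carotid-sound-like signal.
# Sampling rate, wavelet, scales, and the synthetic signal are illustrative assumptions.
import numpy as np
import pywt

fs = 4000                                    # assumed sampling rate in Hz
t = np.arange(0, 11, 1 / fs)                 # one 11-s recording
# Synthetic stand-in for a carotid recording: periodic bursts plus noise.
signal = np.sin(2 * np.pi * 40 * t) * (np.sin(2 * np.pi * 1.2 * t) > 0.9) \
         + 0.05 * np.random.default_rng(0).standard_normal(t.size)

scales = np.arange(1, 128)
coefs, freqs = pywt.cwt(signal, scales, "morl", sampling_period=1 / fs)

# Split into equal-length segments as a crude proxy for cardiac cycles,
# then average the |CWT| spectra to get one time-frequency "fingerprint".
cycle_len = int(fs / 1.2)                    # roughly one cycle at 72 bpm
n_cycles = coefs.shape[1] // cycle_len
segments = coefs[:, : n_cycles * cycle_len].reshape(len(scales), n_cycles, cycle_len)
fingerprint = np.abs(segments).mean(axis=1)  # shape: (scales, samples per cycle)
print(fingerprint.shape, freqs.min(), freqs.max())
```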

The acquired signals per individual proved to be consistent within one carotid sound recording and between multiple recordings spaced by several weeks. The averaged continuous wavelet transform spectra over all cardiac cycles of one recording showed specific spectral characteristics in the time-frequency domain, allowing for the discrimination of individuals and potentially serving as an individual fingerprint of the carotid sound. This information could clinically be used to obtain and highlight differences from a previously established personalized audio profile and could subsequently provide information on the source of the deviation as well as on its effects on the individual's health. The limited number of individuals and recordings requires a study in a larger population, along with an investigation of the long-term spectral stability of carotid sounds, to assess its potential as a biometric marker. Nevertheless, the approach opens the perspective for automatic feature extraction and classification.

Human activity recognition has been a key study topic in the development of cyber-physical systems and assisted living applications. In particular, inertial sensor-based systems have become increasingly popular because they do not restrict users' movement and are relatively simple to implement compared to other approaches. In this paper, we present a hierarchical classification framework based on wavelets and adaptive pooling for activity recognition and fall detection, including prediction of fall direction and severity. To accomplish this, windowed segments were extracted from each recording of inertial measurements from the SisFall dataset. A combination of wavelet-based feature extraction and adaptive pooling was applied before a classification framework determined the output class. Furthermore, tests were performed to find the best observation window size and sensor modality. Based on the experiments, the best window size was found to be 3 s and the best sensor modality a combination of accelerometer and gyroscope measurements. These were used to perform activity recognition and fall detection with a resulting weighted F1 score of 94.67%. This framework is novel in its approach to the human activity recognition and fall detection problem, providing a scheme that is computationally less intensive while producing promising results, and can therefore contribute to edge deployment of such systems.
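
The window-then-wavelet feature pipeline described in the preceding abstract can be sketched as follows. The 3 s window at an assumed 200 Hz sampling rate, the db4 wavelet, the fixed pooling size and the random-forest classifier are illustrative stand-ins; the paper itself uses a hierarchical classifier with adaptive pooling on SisFall recordings.

```python
# Minimal sketch: windowing + wavelet features + adaptive average pooling for HAR.
# Sampling rate, window length, wavelet, pooling size, and classifier are assumptions.
import numpy as np
import pywt
from sklearn.ensemble import RandomForestClassifier

FS = 200                   # assumed sampling rate (Hz)
WINDOW = 3 * FS            # 3 s observation window, as found best in the paper

def adaptive_avg_pool(vec, out_size=16):
    """Average-pool a 1-D vector down to a fixed length, whatever its input length."""
    bins = np.array_split(np.asarray(vec), out_size)
    return np.array([b.mean() for b in bins])

def window_features(window):
    """window: (WINDOW, channels) array of accelerometer + gyroscope samples."""
    feats = []
    for ch in range(window.shape[1]):
        coeffs = pywt.wavedec(window[:, ch], "db4", level=3)   # multi-level DWT
        for c in coeffs:
            feats.append(adaptive_avg_pool(np.abs(c)))         # fixed-size summary
    return np.concatenate(feats)

# Synthetic stand-in for SisFall-style recordings: 6 channels (3 acc + 3 gyro).
rng = np.random.default_rng(0)
X = np.stack([window_features(rng.standard_normal((WINDOW, 6))) for _ in range(200)])
y = rng.integers(0, 3, size=200)             # e.g. ADL / fall forward / fall backward

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print("train accuracy on synthetic data:", clf.score(X, y))
```
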
The research presented in this manuscript proposes a novel Harris Hawks optimization algorithm with a practical application: evolving convolutional neural network architectures to classify various grades of brain tumor from magnetic resonance imaging. The proposed improved Harris Hawks optimization method, which belongs to the group of swarm intelligence metaheuristics, improves the exploration and exploitation abilities of the basic algorithm by incorporating chaotic population initialization and local search, along with a replacement strategy based on the quasi-reflection-based learning procedure. The proposed method was first evaluated on 10 recent CEC2019 benchmarks, and the achieved results are compared with those generated by the basic algorithm, as well as with results of other state-of-the-art approaches tested under the same experimental conditions. In subsequent empirical research, the proposed method was adapted and applied to the practical challenge of evolving convolutional neural network architectures for brain tumor classification, where it can help in the early detection of brain tumors.

Chromatic dispersion (CD) engineering of photonic waveguides is of great importance for photonic integrated circuits in broad applications, including on-chip CD compensation, supercontinuum generation, Kerr-comb generation, microresonators and mode-locked lasers. The linear propagation behavior and nonlinear effects of a light wave can be manipulated by engineering CD, in order to control its temporal shape and frequency spectrum. Therefore, agile dispersion profiles, typically including wideband flat dispersion, are highly desired in various applications. In this study, we demonstrate a novel method for agile dispersion engineering of integrated photonic waveguides. Based on a horizontal double-slot structure, we obtained agile dispersion shapes, including broadband low dispersion, constant dispersion and slope-maintained linear dispersion. The proposed inverse design method is objective-driven and automation-supported. Dispersion in the range of 0-1.5 ps/(nm·km) over an 861-nm bandwidth has been achieved, which shows superior performance for broadband low dispersion. Numerical simulation of the Kerr frequency comb was carried out using the obtained dispersion shapes, and a comb spectrum with 1068-nm bandwidth and a 20-dB power variation was generated. Significant potential for integrated photonic design automation can be expected.

At present, people spend most of their time in a passive rather than an active mode. Sitting at a computer for long periods may lead to unhealthy conditions such as shoulder pain, numbness and headache. To mitigate this, human posture should be changed at regular intervals. This paper uses the inertial sensors built into a smartphone to monitor and address unhealthy human sitting behaviors (HSBs) of office workers. Six volunteers within the age band of 26 ± 3 years were considered, four male and two female. The sensor unit was attached to the rear upper trunk of the body, and a dataset was generated for five different activities performed by the subjects while sitting in an office chair. A correlation-based feature selection (CFS) technique and particle swarm optimization (PSO) are jointly used to select feature vectors. The optimized features are fed to supervised machine learning classifiers such as naive Bayes, SVM and KNN for recognition. The SVM classifier achieved 99.90% overall accuracy across the different sitting behaviors using accelerometer, gyroscope and magnetometer data.
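
The feature-selection-plus-classifier stage of the sitting-behavior study can be sketched as follows. The simple correlation filter shown here is only a stand-in for CFS, the PSO search is omitted, and the synthetic features, threshold and SVM settings are assumptions rather than the paper's configuration.

```python
# Minimal sketch: correlation-based feature filtering followed by an SVM classifier.
# Window statistics, the correlation threshold, and SVM settings are assumed values;
# the PSO search used in the paper is not reproduced here.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_windows, n_feats = 300, 40                  # e.g. mean/std/energy per axis per sensor
X = rng.standard_normal((n_windows, n_feats))
y = rng.integers(0, 5, size=n_windows)        # five sitting activities

# Correlation-based filter: keep features whose |correlation| with the class
# label exceeds a threshold (a crude stand-in for CFS merit ranking).
corr = np.array([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(n_feats)])
selected = np.where(corr > np.median(corr))[0]   # keep the better-correlated half

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
scores = cross_val_score(clf, X[:, selected], y, cv=5)
print("selected features:", selected.size, "cv accuracy:", scores.mean())
```
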
Real-time and accurate longitudinal rip detection for a conveyor belt is crucial for the safety and efficiency of an industrial haulage system. However, existing longitudinal detection methods have drawbacks and often raise false alarms caused by tiny scratches on the belt surface. To solve this issue, a method of identifying longitudinal rips through three-dimensional (3D) point cloud processing is proposed. Specifically, the spatial point data of the belt surface are acquired by a binocular line-laser stereo vision camera. Within these data, suspected points induced by rips and scratches are extracted. Subsequently, a clustering and discrimination mechanism is employed to distinguish rips from scratches, and only the rip information is used as the alarm criterion. Finally, the direction and maximum width of the rip are characterized in 3D space using the principal component analysis (PCA) method. The method was tested in practical experiments, and the results indicate that it can identify longitudinal rips accurately in real time while simultaneously characterizing them. Applying this method can therefore provide a more effective and appropriate solution for identifying longitudinal rips and similar defects.
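
The final PCA characterization step can be illustrated with a small sketch: given the 3D points assigned to one rip cluster, the first principal component gives the rip direction and the spread along the second component gives a rough estimate of its maximum width. The synthetic point cloud and the width estimate below are illustrative assumptions, not the paper's data or exact procedure.

```python
# Minimal sketch: characterize a rip cluster's direction and maximum width with PCA.
# The synthetic point cloud is a placeholder for points segmented from the belt surface.
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical rip cluster: elongated along a direction in the belt plane (units: mm).
length_axis = rng.uniform(-200, 200, size=400)
points = np.column_stack([
    length_axis * 0.94,                              # x along the belt
    length_axis * 0.34 + rng.normal(0, 2.5, 400),    # y, with rip-width scatter
    rng.normal(0, 0.8, 400),                         # z depth variation
])

centered = points - points.mean(axis=0)
# PCA via SVD: rows of vt are principal directions sorted by explained variance.
_, _, vt = np.linalg.svd(centered, full_matrices=False)
rip_direction = vt[0]                       # unit vector along the rip
widths = centered @ vt[1]                   # projection onto the second component
max_width = widths.max() - widths.min()     # extent across the rip

print("rip direction:", np.round(rip_direction, 3))
print("estimated maximum width (mm):", round(float(max_width), 1))
```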
