BACKGROUND Elevated temperature resulting from global climate warming, either in the form of a sudden heatwave (heat shock) or of prolonged warming, has profound effects on the growth and development of plants. However, how plants respond differentially to these two forms of elevated temperature is largely unknown. Here we have therefore performed a comprehensive comparison of the multi-level responses of Arabidopsis leaves to heat shock and prolonged warming. RESULTS The plants responded to prolonged warming through decreased stomatal conductance, and to heat shock through increased transpiration. In carbon metabolism, the glycolysis pathway was enhanced while the tricarboxylic acid (TCA) cycle was inhibited under prolonged warming, whereas heat shock significantly limited the conversion of pyruvate into acetyl coenzyme A. The cellular concentration of hydrogen peroxide (H2O2) and the activities of antioxidant enzymes increased under both conditions but showed stronger induction under heat shock. Interestingly, the transcription factors class A1 heat shock factors (HSFA1s) and dehydration-responsive element-binding proteins (DREBs) were up-regulated under heat shock, whereas under prolonged warming other abiotic stress response pathways, especially basic leucine zipper factors (bZIPs), were up-regulated instead. CONCLUSIONS Our findings reveal that Arabidopsis exhibits different response patterns under heat shock versus prolonged warming, and that plants employ distinctly different strategies to combat these two types of thermal stress.

BACKGROUND Multidrug-resistant tuberculosis (MDR-TB) remains a serious public health problem with poor treatment outcomes. Predictors of poor outcomes vary across regions. Vietnam is among the 30 countries with the highest burden of MDR-TB. We describe demographic characteristics and identify risk factors for poor outcome among patients with MDR-TB in Ho Chi Minh City (HCMC), the most populous city in Vietnam.
METHODS This retrospective study included 2266 patients who initiated MDR-TB treatment between 2011 and 2015 in HCMC. Treatment outcomes were available for 2240 patients. Data were collected from standardized paper-based treatment cards and electronic records. A Kruskal-Wallis test was used to assess changes in median age and body mass index (BMI) over time, and a Wilcoxon test was used to compare the median BMI of patients with and without diabetes mellitus. A chi-squared test was used to compare categorical variables. Multivariate logistic regression with multiple imputation for missing data was used to identify risk factors for poor outcomes. Statistical analysis was performed in R. RESULTS Among the 2266 eligible cases, 60.2% had failed on a category I or II treatment regimen, 57.7% were underweight, 30.2% had diabetes mellitus and 9.6% were HIV positive. The notification rate increased by 24.7% from 2011 to 2015. The treatment success rate was 73.3%. Risk factors for poor treatment outcome included HIV co-infection (adjusted odds ratio (aOR) 2.94), advanced age (aOR 1.45 for every increase of 5 years for patients 60 years or older), a history of previous MDR-TB treatment (aOR 5.53), sputum smear grade scanty or 1+ (aOR 1.47), smear grade 2+ or 3+ (aOR 2.06), and low BMI (aOR 0.83 for every increase of 1 kg/m2 in BMI for patients with BMI less than 21). CONCLUSION The number of patients diagnosed with MDR-TB in HCMC increased by almost a quarter between 2011 and 2015. Patients with HIV co-infection, a high smear grade, malnutrition or a history of previous MDR-TB treatment are at greatest risk of poor treatment outcome.

BACKGROUND Infectious meningitis is a serious disease, and patient outcome relies on fast and reliable diagnostics. A syndromic panel testing approach such as the FilmArray ME can accelerate diagnosis and therefore decrease the time to pathogen-specific therapy.
Yet its clinical utility is controversial, mainly because of remaining uncertainty in the correct interpretation of results, limited data on its performance on clinical specimens, and its relatively high cost. The aim of this study was to analyze the clinical performance of the assay in a real-life setting at a tertiary university hospital, using a pragmatic and simple sample selection strategy to reduce the overall cost burden. METHODS Over a period of 18 months we received 4623 CSF samples (2338 hospitalizations, 1601 individuals). FilmArray ME analysis was restricted to CSF samples with a high pretest probability of infectious meningitis, e.g. a positive Gram stain, samples in which leukocytes and/or bacteria were evident, or in which urgent suspicion of infection was [… text missing in source …]ced for antimicrobial susceptibility testing, the use of molecular tests as a stand-alone diagnostic cannot be recommended.

BACKGROUND Identifying immunogens that induce HIV-1-specific immune responses is a lengthy process that can benefit from computational methods, which predict T-cell epitopes for various HLA types. METHODS We tested the performance of the NetMHCpan4.0 computational neural network in re-identifying 93 T-cell epitopes that had previously been independently mapped using whole-proteome IFN-γ ELISPOT assays in 6 HLA class I typed Ugandan individuals infected with HIV-1 subtypes A1 and D. To provide a benchmark, we compared the predictions of NetMHCpan4.0 to MHCflurry1.2.0 and NetCTL1.2. RESULTS NetMHCpan4.0 performed best, correctly predicting 88 of the 93 experimentally mapped epitopes for a set peptide length of 9 and matched HLA class I alleles. Receiver Operating Characteristic (ROC) analysis gave an area under the curve (AUC) of 0.928. Setting NetMHCpan4.0 to predict 11-14-mer lengths did not improve the prediction (37-79 of 93 peptides), with an inverse correlation between the number of predictions and the length set.
Late time point peptides were significantly stronger binders than early peptides (Wilcoxon signed-rank test, p = 0.0000005). MHCflurry1.2.0 similarly predicted all but 2 of the peptides that NetMHCpan4.0 predicted, and NetCTL1.2 predicted only 14 of the 93 experimental peptides. CONCLUSION NetMHCpan4.0 class I epitope predictions covered 95% of the epitope responses identified in six HIV-1 infected individuals, and would have reduced the number of experimental confirmatory tests by more than 80%. Algorithmic epitope prediction in conjunction with HLA allele frequency information can cost-effectively assist immunogen design by minimizing the experimental effort.
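The ROC analysis used to benchmark the epitope predictors above can be sketched as follows. This is a minimal illustration with invented toy scores and labels (nothing here is data from the study); it computes the ROC AUC via its Mann-Whitney interpretation: the probability that a randomly chosen experimentally confirmed epitope receives a higher predicted binding score than a randomly chosen non-epitope, with ties counted as one half.

```python
def roc_auc(scores, labels):
    """ROC AUC via the Mann-Whitney U statistic: the fraction of
    (positive, negative) pairs in which the positive outscores the
    negative, counting score ties as half a win."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    if not pos or not neg:
        raise ValueError("need both positive and negative examples")
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos) * len(neg))

# Toy example: hypothetical predicted binding strengths for six peptides
# (higher = stronger predicted binder); 1 marks a confirmed epitope.
scores = [0.9, 0.8, 0.75, 0.4, 0.3, 0.2]
labels = [1,   1,   0,    1,   0,   0]
print(round(roc_auc(scores, labels), 3))  # → 0.889
```

An AUC of 1.0 would mean every confirmed epitope outranks every non-epitope; 0.5 is chance-level ranking. The study's reported AUC of 0.928 for NetMHCpan4.0 sits close to the former.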