We present two online experiments investigating trust in artificial intelligence (AI) as a primary and secondary medical diagnosis tool, and one experiment testing two methods of increasing trust in AI. Participants in Experiment 1 read hypothetical scenarios involving low- and high-risk diseases, followed by two sequential diagnoses, and estimated their trust in the medical findings. In three between-participants groups, the first and second diagnoses were given by human and AI, AI and human, and human and human doctors, respectively. In Experiment 2 we examined whether people expect higher standards of performance from AI than from human doctors in order to trust AI treatment recommendations. In Experiment 3 we investigated whether trust in AI diagnoses could be increased by (i) informing participants that the AI outperforms the human doctor, and (ii) nudging them to prefer AI diagnoses in a choice between AI and human doctors. Results indicate lower trust in AI overall, as well as lower trust in diagnoses of high-risk diseases. Participants trusted AI doctors less than human doctors for first diagnoses, and they were also less likely to trust a second opinion from an AI doctor for high-risk diseases. Surprisingly, the results show that people hold comparable standards of performance for AI and human doctors, and that trust in AI does not increase when people are told the AI outperforms the human doctor. Importantly, we find that the gap in trust between AI and human diagnoses is eliminated when people are nudged to select AI in a free-choice paradigm between human and AI diagnoses, with trust in AI diagnoses significantly increased when participants could choose their doctor. These findings identify control over the choice of one's medical practitioner as a promising factor for building trust in medical diagnoses, and highlight a potential path to smoother acceptance of AI diagnoses among patients.

Success at an Olympic level can come down to the smallest of margins.
However, little research has been conducted into how the menstrual cycle affects elite athletes' performance and decision making. This study uses a combination of quantitative and qualitative research methods to explore this question. Physiological performance data were collected from eight elite athletes over 7 months and analyzed as a function of menstrual phase. The Cambridge Gambling Task (CGT) was used to test decision making; testing occurred twice in one cycle, during the early follicular phase and during the mid-luteal phase. Menstrual cycle phase was determined using menstrual cycle mapping and urine ovulation tests. In the qualitative part of this project, two elite athletes, two Olympic-level athletes, and two coaches participated in semi-structured interviews. The study found that physiological performance was significantly better during the menses phase (MP) compared to the proliferative and secretory phases (PSP). However, there was individual variation in how elite athletes were affected. Oral contraceptive users showed a greater performance change from MP to PSP, suggesting that oral contraceptives may be detrimental to performance in some athletes. The results of the CGT showed that impulsivity is significantly affected by menstrual cycle phase; risk taking, error rates, and response times were not affected. The qualitative interviews revealed that elite athletes and their coaches understand little about the menstrual cycle. Despite this, there are preconceptions that it negatively affects performance during the menses phase.
The findings suggest that the menstrual cycle can have a significant effect on an elite athlete's performance, and this paper discusses how individuals may improve aspects of physiological and psychological performance by understanding and monitoring their menstrual patterns.

A major pain for researchers in all fields is that they have less and less time for actual science activities: reading, thinking, coming up with new theories and hypotheses, testing, analyzing data, and writing. In psychology, three of the most time-consuming non-science activities are learning how to program an experiment, recruiting participants, and preparing teaching materials. Testable (www.testable.org) provides a suite of academic tools to speed things up considerably. The Testable software allows the development of most psychology experiments in minutes, using a natural-language form and a spreadsheet. Furthermore, any experiment can be easily converted into a social experiment in Testable Arena, with multiple participants interacting and viewing each other's responses. Experiments can then be published to Testable Library, a public repository for demonstration and sharing purposes. Participants can be recruited from Testable Minds, the subject pool with the most advanced participant-verification system. Testable Minds employs multiple checks (such as face authentication) to ensure participants have accurate demographics (age, sex, location) and are human, unique, and reliable. Finally, the Testable Class module can be used to teach psychology through experiments. It features over 50 ready-made classic psychology experiments, fully customizable, which instructors can add to their classes alongside their own experiments. These experiments can then be made available to students to complete, import, modify, and use to collect data as part of their class.
These Testable tools, backed by a strong team of academic advisors and thousands of users, can save psychology researchers and other behavioral scientists valuable time for science.

The possibility that improved inhibitory control in older adults is associated with engagement in the non-contact sporting activity Tai Chi was investigated. Three groups of participants were compared: a group who regularly took part in Tai Chi (TC), a regularly exercising (RE) group, and a sedentary group (SG). Concurrent electroencephalographic recordings were obtained while participants performed a stop-signal inhibitory control task, in which speeded responses are required on most trials but must occasionally be withheld when a 'stop signal' is displayed. The electrophysiological components P3, broadly related to decision making, and Pe, related to error monitoring, were analyzed. Both exercise groups performed better on the stop-signal task for the measure indicative of inhibitory control, and were generally better on other indices of performance. No significant effects were seen for post-error slowing. Electrophysiological differences were seen for the TC group, with a significantly larger P3 component related to the stop signal and a larger Pe component when errors were made.