Making AI more 'explainable' in health-care settings may lead to more mistakes: U of T researcher

By Benjamin Wald
October 16, 2020

Research by U of T's Marzyeh Ghassemi suggests that physicians may "over-trust" AI tools when steps are taken to make the algorithms' decisions more transparent, leading to more errors (photo by Nelson Almeida/AFP via Getty Images)

Machine learning algorithms have the potential to deliver huge benefits in health care, in some cases producing more reliable diagnoses than human doctors.
Yet many of us are reluctant to entrust our health to an algorithm, especially when even its designers can't explain exactly how it arrived at its conclusion.

This has led to a push for more transparency in AI design so that these powerful tools can explain how they came to their decisions, an idea referred to as "explainable AI."

But the University of Toronto's Marzyeh Ghassemi, an assistant professor in the Temerty Faculty of Medicine and in the department of computer science in the Faculty of Arts & Science, says that explainable AI may actually make matters worse. She points to her own research suggesting that explainable AI is perceived as more trustworthy even when it is less accurate.

"Humans tend to over-trust AI systems in two specific scenarios," says Ghassemi, who specializes in machine learning for health and is a faculty affiliate at U of T's Schwartz Reisman Institute for Technology and Society.

"Number one is when they believe the machine can perform a function they cannot, and number two is when there's an expectation that the machine might mitigate some risks. Both of these are very true in many health settings, but particularly in emergency and intensive health settings."

Providing explanations of algorithmic medical recommendations could therefore result in doctors over-trusting, and hence over-relying on, these recommendations even when they are mistaken, Ghassemi says.

Her research, which she presented at the Schwartz Reisman weekly seminar on Sept. 30, looks at how both experts and non-experts make use of medical advice they believe to be machine-generated. Experts tend to rate advice they are told comes from an AI as less reliable than advice from a human, whereas non-experts tend to trust both sources equally.

However, experts' lack of trust turns out not to make much of a difference in how much they are influenced by this advice, or in the accuracy of their resulting recommendations or actions. The experts who were suspicious of the AI advice were just as likely to base their final decisions on it as the novices who rated the advice as more trustworthy.

Ghassemi's work touches on several of the four conversations that guide the work of the Schwartz Reisman Institute. It asks how AI systems can produce the maximum benefit for humanity, but also how these systems can be designed fairly so that disadvantaged communities do not bear an unfair degree of risk when the systems fail, and how they might be used to combat the bias that already exists in the medical system.

Her work, and the work of other Schwartz Reisman researchers, goes beyond the direct consequences of new technology to explore how it is integrated into existing communities, and how it might transform those communities for better or worse.

In the context of Ghassemi's work, if AI advice is unreliable, even experienced clinicians can be misled into providing the wrong diagnosis.
This becomes especially problematic when AI tools are trained on data gathered under certain conditions, and those conditions then change.

"We don't know what happens when, for example, a new disease comes up or a pandemic occurs," says Ghassemi, "and suddenly [the AI is] giving the wrong predictions because we haven't updated our models fast enough."

Transparency might seem to be a solution to this problem, giving us the ability to know when to trust an AI and when to doubt it. But in fact, Ghassemi suggests the opposite is true. She highlights a study called "Manipulating and Measuring Model Interpretability," which showed that people were actually less likely to catch obvious errors made by a transparent model (one that gave a clear weighting formula to justify its decision) than to catch the same errors made by a "black-box" model (one that gave no explanation for how its decision was reached).

Ghassemi argues that we do need a certain kind of transparency, but not the kind traditionally championed by proponents of explainable AI. Instead, doctors need to know when the model is likely to be wrong, perhaps because of the population it was trained on, or because of limitations in the training data.

Another argument in favour of transparency is that it can reduce bias in decision-making. Machine learning can end up encoding biases against vulnerable populations. However, bias is clearly already part of the medical landscape even without the use of advanced technologies. As Ghassemi puts it: "Doctors are human, and humans are biased." Machine learning models can inherit this bias from the data they are trained on. Knowing how a model works will not automatically catch this kind of bias; doing so requires an understanding of both how machine learning models work and what biases are present in the clinical data.

Ghassemi argues that we need aggressive audits of machine learning models to identify potential bias, and that this is where transparency can serve a valuable function.

The overall takeaway from Ghassemi's work is that we need to treat all advice, including AI-based advice, with appropriate suspicion, because advice that seems transparent can cause overconfidence.

Instead, AI explanations can and should be used to audit and monitor these systems to catch their flaws and biases, she says.
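To make the idea of such an audit concrete, here is a minimal sketch in Python of what checking a model's error rate across patient subgroups could look like. Everything in it is an illustrative assumption: the data are synthetic, the subgroup labels are invented, and the use of scikit-learn's LogisticRegression is a stand-in for whatever model a hospital might actually deploy; none of it is drawn from Ghassemi's research.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Synthetic stand-in for clinical data: two features, a binary outcome,
    # and a subgroup label (e.g. two patient populations).
    n = 2000
    X = rng.normal(size=(n, 2))
    group = rng.integers(0, 2, size=n)
    y = (X[:, 0] + 0.5 * group * X[:, 1] + rng.normal(scale=0.5, size=n) > 0).astype(int)

    model = LogisticRegression().fit(X, y)

    # The audit step: report the error rate separately for each subgroup,
    # rather than trusting a single overall accuracy figure.
    for g in (0, 1):
        mask = group == g
        error_rate = 1 - model.score(X[mask], y[mask])
        print(f"group {g}: error rate {error_rate:.1%} on {mask.sum()} patients")

A large gap between the two error rates is the kind of flaw such an audit is meant to surface before a model's advice ever reaches a clinician.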