Researchers Discuss Implications of Using Artificial Intelligence to Treat Mental Health Issues in Harvard Law School Webinar

Harvard Law School’s Petrie-Flom Center for Health Law Policy, Biotechnology, and Bioethics hosted research coauthors Piers M. Gooding and Lydia XZ Brown in a discussion on the ethics of artificial intelligence in the treatment of mental illness during a webinar Wednesday night.

Gooding and Brown, co-authors of the 2022 report Digital Futures in Mind: Reflecting on Technological Experiments in Mental Health and Crisis Support, were joined by experts in artificial intelligence in medicine and mental health, including Carlos A. Larrauri, a psychiatric and public health practitioner and board member of the National Alliance on Mental Illness; Rhonda Moore, a program director at the National Institutes of Health; and Sara Gerke, an assistant professor at Penn State Dickinson Law.

Brown, an associate professor and core faculty member of the Disability Studies Program at Georgetown University, opened the webinar by situating technological approaches to mental health, such as social media surveillance, within the context of systemic inadequacies in care.

Brown’s conversations with community members on social media revealed fears about the dangers of sharing information about mental health issues online.

“This fear is largely driven by a particular concern that data could be shared not only with the company providing a social media platform, but also with local law enforcement in dangerous, and sometimes fatal, attempts to intervene in an individual’s mental health crisis using a carceral measure,” they said.

Gooding, a research fellow at Melbourne Law School, described the research group’s collaborative approach and the central involvement of people who “had drawn on lived experience of engagement with psychiatric services or with mental illness.”


In response to the argument that regulatory and legal frameworks could stifle technological development, Gooding said, “We came from the perspective that regulation is more about protecting people’s rights, both individually and collectively.”

After Brown and Gooding discussed the findings presented in their paper, the panelists individually shared their thoughts on the effectiveness and impact of using technology to treat mental health.

Larrauri started by sharing his personal journey with mental health issues. He discussed the implications of these experiences in shaping his position as an advocate for artificial intelligence in mental health care, particularly in early intervention.

“We need to push for a patient-centric approach based on robust, ethical principles,” Larrauri said.

Following Larrauri’s remarks, Gerke provided an overview of the potential benefits and limitations of using innovative technologies to treat mental health problems, weighing the widespread accessibility and objectivity of digital resources, and the growing need for healthcare services, against the privacy concerns associated with artificial intelligence developed for these purposes.

“In summary, while AI mental health apps and chatbots hold promise, they also pose several ethical as well as legal challenges that we should address before launching them uncontrolled and potentially harming patients,” she said.

Moore spoke next about the report’s underreporting of socioeconomic disparities exacerbated by AI in what she termed the Global South and Global North. Looking to the future, she emphasized the importance of ethnographic exploration in the Global South in “postcolonial computing, decolonial computing, and data extractivism.”

In a lively discussion following the panel, Brown addressed how to set ethical boundaries in AI-driven mental health initiatives.


“It is impossible to separate use cases from social, cultural, and political structures and realities,” they said, pointing to the larger context of existing systemic problems.

In an interview after the event, Gooding said he hopes the research “helps clear the fog of hype and encourages a sober and clear public discussion about the possibilities and dangers of data-driven and algorithmic technology in the context of mental health.”