Legal, Ethical, and Wider Implications of Suicide Risk Detection Systems in Social Media Platforms

Karen L. Celedonia*, Marcelo Corrales Compagnucci, Timo Minssen, Michael Lowery Wilson

*Corresponding author of this work

Publication: Contribution to journal › Journal article › Research › peer review

7 Citations (Scopus)
42 Downloads (Pure)

Abstract

Suicide remains a problem of public health importance worldwide. Cognizant of the emerging links between social media use and suicide, social media platforms, such as Facebook, have developed automated algorithms to detect suicidal behavior. While seemingly a well-intentioned adjunct to public health, this approach raises several ethical and legal concerns. For example, the role of consent to use individual data in this manner has only been given cursory attention. Social media users may not even be aware that their social media posts, movements, and Internet searches are being analyzed by non-health professionals, who have the decision-making ability to involve law enforcement upon suspicion of potential self-harm. Failure to obtain such consent presents privacy risks and can lead to exposure and wider potential harms. We argue that Facebook’s practices in this area should be subject to well-established protocols. These should resemble those utilized in the field of human subjects research, which upholds standardized, agreed-upon, and well-recognized ethical practices based on generations of precedent. Prior to collecting sensitive data from social media users, an ethical review process should be carried out. The fiduciary framework seems to resonate with the emergent roles and obligations of social media platforms to accept more responsibility for the content being shared.
Original language: English
Journal: Journal of Law and the Biosciences
Volume: 8
Issue number: 1
Number of pages: 11
ISSN: 2053-9711
DOI
Status: Published - 15 Jul 2021
