
Minority Report: Suicide

06/29/2017

Seeing this article claiming that artificial intelligence can predict suicide reminded me of Spielberg's film, in which mutant "precogs" predict murders, and led me to ponder how I might rewrite the script, substituting suicide for murder.

In the film, foiled murderers wind up imprisoned indefinitely in virtual reality, presumably based, at least in part, on the assumption that if released they would go on to fulfill the prediction. The motivation to predict suicide now, however, stems from the desire to induce the individual to stop wanting to die, especially by treating some identified mental illness assumed to have produced the suicidal intent.

In my script, I would want my potential suicides freed, but maybe an implanted virtual reality chip could render them unable to choose to die by their own hands. Of course, that might leave them in a state of unbearable suffering, so instead I might choose for them a virtual reality free of suffering.

What of those who kill themselves not to escape suffering but to avoid anticipated loss of control, as in the case of a terminal illness? My script could provide for elimination of disease and death, but that might bore the audience. 

What of those who simply decide they have lived long enough? Will the community or government with this new power no longer allow citizens to make such a choice?

The original movie assumes the ability to change the future. Can we do that now in real life? Can we really prevent some suicides, or do we only postpone a few of them? The original also exposes the possibility of malicious use of the precogs' predictive power. Maybe I would incorporate into the plot a bad guy/girl who uses the power for evil.

In the original Minority Report, prediction depends on intent, and premeditation gives the cops more lead time to intervene effectively. How might that work with an impulsive suicide?

The original film fails to disclose how the precogs limit their predictions to the jurisdiction in question, Washington, D.C. Beyond that, presumably, everyone escapes detection. But in my film, would I want to make everyone everywhere subject to prediction of suicide? Perhaps universal monitoring of digital communication, even beyond social media, would make for a plausible plot. In real life, of course, we apply our rough risk assessment tools almost exclusively to those with diagnosed or suspected mental disorders. In fact, we often operate on the (false) assumption that suicide implies mental illness, hence the stigma attached to suicide.

Somewhere in my plot, the suicide prediction bot identifies a character at risk. Free of thoughts of killing herself and minding her own business at home, she hears the doorbell (or her app) ring and opens the door; a group of men in white coats places a virtual reality tiara on her head before whisking her away for anti-suicide processing.

What would you like to see in this sci-fi blockbuster?