In the “Star Trek: The Next Generation” episode “Devil’s Due,” the Enterprise’s resident android Data serves as an arbitrator between Captain Jean-Luc Picard and a con artist because, as an android, Data knows no bias or favoritism. The audience can go along with this logic because they know Data is a good person (he successfully argued for his personhood before a judge in an earlier episode). Besides, “Star Trek” is a work of fiction. But when a real AI gets involved in a real trial, things get scary.
In March 2023, Judge Anoop Chitkara of the High Court of Punjab and Haryana, India, was presiding over a case involving Jaswinder Singh, who had been arrested in 2020 as a murder suspect. Judge Chitkara could not decide whether bail should be granted, so he asked ChatGPT. The AI promptly rejected bail, claiming that Singh would be “considered a danger to the community and a flight risk” given the charges, and that in such cases bail is usually set high or denied “to ensure that the defendant appears in court and does not pose a risk to public safety.”
To be fair, ChatGPT’s ruling is not unreasonable, probably because we know so little about the case, but it is unsettling because the program only processes text and lacks analysis and critical thinking skills. Moreover, this incident sets a precedent. Since Judge Chitkara took the AI’s advice, who is to say other public officials won’t do the same in the future? Will we one day rely on ChatGPT or another AI to hand down rulings instead of flesh-and-blood judges? The mere thought is enough to send shivers down your spine.