
AI and Future Court Cases

Charlotte Figler, 27

AI is an up-and-coming technology with the ability to simulate human comprehension, learning, creativity, and problem-solving, and it can act independently, reducing the need for human input (“What Is Artificial Intelligence (AI)?”). AI has been used for good, such as helping detect animal language, but it has also appeared in negative circumstances, such as generating false photos of people (“Eight good news stories about AI”). The recent debate is whether defamation and libel laws are sufficient to address false AI-generated content. Defamation is a false statement made by someone that harms another person's reputation; libel is similar, but refers to defamation in written form. Due to new and advanced technology, current defamation and libel laws are not enough to defend against AI cases.


The current defamation laws are as follows. The plaintiff must prove that the defendant published a statement about the plaintiff. The plaintiff then has to prove that the statement was false and that it was made in order to cause harm. This is a very important part of the law, because to show a statement was defamatory, the plaintiff must prove it was made with the intent to harm. This also applies to celebrities, who must prove the statement was not mere opinion but false information.


One precedent that could apply is the case of New York Times Co. v. Sullivan. The case arose in 1960, after the newspaper published an ad containing inaccuracies, and it raised the question of what is allowed in political advertising. A police commissioner claimed he had been libeled in the ad and sued over the statements he said were false. The outcome was that statements about public officials are protected by the First Amendment unless they are made with "actual malice," meaning knowledge of their falsity or reckless disregard for the truth. After this case, the freedom of the press remained protected under the First Amendment, as a public-figure plaintiff has to prove the speaker acted with actual malice. If a plaintiff could prove that the company that made the AI acted with such intent, the person could sue the company that created it. Since it is very difficult to prove that an AI itself spoke with actual malice, it may be easier to argue that the company created a defective AI that produced and spread false information because of the way it was built.


I believe that these laws are not up-to-date enough to deal with new AI claims. There are too many loopholes and not enough specific laws addressing claims made by people defamed by AI. As time goes on, the laws will likely become more specific to these claims. There is still some human error to account for, from the creators who build the AI programs to the people who use them. Much of this remains speculative, because the program itself does most of the work. People who publish using AI, or post anything from their accounts using AI, still have to be careful. Nowadays, people, businesses, newspapers, and others must be careful about whom they give credit to and what information they put out about their brand.
