In its most-watched documentary, Netflix uses artificial intelligence to create content that never existed, and it has opened an ethical debate

The documentary "What Jennifer Did" presented images as real that never actually existed

"What Jennifer Did" is currently the most-watched documentary on Netflix, competing in the top ten with productions like "The Batman" and the current number one film, Aitana's "Pared con pared." However, a fleeting moment in its footage calls the entire documentary into question, undermining its solid true-crime approach.

The film tells the true story of Jennifer, a young student whose parents were shot when intruders entered their home. She appears devastated by the event, but details that don't fit her statement soon begin to emerge. We eventually learn the story of a daughter who, so as not to disappoint her demanding parents, invented an entire successful academic career through high school and college. The film is based on a real case that has not been fully closed, as there are still pending appeals and many doubts about how the events actually unfolded.

Beyond these discussions about the true-crime genre, however, another problem has emerged: a few days after the premiere, the website Futurism discovered that the film uses a pair of AI-generated images as if they were documentary evidence. The images in question show "all the hallmarks of an AI-generated image: mangled hands and fingers, distorted facial features, altered objects in the background, and unusually long front teeth."

This opens up a very important discussion about the use of generative AI. If the tool already raises ethical debates when used in fiction (from how it takes work away from qualified professionals to the origin of the material it is trained on), in documentaries the questions multiply, because the viewer trusts that the material presented supports a truthful account of events.


The second of the documentary's AI-generated images

This, of course, has all kinds of implications. Documentaries sometimes dramatize events when they have no archival material to fall back on, or simply as an aesthetic or dramatic device; re-enactments of crimes, for example, are very common in television docuseries. But in those cases the visual language changes so that the viewer instinctively knows they are watching a staged scene. The problem with the images Netflix shows is that at no point does it mention that they are fabricated photos, yet they are used to make the viewer sympathize with the protagonist, as supposed evidence that she was an "ordinary girl."

Rachel Antell, co-founder of the Archival Producers Alliance and an expert on the ethical use of AI imagery in documentaries, told 404 Media that even if it is disclosed that [a piece of material] was created by artificial intelligence, it can escape the documentary, spread across the Internet and into other films, and become part of the historical record forever. She recommends a series of measures to make sure the viewer is aware: "We encourage people to be transparent about its use and, in some cases, where appropriate, to get consent to recreate things that didn't necessarily happen."

Image | Netflix

In Xataka | AI Pin has reached its first users, and their conclusions are not encouraging at all.

