Commons:Deletion requests/Files in Category:Police arresting Donald Trump (Midjourney)

From Wikimedia Commons, the free media repository
This deletion discussion is now closed. Please do not make any edits to this archive. You can read the deletion policy or ask a question at the Village pump. If the circumstances surrounding this file have changed in a notable manner, you may re-nominate this file or ask for it to be undeleted.

OOS AI-generated fake fan art of Donald Trump supposedly getting arrested. At least the descriptions say the images are fake, but they still aren't educational regardless. One image is apparently being used in a Wikipedia article about his court cases, but really, so what? There's no reason it should be hosted on Commons since it's clearly fake.

Adamant1 (talk) 07:09, 21 January 2024 (UTC)[reply]

  •  Strong keep. This deletionism is getting out of control. 1) It is COM:INUSE. This alone should be a reason NOT TO NOMINATE them for deletion unless there were COM:COPYVIO concerns (Such a file is not liable to deletion simply because it may be of poor quality: if it is in use, that is enough.) 2) It is fake (who would imagine images tagged with {{PD-algorithm}} were fake? Wow!), but still a clearly notable fake: [1], with almost SEVEN MILLION views on Twitter and covered by several RELIABLE newspapers: [2], [3], [4], [5], [6], [7], also mentioned encyclopedically here: Wikipedia:Deepfake#Donald Trump, etc. Careless and clearly abusive nomination that shows the nominator, who apparently has something against AI-generated content, did not even try to research it. And it is false that one image is apparently being used. Four of them (60%) actually are, 13 times, in seven different Wikipedias and in two Wikidata items. RodRabelo7 (talk) 17:38, 21 January 2024 (UTC)[reply]
Setting aside the fact that a few of these got some views on Twitter, since I've addressed that below this comment, I don't think it's necessarily useful or educational to include fake images of celebrities in Wikipedia, let alone to host them on Commons, regardless of whether said image is being used in a Wikipedia article or not. Maybe you could justify it if there was some controversy over the image that had an effect on the celebrity in question's life or something, but no one thought these images were real and they have had zero effect on Trump as a person whatsoever. There's nothing inherently educational (and that's the standard here BTW, not notability) about random AI-generated images of celebrities in fantasy settings or made-up situations. Apparently the guidelines only apply to keeping images and not deleting them though. --Adamant1 (talk) 18:15, 21 January 2024 (UTC)[reply]
Yes, they are educational. They portray Donald Trump, Midjourney, and deepfakes. Even Trump himself (!!!!!) shared one of them. RodRabelo7 (talk) 18:32, 21 January 2024 (UTC)[reply]
Except they aren't "deepfakes" and at least one of the images (File:Trump’s arrest (6).jpg) is clearly only being used in an article related to Donald Trump as a way to troll people. --Adamant1 (talk) 18:36, 21 January 2024 (UTC)[reply]
I have explained that below. It is perfectly contextualized and sourced in the article, not that this is a necessary condition to keep the file here. You apparently did not even read the article you mentioned. Please consider withdrawing this nomination. RodRabelo7 (talk) 18:46, 21 January 2024 (UTC)[reply]
 Keep Very notable examples of AI generated images and deepfakes, mentioned by the press. Obviously on scope. Darwin Ahoy! 17:44, 21 January 2024 (UTC)[reply]
@DarwIn: (and this goes to RodRabelo7 also) I thought the standard here was "educational value", not "notability." Otherwise where exactly does Commons:Project scope say images that have had some arbitrary number of views on Twitter or been featured in news stories are inherently educational? And what exactly are these images educating anyone about? It's not like they are actually "deepfakes" to begin with anyway since everyone knows they are fake. Otherwise maybe you could argue they are good examples of people being fooled by AI, but no one thinks the images are genuine. All the news stories seem to be about how Midjourney can make fake images of celebrities, which everyone knows and has nothing to do with these particular images. Or should we just host every AI-generated image of a celebrity that gets views on Twitter or that someone includes in a news article about AI artwork? --Adamant1 (talk) 18:06, 21 January 2024 (UTC)[reply]
If it is COM:INUSE, it has educational value. Have you read the page? A media file that is in use on one of the other projects of the Wikimedia Foundation is considered automatically to be useful for an educational purpose. And next time please ping me. RodRabelo7 (talk) 18:08, 21 January 2024 (UTC)[reply]
@RodRabelo7: Admittedly they contradict each other, but at least Commons:Project scope says "any use that is not made in good faith does not count" and "a file that is used in good faith on a Wikimedia project is always considered educational." Personally, I'd argue these images aren't being used on other projects in good faith since at least one of the images, File:Midjourney Photo of Trump's Arrest.jpg, is being used in part of an article about notable events that took place in the 2020s having to do with "deepfakes" when the image isn't a deepfake to begin with and the 2020s weren't notable for Trump getting dog-piled by police officers.
The other uses aren't any different either. File:Trump’s arrest (6).jpg, which is an image of Trump running away from the cops, is being used in an article about the criminal investigation of The Trump Organization with zero context and the description "AI-produced depiction of Trump running from police." That's literally it. It's just an AI-generated image of Trump running from the cops that someone added to an article because they thought it was funny or something. It clearly wasn't added to the article in good faith, and there's nothing educational about an AI-generated image of Donald Trump that was added to an article for no reason outside of trolling. Or should we just keep every random AI-generated fantasy image of a celebrity that someone puts in an article even when it's clearly being done as a joke or a troll? --Adamant1 (talk) 18:32, 21 January 2024 (UTC)[reply]
  1. File:Trump’s arrest (6).jpg is being used in the Reactions section with the caption "AI-produced depiction of Trump running from police", just before the text "Deepfake and other imagery created using artificial intelligence (AI) depicting Trump being arrested and/or perp walked circulated on social media with faux headlines, proving controversial and popular on both sides of the political spectrum, though for opposite reasons. Bellingcat founder Eliot Higgins was banned from AI program Midjourney after he used it to generate depictions of Trump being incarcerated and his creations went viral. The Associated Press noted that the filming of a protest scene for Joker: Folie à Deux in New York City coincided with Trump's unrequited calls for protest." PLEASE SHOW me where the BAD FAITH is here.
  2. File:Midjourney Photo of Trump's Arrest.jpg is used here with the caption "Advancements in AI have been rapid and fast-paced in the early 2020s. Generative AI has become mainstream during the decade, with synthetic media in the form of Text-to-image models, ChatGPT, and Audio deepfakes. AI techniques have now been used in music, including the Beatles' last song "Now and Then" (2023)." Once again, please show me where the bad faith is.
If all these images disturb you, consider discussing them on the Hebrew, Spanish, Greek, Esperanto, Portuguese, English, and Simple English Wikipedias. We are at Commons. RodRabelo7 (talk) 18:44, 21 January 2024 (UTC)[reply]
"Deepfake and other imagery created using artificial intelligence (AI) depicting Trump being arrested and/or perp walked circulated on social media with faux headlines" Nowhere does the article that line cites say anything about "faux headlines" (on social media or otherwise) saying Trump ran from the police. It also wrongly calls the images "deepfakes" when that's not what they are. Further, if you scroll through the news article, both it and the social media accounts it cites are clear that the images are fake. So it's a totally made-up story that purely exists as clickbait, especially on the end of whoever added that part to the Wikipedia article, since there were no "faux headlines" saying the images were real or that Trump otherwise ran from the police. That's also clearly a bad-faith usage, as it's being used in a way that isn't educational and totally misrepresents the facts. Just like you're doing by acting like I'm disturbed by the images or whatever BTW. It would be cool if you skipped the verbal abuse and stuck to the subject. Otherwise I'm just not going to respond to you. I'm not here to be insulted. Thanks.
"Generative AI has become mainstream during the decade, with synthetic media in the form of Text-to-image models, ChatGPT, and Audio deepfakes." Again, for the fifth time, the images aren't "deepfakes" and no one thought they were genuine. Using an image as an example of a "deepfake", or of something that people thought was real when it isn't one and no one thought it was, clearly isn't a good-faith or educational usage. Just like it wouldn't be if I uploaded an image of a stick, added it to an article about cats, and claimed it was a "deepfake" of a cat that everyone was fooled into thinking was real. --Adamant1 (talk) 19:03, 21 January 2024 (UTC)[reply]
Ok, let’s see: first you nominate them for deletion because you thought they were out of scope. Now that it has been categorically proved they are in scope, you say they are used for trolling Wikipedia readers, even though anyone reading the articles you have brought here will see they are not used for that purpose. What’s next? If you think Wikipedia uses “deepfake” incorrectly, that’s their problem, not ours—why don’t you fix it? (Protip: Wikipedia is a collaborative project just as Commons is!) Your arguments are starting to become pitiful. RodRabelo7 (talk) 19:40, 21 January 2024 (UTC)[reply]
@RodRabelo7: "it has been categorically proved they are on scope" I don't think that's been proven. We'll have to disagree though. But about Wikipedia's usage of deepfake, per Deepfake: "Deepfakes (portmanteau of "deep learning" and "fake"[1]) are synthetic media[2] that have been digitally manipulated to replace one person's likeness convincingly with that of another." The last time I checked, these aren't images "that have been digitally manipulated to replace one person's likeness convincingly with that of another." So there's nothing to correct on Wikipedia's end. Since again, the images aren't deepfakes even by their own definition! Like, we can't delete an image on our end if it's being misused in a Wikipedia article by their own standards of what a deepfake is! Come on. --Adamant1 (talk) 19:51, 21 January 2024 (UTC)[reply]
"Deepfake and other imagery" Have a nice day. Cheers, RodRabelo7 (talk) 20:04, 21 January 2024 (UTC)[reply]
And you were arguing they are "deepfakes", not "other imagery." You're just being obtuse instead of admitting you were wrong about them being deepfakes. --Adamant1 (talk) 20:36, 21 January 2024 (UTC)[reply]
@Adamant1 Exactly, well sourced notability. And obvious educational value, as notable examples of a concept. And COM:INUSE. This nomination was quite absurd. Darwin Ahoy! 20:07, 21 January 2024 (UTC)[reply]
@DarwIn: They aren't notable examples of "deepfakes" though, since that's not what they are, which is what they are supposedly educating people about. You can claim the nomination is absurd, but it's more absurd to act like they serve any educational value as "notable deepfakes" when that's not even what they are. You could maybe argue they are educational in regards to AI-generated memes of celebrities or something, but that's not the current usage, nor what they are supposedly "notable" for. Although I still disagree that the images are notable to begin with, or that it would even matter if they were. --Adamant1 (talk) 20:36, 21 January 2024 (UTC)[reply]
@Adamant1, explain where they are not used as examples of AI-generated media. RodRabelo7 (talk) 21:16, 21 January 2024 (UTC)[reply]
They're used as examples of "AI-generated deepfakes", which technically count as "AI-generated media", but that's not the point or argument I'm making and you know it isn't. You're just moving the goalposts since your whole thing about how we should be following Wikipedia's definition of a "deepfake" turned out to be wrong. They were "deepfakes" and should be kept as such only up until the point where your definition of the term was shown to be wrong. Now you're making it about "AI-generated media" when that's not what the discussion is or was about. --Adamant1 (talk) 21:39, 21 January 2024 (UTC)[reply]
A1Cafel is more involved in deletion requests than I am, and that's saying a lot, since I'm pretty active in the area. Whereas, looking over your last 500 edits, you apparently have none. So who's really the one who's not adequately prepared for this? It seems you aren't. So maybe take your own advice next time and don't engage in deletion requests. The vitriol certainly isn't helpful. And it doesn't make a difference that you reverted the edit accusing A1Cafel of trolling. It shouldn't have been said in the first place. Period. --Adamant1 (talk) 04:29, 22 January 2024 (UTC)[reply]
Is that going to be the game now? Super petty way to deal with a couple of your uploads being nominated for deletion. I've said it already, but pinging people who you think will side with you in deletion requests is an extremely bad-faith and inappropriate thing to do. --Adamant1 (talk) 04:44, 22 January 2024 (UTC)[reply]

Warning sign Warning Adamant1 and RodRabelo7, you've made your points, please disengage. This is rapidly becoming a conduct issue. RodRabelo7, pinging people uninvolved in the discussion to try to sway the DR is frowned upon. All it leads to is tit-for-tat pinging and it's entirely unnecessary; Category:AI-generation related deletion requests/pending exists for a reason. The Squirrel Conspiracy (talk) 09:59, 22 January 2024 (UTC)[reply]

 Delete
  1. Eliot Higgins is British.
  2. He lives in the suburbs of Leicester, UK. https://www.prospectmagazine.co.uk/world/64130/eliot-higgins-the-man-who-verifies
  3. Commons:AI-generated_media#United_Kingdom.
RZuo (talk) 10:17, 22 January 2024 (UTC)[reply]
An email to Bellingcat has been sent. RodRabelo7 (talk) 19:07, 22 January 2024 (UTC)[reply]
You do realize that I have voted to delete AI images on multiple occasions? Trade (talk) 11:52, 22 January 2024 (UTC)[reply]
  •  Keep all, per RodRabelo7's rationale, with which I 100% agree even though we usually disagree.
Also per Darwin's explanation. I would have found this DR without getting pinged, and in regards to that, note that this DR is categorized into a category that people browse regularly. The images are clearly notable and useful, where specific illustration use-cases have already been explained or implemented; they are so notable that they have even been reported on by media outlets. Nothing more needs to be said, so this should be a speedy keep, and I always oppose the number of votes overriding policy and/or solid rationales. However, I'll add that they're clearly labelled as AI-made. If that isn't clear enough, things other than deleting them can be done about it.
Prototyperspective (talk) 12:18, 22 January 2024 (UTC)[reply]

Kept: no valid reason for deletion. As mentioned by several people above, these images are notable in themselves, so that is a sufficient reason to keep them. --Yann (talk) 10:43, 14 February 2024 (UTC)[reply]