November 19, 2025

The Logical Conclusion of AI Art

As much as I like to defend abstract art, I can also see why certain famous paintings tend to inspire little more than disdain in the general public. Take Aleksandr Rodchenko's series Pure Red Color (Chistyi krasnyi tsvet), Pure Yellow Color (Chistyi zheltyi tsvet), and Pure Blue Color (Chistyi sinii tsvet)—the paintings are exactly as described: one that's just red, one that's just yellow, and one that's just blue.

View the MoMA's image of the series ›

Even in my best, most art-focused mode, I have had trouble with works that are so plain. They seem to taunt me to think the unforgivable thought, "Psh, I could do that. What's the point of that?!" Even while I tend to preach that art is not a puzzle that you have to figure out and that you can just let it speak to you or not, I have still wrestled with these particular Rodchenko paintings for years. Something about this series won't leave me alone. It has only just started to sprout meaning for me and I have AI to thank for it.

Reduction

Rodchenko is an artist and graphic designer whom I have long admired. His work is a tribute to skill and technique: he produced beautiful imagery and posters with tools that would make the modern designer cringe at how manual and difficult to use they were compared with any digital design software.

Why, then, would a skillful artist paint an entire canvas red and call it art?

I reduced painting to its logical conclusion and exhibited three canvases: red, blue, yellow. I affirmed: it’s all over. Basic colors. Every plane is a plane and there is to be no more representation.

Aleksandr Rodchenko, MoMA Website, accessed 11-17-2025

It was "the death of painting." Rodchenko had pushed art to its end. Where we may have started with basic colors and shapes as a species, our drawings and art grew more representational and more lifelike over time. Now we have returned to the simplest forms, having explored all there is to explore. That's a little bit pessimistic, and yet, it feels kind of correct to me now.

AI has made me think about this "logical conclusion" like never before. A few weeks ago, I heard a website director say with some confusion: We have AI-generated content on [their site], so then we're going to add an AI chatbot assistant to [their site] that is trained on that AI-generated content? At some point, it seems abstracted too far.

Is this where our current systems, goals, and incentives are leading? AI-generated videos and "art" that we can't stop looking at because we've been trained to keep looking into the void? As the models keep getting better, so does their grasp on our eyes and our minds and our attention, until nothing substantial is left to hold onto. We just fall endlessly into the black hole of feeds where nothing means anything and anything means nothing.

Thus, I present the logical conclusion of generative AI, the death of humanity:

A black canvas on a black background.

Cyborg

This may seem bleak, but I will offer some potential ways that we can take action. Lately a lot of tech companies have started updating their privacy policies—that thing in legalese that you know you should read but don't want to and end up clicking "Accept" just to escape the horrors of reading any further. Then the companies turn on new settings on your behalf to help train their AI models.

I'm not a lawyer, so this is not legal advice, nor is it a guarantee of protection, but might I recommend setting aside an hour on a quiet day to review the settings you may want to turn off? Feel free to ignore this, but it's a good time to at least go through the settings on your most-used apps, just to be aware of what companies are doing with your data (or at least what they're upfront about doing with it, anyway). I'm not claiming these companies are nefarious or deceptive; I'm just erring on the side of caution.

Here are a few settings links that should take you to (or near) the settings that you might want to review:

LinkedIn Data Privacy | Consider turning off "Data for Generative AI Improvement." You may also want to turn off the advertising data—I was surprised how much they're tracking, and I don't love the promise of "better ads" just so that they can share "data" with third parties.

Facebook/Meta Accounts Center | You can review your ad preferences here.

Facebook/Meta Generative AI Information | This is where you can review how they approach their training. I haven't found any good settings to turn off yet—if you find something, hit reply and send me the settings URL and I'll pass it along next time.

There are also two projects that the University of Chicago has been working on to protect artists who post their content online: Glaze and Nightshade. These are tools that add an invisible layer to artwork prior to posting, which prevents AI models from learning the right things about the image. Glaze is a "defensive" tool that protects your pieces, while Nightshade is an "offensive" tool that disrupts models that are scraping without consent. If you're an artist, this is definitely something to look into to protect your work—I may write about this specifically in the future when I have a chance to research it further!