Likely. You can go into Nano Banana or ChatGPT right now, upload a pretty architectural rendering, tell it to make it look old, weathered, wintry, etc., and it will come out looking very similar. Give it an example image to really dial it in.
It's GenAI. It does something that's kind of like what you asked it to do, but it will skip some details or add other ones or whatever.
Dreary architectural photos are more likely to contain electrical boxes, cheap materials, etc., so when the model moves a building from the region of latent space for cheery, bright architectural renderings to the region for dreary, wet-November ones, it's more likely to add some of those details, because that's what lives in that part of its latent space.
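The latent-space argument above can be sketched with a toy example. Everything here is made up for illustration (real diffusion models operate on much higher-dimensional latents, and the named axes are a fiction), but it shows the mechanism: moving toward the "dreary" region drags along attributes correlated with that region, even ones the prompt never asked for.

```python
import numpy as np

# Toy 4-dim "latent" with invented, labeled axes for illustration:
# [sky_brightness, material_quality, landscaping, clutter]
cheery = np.array([0.9, 0.8, 0.9, 0.1])  # bright render: clean, no clutter
dreary = np.array([0.2, 0.3, 0.2, 0.8])  # wet-November render: clutter common

# Shift the cheery image partway toward the dreary region,
# roughly what an img2img-style edit does in latent space.
direction = dreary - cheery
edited = cheery + 0.8 * direction

# The "clutter" coordinate (electrical boxes, weeds) rises even though
# no one asked for clutter -- it comes along with the region itself.
print(edited.round(2))
```

The point is just that "dinginess" isn't one isolated knob: it's a bundle of correlated features, and sliding toward it pulls the whole bundle.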
Yeah - same thing I noticed with people enthusiastically using GenAI for old-photo colorization. Initially it looks awesome, until you realize it can alter a human face enough that it no longer looks like that person.
My father was really happy with some old photos he had colorized, until I pointed out that the man in them no longer looked like him. Strangely enough, he wasn't bothered...
"Filter" is a Tik-tok / snapchat / instagram parlance for any kind of overlay / transformation. It's grown larger than just sepia filters and similar. All the ones that do facial tracking and overlay a mustache or w/e is funny in the moment are also referred to as filters.
There's a pretty clear expected transformation here though? It takes an image and reduces its "shininess" by applying a consistent set of changes: turn the sky overcast, add material degradation like rust, degrade the landscaping with weeds and puddles, and remove the happy-looking people.
How is it not just a Midjourney prompt? The liberties it takes seem better described as "upload a picture, and the AI will be told to make it dingier." Can't people already do that ad nauseam?