
I’m completely speechless. This looks so terrible I thought it was a joke, but apparently Nvidia released these demos to impress people. DLSS 5 runs the entire game through an AI filter, making every character look like they’re being run through an ultra-realistic beauty filter.
The photo above is used as the promo image for the official blog post by the way. It completely ignores artistic intent and makes Grace’s face look “sexier” because apparently that’s what realism looks like now.
I wouldn’t be so baffled if this were some experimental setting they were testing, but they’re advertising it as the next-gen DLSS. As in, this is their image of what the future of gaming should be. A massive F U to every artist in the industry. Well done, Nvidia.
The only good application of DLSS 5 I can think of is Euro Truck Simulator 2; hyperrealism would pair well with that game.
Check out this fun little nugget from further down in the article:
Nvidia actually used two RTX 5090s for its demos: one plays the game, the other exclusively runs the DLSS 5 technology.
An entire second GPU just to run it.
I think you have a typo there: An entire second GPU just to ruin it.
We can hardly afford or find one, and they expect us to buy two? Fuck them.
They have no intention of selling them to us. They’ll maybe let us play it like this through GeForce Now, or some other streaming service.
They don’t want regular folk to buy PCs anymore.
So, yes. Fuck them indeed.
No. They want regular folk to buy PCs. They just have no idea what regular folk can afford. How much could a banana cost, 10 dollars?
Yeah, their sales team is incredibly warped by their only significant revenue coming from various AI farms.
Is there actually scarcity somewhere? Here they’re priced completely out of reach, but they’re available.
That’s actually good news in my eyes, we definitely won’t see this hit the consumer market for years.
Context is important. The following sentence is:
The use of two GPUs is required right now as DLSS 5 still has a long way to go in terms of optimisation - both in terms of performance and its VRAM footprint. However, DLSS 5 is designed for use on a single GPU and that’s how it will ship later this year. Quite how scalable it is also remains to be seen, but in common with other DLSS technologies, Nvidia tells us that the computational cost scales with resolution.
Sure, just double the VRAM and your AI can run. Unfortunately you can’t afford that VRAM, because billionaires are running AI.
It’s always funny to me when Nvidia releases new 8GB cards and new VRAM-heavy features as a reason to buy them.
8 GB isn’t even enough for 4K right now. There’s a guy who modded his 3070 up to 16 GB, and it really shows these cards are VRAM-limited. It’s also a serious PITA to swap VRAM without melting the card.
But still, the idea of DLSS was to claw back the performance lost to ray tracing, right? This is the exact opposite of that: it costs performance to sloppify the game. I just pray it’s going to be an optional feature in games and I can still use DLSS 4 instead.
2017: Buy 2x 1080 Ti for 1500 bucks, your build is GOATed, have fun spending half your time tinkering with your overclocks and fishing for the perfect SLI compatibility bits in Inspector
2026: This. It’s a shame
SLI is back!
That was my first thought too lol
Ha ha, that’s great. Reminds me of PhysX back in the day, when it initially needed a dedicated card.
Some of us are old enough to remember when your games would sound different depending on which sound card you had installed.
Ahh, I remember the first time I heard the intro music to Star Control through my Sound Blaster instead of through my motherboard’s piezo speaker. Like the audio version of The Wizard Of Oz switching to colour.
Sound Blaaster!!
IRQ 3 DMA 8 bit (or what was that now again)
Ima bus out ma voodoo2 soon
WTF, I thought the point of DLSS was to gain performance, not lose performance…
This is DLSS shifting away from performance.
Features that need 2x the GPUs make the stock go up 2x.
Back to the ol’ PhysX GPU days. Except PhysX made the game cooler.
Great, the new SLAI.
Good, so my hardware won’t support it. XD
As much as you and I both hate it, and as shitty as it looks right now, I imagine that some sort of cloud-hosted AI technology is the future of gaming.
It sounds like indie games that will run on a potato and not require internet access will be the future of gaming. I’m completely done with AAA corposlop.
sounds like a good idea. it sounds like that’s what I like doing and that’s what I like playing and it sounds like it’s good but I don’t know man. you tell me that sounds good. I’m glad you predict the future
I hope the universe will play out exactly as this website says, in accordance with our upvotes.
They tried this shit with Stadia. What would be the difference now that we also have AI? There is no reason to stream a shitty AAA game from a shitty company when you can buy and play indie hits on your own hardware.
I know the capitalists really want game streaming to be the future but every gamer with some common sense will reject that idea.

Makes perfect sense in context.
Nvidia is no longer a gaming company. They do not care about gamers. They are actively DRIVING AWAY gamers as fast as they can, because AI chips are more lucrative than dirty smelly tiny consumer products.
This is meant to show off real-time AI generation performance, which is massively marketable to AI companies. Not consumers. Nothing is about consumers.
oh i’m sure the “hire fans lol” gamer chuds will be over the moon about this

This looks so bad, like uncanny 3d porn bad.
Yup. You will soon pay a premium for *checks notes* actual footage of people enjoying fucking?!
New Worms game looking hype though

Can’t believe DF is so positive about this; it just looks horrible. I’ve actually been quite positive about technologies like DLSS and FSR, but this… no thanks.
Crazy to me to see people just now waking up to how underinformed Digital Foundry really is on technical details, and how much they’ve sold out.
Well, it helps that you don’t need to know how to turn a computer on to see that they’re glazing complete bullshit. Just eyes.
They absolutely got a huge nvidia check for this one
The amount of glazing they do, holy fuck. They had me too for a minute, before I started wondering if this is just an April Fools joke.
First time you’re witnessing nvidia coverage?
DF always glazes the industry. UE5, DLSS…
Eh, DLSS maybe, but they’ve been quite critical of UE5, especially its stutter issues. On the contrary, they’re always happy when something isn’t UE5.
My issue with UE5 is that it runs like shit, which DF tends to agree with, but it also looks like trash. Temporal effects are all over the place and make the image look like a muddy, ghosting mess. It only ever looks good in static screenshots.
Idk why, but this reminded me of the South Park episode where everyone is using photoshop to show off their girlfriends.
Had to make the meme


The demos run on the back of two 5090 GPUs… At what point do we acknowledge that I shouldn’t need nearly 2 kW of power, on a dedicated circuit in my home, for a gaming PC? It’s simply diminishing returns at a certain point, and Nvidia is taking us way past that point into very unreasonable territory.
Is it cool they could do it at all?.. I GUESS?! But they’ve definitely lost the plot here.
I just watched the Digital Foundry video on this and the way they fuckin glaze over this tech just disgusted me enough to unsubscribe.
Can’t say I ever really trusted any Digital Foundry takes. They all seem to navel-gaze over shiny things.
Digital Foundry has been sold out for years now. They glaze over everything NV regardless of the quality. What killed it for me was a now-restored (previously edited) video reviewing Dying Light 2: in a mausoleum, Alex Battaglia was glazing over an RTX image that was obviously black-crushed, saying RTX was adding immense detail when the exact opposite was on display. That’s when I realized they were surely being paid to portray that version of events. Evidence of said bugs on release.
I love it: they restored the original video from 3-2-2022, likely because they think we’d forget! That’s an absolute smoking gun of reading the marketing materials vs. what’s actually in front of you… Check 5:20, hahaha, this is amazing. If you go to the same location in present-day DL2, you can clearly see the scene Alex presented was bugged to the wazoo!
Invidious link because fuck google!
DLSS 6: You don’t even need to design your game. AI will generate it in real time.🤡
Was able to brace myself enough to skim through the video. Anyone that watches those clips and thinks that looks better has no place working on anything related to visual tech. Not only is the interpolation obvious and distracting, every scene, no matter the time of day, location, or anything else, has the exact same shitty lighting that washes out every shadow and color painstakingly added to the scene by actual professionals. Pro-tip: A white-balanced image is the starting point, not the end-goal. Making every game look exactly the same is fucking terrible, you hacks.
I can only hope this shit only finds its way into the AAAA dross that’s not worth playing already.
Isn’t DLSS by definition always an AI slop filter?
Not really. DLSS mostly just has the game render at a lower resolution and then upscales it back up. It does a pretty good job of making the game still look (almost) exactly the same. This, however, completely changes what you’re looking at.
DLSS is short for Deep Learning Super Sampling: it does the upscaling using deep learning, which is what people also call AI. The upscaler has to be trained on images. Depending on how you train it, you either get something that looks almost exactly like the game at a higher resolution, or you get AI slop.
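To make the “trained on images” point concrete, here’s a toy numpy sketch of a learned upscaler. This is nothing like Nvidia’s actual pipeline (real DLSS is a deep network that also consumes motion vectors and temporal history); everything here is made up for illustration. A linear map is fit by least squares to turn each 3x3 low-res neighbourhood into the 2x2 high-res patch it covers, and on smooth test images it beats naive nearest-neighbour upscaling:

```python
import numpy as np

rng = np.random.default_rng(0)

def downsample(img):
    """2x box downsample: each low-res pixel is the mean of a 2x2 block."""
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def smooth_image(size=16):
    """Random image put through a few box blurs, so it has learnable structure."""
    out = rng.random((size, size))
    for _ in range(4):
        padded = np.pad(out, 1, mode="edge")
        out = sum(padded[i:i + size, j:j + size]
                  for i in range(3) for j in range(3)) / 9.0
    return out

# Training pairs: 3x3 low-res neighbourhood -> the 2x2 high-res patch it covers.
X, Y = [], []
for _ in range(200):
    hi = smooth_image()
    lo = downsample(hi)
    for r in range(1, lo.shape[0] - 1):
        for c in range(1, lo.shape[1] - 1):
            X.append(lo[r - 1:r + 2, c - 1:c + 2].ravel())
            Y.append(hi[2 * r:2 * r + 2, 2 * c:2 * c + 2].ravel())
X, Y = np.array(X), np.array(Y)

# "Training" is just least squares here; DLSS uses a deep net, same principle:
# fit a function from low-res input to high-res output on example images.
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

# Upscale a fresh image with the learned map vs. naive nearest-neighbour.
hi = smooth_image()
lo = downsample(hi)
up_nearest = np.kron(lo, np.ones((2, 2)))
up_learned = up_nearest.copy()
for r in range(1, lo.shape[0] - 1):
    for c in range(1, lo.shape[1] - 1):
        up_learned[2 * r:2 * r + 2, 2 * c:2 * c + 2] = (
            lo[r - 1:r + 2, c - 1:c + 2].ravel() @ W).reshape(2, 2)

inner = (slice(2, -2), slice(2, -2))  # compare away from the untouched border
err_learned = np.abs(up_learned[inner] - hi[inner]).mean()
err_nearest = np.abs(up_nearest[inner] - hi[inner]).mean()
print(f"learned MAE {err_learned:.4f} vs nearest-neighbour MAE {err_nearest:.4f}")
```

Train the same kind of model against different targets (say, “re-lit” frames instead of the original high-res frames) and you get exactly the slop case being argued about: the architecture is the same, the training objective is what changes.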
I’m aware of how it works, but the results aren’t bad. Worst case scenario is you get some ghosting with DLSS, but it’s far from what I’d call AI slop.
But it literally follows the same process. Why is one slop but not the other? You’re being hypocritical.
One is upscaling the image while preserving it as much as possible; the other is applying a filter that tries to “enhance” it by drastically changing the image and ignoring the artist’s intent. What’s hard to get?
This isn’t applying a filter, it’s running the image through a transformer network trained on advanced lighting methods like subsurface scattering to make materials more lifelike. It seems to change artistic intent quite a lot on these existing games, but frankly I’m excited to see what creators do with a game designed from the ground up to utilize AI-enhanced lighting. The DF video also states that this is an early preview (hence the dual 5090s) that is expected to change over time.
It is not. It is approximating the results of training data consisting of output images that have been rendered with subsurface scattering. It isn’t actually running the subsurface scattering algorithm.
If it were made for that, the slopifier would be able to identify the light sources. Until then it’s art- and environment-destroying, irrelevant bullshit. The slop examples, the best Nvidia can deliver, show that it ignores the lighting of the scene.
How is “upscaling while preserving it” not the exact same philosophy as “enhance by applying a filter?”
You just don’t like the specific filter, it’s very literally the same process.
Because a pixelated circle being upscaled is a circle, but a pixelated circle being turned into a high-definition pie is no longer a circle. That’s especially problematic if the circle was just a crosshair or some other random circle-like thing the AI thought was meant to be a pie.
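The circle argument fits in a few lines of numpy (a toy illustration, not any real upscaler): a reconstruction-style upscale only repeats information already in the image, so the pixelated circle stays a circle, while a generative filter carries no such guarantee.

```python
import numpy as np

# An 8x8 "pixelated circle" mask.
y, x = np.mgrid[0:8, 0:8]
circle = ((x - 3.5) ** 2 + (y - 3.5) ** 2 <= 9).astype(float)

# Nearest-neighbour 4x upscale: every output pixel is a copy of an input pixel.
up = np.kron(circle, np.ones((4, 4)))

# Reconstruction never invents values that weren't in the source...
assert set(np.unique(up)) <= set(np.unique(circle))
# ...and box-downsampling the result recovers the original exactly.
assert np.array_equal(up.reshape(8, 4, 8, 4).mean(axis=(1, 3)), circle)
```

A generative “enhancer” satisfies neither property: nothing constrains its output to agree with the input once it decides your crosshair is actually a pie.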
Yes, both things are the same, but that’s like saying that because you were okay with a tiny spider in your house killing mosquitoes, you should be okay with a colony of bats, since they’re also animals that eat mosquitoes. Yes, both are the same, but the scale and the amount of intrusion are completely different.
Current DLSS intent: We can only render this at like 720p with enough frames, so let’s do that and use AI anti-aliasing tricks so that when we present it at 4k, none of the jaggies are visible on-screen like they would be with raw 720p upscaling.
DLSS 5 intent: Using our neural net (read: pile of stolen artwork) that we can now run at 60fps+, let’s “reimagine” the entire look of the game as we present it on-screen, even if it was already running at 4k just fine.
TL;DR: How big the neural net is and what you train it for matters.
… How is flying a spaceship different from driving a car? They’re both controlled applications of kinetic energy to move people or objects.
At the end of the day, it’s all a pile of transistors and the only thing that is of import is the intent behind usage.
In one case it’s saying you can use a neural net to take something rendered at resolution A/4 and make it visually indistinguishable from the same render at resolution A. The other is rendering something and radically changing the artistic or visual style.
Upsampling can be replicated within some margin by lowering the frame rate and letting the GPU work longer on each frame; it strives to restore detail left out from working quicker by guessing. You cannot turn this feature off and get similar results by lowering the frame rate; it aims to add detail that was never present, by guessing.
Upsampling methods have been produced that don’t use neural networks. The differences in behavior are in the realm of efficiency, and in many cases you would be hard pressed to tell which is which. The neural network is an implementation detail. In the other case, the changes are broader than non-AI techniques can easily capture; the generative capabilities are central to the feature.
Process matters, but zooming out too far makes everything identical, and the intent matters too: “I want to see your art better” as opposed to “I want to make your art better”.
Not all answers are easy. This new DLSS looks like it was trained on stolen work. Old DLSS had a neural network that was tuned before the plagiarism machine became popular.
Piracy is not stealing.
It is when it’s used by corporations for profit, IMO. Not for individual private enjoyment.
Are you really asking why compressing and uncompressing art made by a human being is different from slop produced by the slop machine?
One exists to reconstruct an image as closely to the original as possible while saving space, the other is meant to insert arbitrary changes to the initial image and produce something else.
Oh yeah? Well, vegetables are both in pig troughs and on dinner plates. Why’s one slop and not the other? They were grown with the same process!
Because one is shitty and the other isn’t.
If the vegetables are the same, they aren’t slop. Pigs aren’t fed fresh vegetables; they get the rotten ones. Your analogy doesn’t work if you actually comprehend the basics of it…
If the vegetables weren’t rotten, then yeah, most people would eat the “slop”, since it’s just vegetables. Would you let good food go to waste just because of the name you’re arbitrarily and incorrectly using for all pig feed?
I don’t like AI, but Christ, Lemmy is getting annoying lately with knee-jerk “slop” claims for anything with the letters AI in it. A lot of this stuff has been used for ages, and yeah, they’re leaning into the current hype, but the overreaction is just ridiculous (see: the “open slop” list of open source projects, which includes those that have the audacity to allow developers the ability to use AI line completion).
It genuinely diminishes actual concerns with AI tech when people are losing it over things that have existed long before the current bubble but just have AI™️ on the package now.
Deep learning isn’t really the same thing as a large language model. People call LLMs AI.
LLMs aren’t the only type of AI…
AI isn’t real. I’m just saying that what people call AI is pretty much LLMs, or anything that does NLP. No one looks at DLSS and says “that’s AI”.
“Science fiction AI” isn’t real. AI is most definitely a thing. From the Oxford dictionary:
artificial intelligence = the study and development of computer systems that can copy intelligent human behaviour
By definition, a chess program is AI.
pretty irrelevant to my point, honestly
LLMs were (and are) marketed as “AI”.
I agree. DLSS isn’t, though. It’s not AI; deep learning is more like a close cousin.
DLSS actually uses machine learning models to do the upscaling, so in fact there is no AI slop here.
Not really,
Nvidia just calls everything DLSS…
Like, it’s basically an anthology label at this point. If they think something’s a good idea, they call it DLSS #.
For example, DLSS 4 was frame generation, which has nothing to do with super sampling.
You could call it temporal super sampling.
It does a pretty good job of making the game still look (almost) exactly the same
Isn’t that just displaying the image with extra steps? Why is my PC using all this extra processing power in order to make it look (almost) exactly the same?
I think that’s accurate. It’s making something out of nothing, which will certainly be graphics but not necessarily exactly what the game is supposed to look like.
No, and if that’s your opinion you don’t know what DLSS is
While it may have used machine learning, it was definitely not in the ‘slop’ category. I generally think of slop as things which try to imitate some kind of creative or human element (like the enhancements from DLSS 5), but FSR and earlier DLSS used machine learning to replace anti-aliasing like MSAA, etc., through super-sampling and temporal technologies (frame gen kinda sucked though). So, to answer your hopefully literal question: DLSS has, in the past, not been an AI slop filter.
Yes, Jensen Huang recently tried to defend it.