In the past, I have always looked at demos from games and thought: this is very good, getting close to movie quality. But that "close to" remained "close to" for quite some time. Even though it improves every year, you can still tell it's gaming graphics. Even when some of the shots are pre-rendered rather than real time, they still look game-like.
That Unreal 5 demo was the first time ever I thought: this is Hollywood movie quality GFX..... (apart from the character). IT IS STUNNING! And this is done real time on PS5!
Edit: I am sorry for the tone and block capitals.... I am seriously geeking out.
Unreal is now starting to be used in place of green screens in some movies. (There's a lot more that goes into it, but essentially Unreal seems to be one of the core software pieces.)
The Mandalorian filmed a lot of scenes with UE4 doing real-time rendering onto large LED screens like in that demo. Looked pretty good! Probably more fun and easier for the actors too.
If you're interested, this was a good article going into the details of what's happening behind the scenes for the real-time rendering: https://ascmag.com/articles/the-mandalorian
(paraphrased) "it was designed to both light the actors and be a background that we can photograph, so that we end up with real-time live pixels in-camera"
Also from the article you linked:
> If the content was created in advance of the shoot, then photographing actors, props and set pieces in front of this wall could create final in-camera visual effects — or “near” finals, with only technical fixes required, and with complete creative confidence in the composition and look of the shots.
I'm not an expert in film terminology, but it doesn't sound, by any stretch, like this was primarily about lighting.
I hadn't seen that video. I wonder how much was just left as the background screens (out of focus backgrounds in closeup shots seem likely). It does make it clear that the ceiling screens were mostly just for lighting, as they don't photograph well from the low angles.
It was for shot composition as well; the cinematographer could see how it would end up while shooting. The screens updated to match the camera position in real time (e.g. to show the rest of a spaceship whose door was a physical prop).
As mentioned a few times in this comment section, this article covers the subject pretty well: https://ascmag.com/articles/the-mandalorian. All in all, it looks like many shots were truly captured at or near final, without significant post-production modifications.
Yes, I posted that myself :-) It was 2-3 months ago that I read it fully though. Favreau says "A majority of the shots were done completely in camera". However, any shot that included the ceiling screens needed VFX work to cover them, and that alone must have been a lot of shots.
If you can afford to put one or more GPUs per LED screen, there is little limit to what you can render in real time, especially when the camera is so far away.
It used four synchronized PCs with a total cost of ~$20,000 to render three 4K panels, which is pocket change for a large production. The LED walls alone, or even a single lens, cost more than that!
This goes into way more technical detail: https://ascmag.com/articles/the-Mandalorian. Particularly interesting is how the system had ~10 frames of latency, so excessively fast camera turns would show lower-quality renders.
> If you can afford to put one or more GPUs per LED screen, there is little limit to what you can render in real time
Umm, ray tracing would love to have a word :P
In all seriousness, typical animated frames for big-budget films easily take hours or longer _per frame_. It really depends on what look you're trying to achieve. Game engines have come a long way in terms of realistic real-time rendering, but it's worth noting that it's still not the same quality as a fully ray-traced scene (whether that matters depends on the content, I guess).
A lot of the reason they're doing it this way is realistic lighting and reflections. I'm not sure the difference between a real-time game engine and ray tracing matters that much when you're using it as faux ambient light.
After it's filmed, they can still go back and touch up the backgrounds. Someday, with ray tracing, they'll be able to do real-time finished products, but for now the tech works great at what it's intended to do.
If a scene has been filmed in this setup, how easy is it to separate the physical foreground from the background screen if they want to re-composite the foreground with a more detailed ray-traced background?
The system allows you to insert a dynamic greenscreen around a foreground element (while also retaining the option to preview how things will look after everything). So you can retain most of the virtual set for reflections and lighting, while still having a greenscreen.
I imagine with the advancements in consumer level tech with portrait mode in cameras and zoom backgrounds that the tech ILM has could make easy work of this.
Yes, of course. But fully animated CGI renders everything for a single camera and a single screen at movie quality. This setup has 1,326 individual screens (123,904 px/m²) filmed from several meters away for a 180-degree view. None of those screens were rendered even close to movie quality.
BTW, only about 50% of the scenes were made with this setup; the rest was traditional ray tracing.
I wonder if this has anything to do with my wife's complaint (and I agree with her on this) that modern productions are beginning to look more and more like computer games.
She's been watching the third season of westworld, and it's kinda scary to me how almost every episode feels more like a game trailer than an actual story...
We actually binge watched game of thrones for the first time while we were both on mat leave. Hadn't seen a single episode before then.
The difficulty there is that I think everyone acknowledges (?) GOT just fell apart like a car crash in slow motion in the later seasons. It's hard to separate that from the knowledge that they started to depart from the books and were clearly undergoing irreparable problems/pressures by the end of season 5. I don't think we can blame that solely on tech... and it's hard to think about the CGI specifically when the rest is falling apart...
This is really cool, looks like HDR environments in real life. Makes me wonder if this could be used in conjunction with cameras and some clever image processing algorithms as a camouflage method.
For the movie industry, a lot of the cost is actors and time. Having everything filmed against pre-made scenes saves lots of time compared to going back and forth between green screen and VFX. Real-world locations also mean lots of traveling.
Traveling with lots of heavy, expensive equipment, setting up, tearing down, then all the unpredictability of outdoor anything (weather, etc).
Also one amazing thing was the guy talking about "shooting a 10-hour dawn" – I can imagine a lot of time is wasted trying to recreate a certain time of day. With this, you don't waste that time. You can do as many takes as you need. The sun stays right where it is.
> Also one amazing thing was the guy talking about "shooting a 10-hour dawn"
I watched the whole video on the Mandalorian example, and it obviously sold it for me, but initially I was only thinking of the first demo.
Clearly it's hugely beneficial.
What I find interesting is that Naughty Dog had a bit of staff turnover among designers and hired film animators, CGI people, etc. (in this case it was a crunch). Funny how the skill sets are almost interchangeable now. Have to admit, it makes me feel older every day.
At the risk of sounding a bit negative I personally find that graphics have plateaued since about the PS3. Sure, there are more polys, sure, there are higher res textures, sure, there are more complex and dynamic lights. But you don't really have the kind of gap we used to have between, say, the PS1 and PS2 for instance. Diminishing returns and all that. The problem is that, in my experience, this eye candy only matters for about 10 minutes when you get into a game, then you stop really paying attention to how it looks and you focus on the gameplay and story etc...
Meanwhile, all the dynamic stuff is still fairly primitive IMO. At around the four-minute mark in the video they briefly mention the water effects. They don't spend a lot of time on them, and for good reason: they don't look particularly good.
When I was a kid in the 90s I definitely expected future games to look a lot better, but I also expected gameplay and world interaction to progress massively. Fully interactive environments you could treat like the real world: destroy everything, dig holes, build things, advanced physics, great AI for NPCs, etc...
It saddens me that the AAA video game industry is almost entirely focused on eye candy first and foremost. That being said I concede that I'm clearly in the minority, after all the Uncharted games are generally considered to be good games when I find them incredibly boring.
I hope that now that we can reach near-photorealism in games they'll have to come up with something new to keep pushing the envelope.
I must have expressed myself poorly, my point was not that I don't think it looks good, my point is that I feel like all this effort is focused on a single metric, making stuff look good in trailers.
Remember when HL2 was announced, with its super fancy physics engine? How objects would bounce and realistically react with each other, how you could stack things and come up with puzzles that just used the physics engine without hardcoded scripts? HL2 was gorgeous, but it was more than bump mapping and complicated lighting models. You couldn't port HL2 to GoldSrc and have it play the same.
Meanwhile almost 2 decades later we're back to fully scripted puzzles, mostly non-interactive environments and AI that's generally limited to a glorified A* algorithm.
This tech demo doesn't look decent, it looks really good, there's no arguing about it.
I will even go further and say that I feel like I've been playing the same game for 25 years. The FPS genre gets some new tricks, but it's still the same basic game with a few new mechanics. I can pick up a game I've never played before and generally be reasonably competent at it within a few minutes; a stark contrast to when I picked up Tribes for the first time and was completely useless in online play for at least a week. Graphics improvements have been incremental, with diminishing returns, in recent generations.
I think part of that is that the UI and control system has standardized on a mostly similar set of features with somewhat standardized mechanics. They've iterated over the last twenty years towards what is likely the peak (or at least a local maximum) of what you can do with a mouse/keyboard or controller (and controllers are generally normalizing as well, except for small differentiators). I.e., there's only so much control over your physical space you can have with a mouse and keyboard while still allowing good character movement, so there are limitations on what can easily be done.
On the other hand, VR based games are wildly experimenting and iterating because the control interface is so different (while at the same time fairly intuitive in that it maps to our reality better). Superhot in VR (which I have on the Quest) definitely doesn't feel like the same game from the last 25 years. Half-Life Alyx is supposed to be an amazing experience (I haven't had a chance to try it yet, and I won't for a long time likely), but that's not supposed to be because it looks so much better, but because it offers a wildly different and new experience.
So, if you're tired of feeling like you're playing the same old games, try VR. The Quest is probably the cheapest way into this space, though it's not actually cheap at $400, and on its own it gates you off from a lot of the premier experiences that are available if you have a powerful PC with a good graphics card and a PC VR headset (the Quest can also play those over a cable, so you aren't locked to Quest games, although maybe not at quite the visual quality of some other headsets).
I guess I just feel there is a real lack of creativity in gaming these days. There used to be a rich variety of games: puzzle/adventure games, RTSes, simulation games, flight/space games of all kinds. Now it's just kind of condensed down to FPSes, repeated sports franchises, and a few outliers (KSP and Minecraft, to name a couple).
Interestingly, I won an Oculus Go at a meetup about 9 months ago. I almost gave it away; I didn't really know what I had won and thought it was going to be an upgraded Google Cardboard.
But man I was surprised at how cool it was. And yeah- everything feels really raw and unpolished and experimental, kind of like the early days of the internet. My biggest gripe is that the games are all kind of vapid and on rails, there are no games where you can kind of go and explore, but still its all quite interesting and has me interested in buying a "proper" VR setup.
> My biggest gripe is that the games are all kind of vapid and on rails, there are no games where you can kind of go and explore
Some titles on the Quest are a bit better in that respect, but you still have to research the game to know. I think that's also a factor of how new the space is. It's much harder to allow users to roam where they want and still have the polish allowing them to interact how they want with the environment, it took quite a few years for that to happen in the traditional game space, both because of the effort required to craft that world, and because the engines were still working on abstractions that made it easy to fill those worlds with things that could be interacted with without too much programmer/artist work.
A good example of this is all the ways people have found to break Half-Life: Alyx by working around the game's expectations. People have found a way to hoard items, which would normally be hard because you have to actually hold them, by throwing them in buckets and bringing the bucket along (for example, hauling around a bucket with 20 grenades in it, when normally you don't have a way to carry them all). If you put too many items in a bucket and pick it up, the physics system slows to a crawl and can crash the game. They designed the game so there weren't too many interactable objects in any one scene (because that makes sense), but the wholly new paradigm means people were easily able to do crazy stuff to break it.
I've been meaning to pick up Arizona Sunshine for Quest, and that might be along the lines of what you're looking for (but not for Go). The Walking Dead game is supposed to come to Quest eventually too (but again, probably not the Go), and reviews of that look pretty open in terms of movement and exploration.
Maybe it's just because I am old enough to remember playing Super Mario on a CRT, but that attitude sometimes astounds me. Like I have read many assessments that the latest Doom game's incredible framerates are nothing to be impressed by, because the game is "visually underwhelming". I think a lot of people don't understand how difficult real-time computer graphics are to implement, and I have difficulty viewing the world through their eyes.
You can appreciate the technology behind it, but once the graphics in a game (which you play to escape real life) begin to mimic real life, it can feel underwhelming.
For instance, a game like Okami on PS2 is far more impressive to me than some 4K tech demo. When it comes down to actually playing a game, I don't give a shit about the polygon count, I give a shit if it's fun to play.
I love some pretty gfx as well, but I have the same feeling. Lately I've been finding the simplistic gfx of Minecraft and Terraria just fine, especially since the mechanics are rather deep and enjoyable (for me at least). The simple graphics even add a bit of charm.
Same here! Minecraft captured the hearts and minds of nearly every demographic, even with "rudimentary" blocky graphics. It never would have happened if it had gone for realism.
Lately I've been playing this great mountain biking game called Lonely Mountains: Downhill that uses this gorgeous minimalist aesthetic. I'm also playing Trails in the Sky on the PSP. Those graphics just age beautifully.
I'm old enough to remember playing games on a CRT with a 32x24 screen resolution (ZX80, ZX81) and whilst I am thoroughly impressed with graphics technologies of today, I do have to echo the feeling that AAA games have slumped to a local minimum of effort in gameplay and story.
But on the flipside, there's literally thousands of indie games with innovative, interesting gameplay, if not sparkling graphics.
I kinda agree with you, but I think it's expected: demos are just ads for technical people. So in a way you know that it's too good to be true.
Plus, they don't actually explain how it works, how the demo was made, or what the limits of their technology are. They only show the good side, so people naturally want to know the not-so-good side as well.
Or maybe it takes thoughtfulness to read that comment for what it is: an opinion about priorities of game studios.
And it resonates with me. I like eye candy like almost everyone and can appreciate technological marvels in CGI (and currently working with UE4 after hours, creating my own assets, I can appreciate how much work goes into modern game graphics). At the same time, I do feel many games these days try to use high-resolution textures and pretty shaders to paper over incoherent storylines and lack of gameplay depth. I think it's an entirely valid point to make.
That's funny you bring up the PS3, because my first reaction to the parent commenter and the OP video was how I remember thinking things couldn't get much better than the PS3 demos [0] – which now look comparatively primitive – but I've had that same feeling with PS4 and now PS5. But PS5 really does seem to be getting close to real-time interactive realism.
But I agree with you that I expected games to be much "better" by 2020, back when I was a kid playing 7th Guest. I guess I couldn't understand back then how much manual, hard-to-scale labor and budget would have to go into story, dialogue, art, acting (plus salaries of A-list movie stars), mo-cap, etc. I expected in-depth scripted NPC behavior, like in 1992's Ultima VII [1], to be extremely commonplace and basic by now, but I obviously didn't understand back then what actual AI and emergent simulation requires (versus manual scripting, and testing of that scripting)
The Killzone 2 trailer was completely fake (i.e. not running on actual PS3 hardware) at the time it was released, and the real-time version of it running on PS3 later unveiled looked much, much worse. This was debunked at the time.
I think ray tracing has the potential to be a big leap forward. Real-time lighting and shadows are still incredibly limited, and most games can only handle dynamic shadows from a handful of light sources at a time. I think we don't see the difference yet because baked lighting gets a really good result, but once we live for a while in a world where every flickering candle and every emissive texture in a game is a fully-fledged, shadow-casting, dynamic light source, we'll look back at current-gen games and see how static and artificial the lighting is.
But I do agree with you in general that more advanced simulation is a huge, largely untouched opportunity in games.
Ray tracing of anything resembling real environments with very large amounts of detail isn't coming tomorrow; maybe 10-20 years down the road. So far, most tech demos for current ray tracing stick to low-polygon use cases.
I think there's just an obvious law of diminishing returns on certain things.
If you double the number of triangles, it doesn't take long to get to a clean-looking circle; the next time you double it, it doesn't look that much better unless you're zooming way in on it.
So there's two things here, I think:
* There are tons of other improvements, like dynamic lighting. This turns into more of an immersion/ realism thing, and less of a "detail" thing. It's weird when your characters shadow doesn't move the way you expect with the light sources, it's weird when one part of the room is totally dark because a block is just slightly in the way of a light source, and the light isn't bouncing around it.
* With VR, this will all matter way more. You're going to have people taking objects in the game and putting them right up to their face - suddenly the difference between X triangles and 2X triangles is really noticeable again, relative to a 3rd person view on a monitor that's at least a foot or two away from you, where objects are always at a distance. Immersion is an extremely important factor for VR, so optimizing there makes a lot of sense.
So while today's mediums may not demonstrate these wins, they open up possibilities for new mediums.
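To put a rough number on the diminishing-returns point above: for a regular n-gon inscribed in a unit circle, the maximum gap between the polygon and the true circle (the sagitta of one edge) is 1 - cos(pi/n), which shrinks by roughly 4x every time you double the side count. A quick back-of-the-envelope sketch (the function name is just illustrative):

```python
import math

def polygon_error(n: int) -> float:
    # Max deviation of a regular n-gon inscribed in a unit circle
    # from the circle itself: the sagitta of one edge, 1 - cos(pi/n).
    return 1 - math.cos(math.pi / n)

for n in (8, 16, 32, 64, 128):
    print(f"{n:4d} sides -> max error {polygon_error(n):.6f}")
```

Since 1 - cos(x) is approximately x^2/2 for small x, each doubling only quarters an already-tiny error, which is why extra triangles stop mattering unless the camera gets very close.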
I agree that storytelling and gameplay are still the top priorities. Look at Nintendo! I enjoy Zelda (not exactly the best-looking game) more than most "photorealistic" games in recent years. (I will also admit I am now a lot older and don't have time for serious gaming.)
But still, the graphics in UE5 are stunning. The last time I was stunned by 3D graphics was Crysis, and that was, I think, over 10 years ago.
The Zelda games (and Nintendo in general) are the perfect proof that great graphics are about way more than more sophisticated or technically advanced graphics.
Many games on the Switch look way better than generic AAA games on PS4 and Xbox. Granted the Switch games are still somewhat held back by limited hardware.
Personally I'd take Breath of the Wild over some generic UE4 game any day.
The UE5 demo shows real-time GI. Current systems often rely on baking, so this feature really allows games to have more dynamic environments at the same visual quality as last gen. That's surely a win for gameplay.
However I see the new features in this particular demo as a game changer in many practical ways other than just eyecandy:
- Real-time dynamic GI makes baking lights unnecessary, speeding up iteration for environment artists.
- Also using this dynamic GI it is possible to create new gameplay mechanics based on dynamic lighting (for example in the demo the roof of the cave falls down and the area becomes lit, making more things visible)
- The new animation system lets developers create natural motions that automatically adjust to the environment, which should also save a lot of developer time.
- The demo explanation video mentions that the statue model assets are imported directly from ZBrush without any postprocessing (with the original triangle count, without baking any normal maps/LODs), which also saves artists time importing their work into Unreal (although the file-size cost is probably too high to use this in every scenario).
I agree. Many projects seem to spend so many resources on graphics, which are getting quite impressive (not photoreal, though who cares), but game mechanics aren't getting better. Netcode has improved, allowing for more variety in multiplayer experiences, but other than that I get a sense of a game-mechanics winter.
> there are more polys, sure, there are higher res textures, sure, there are more complex and dynamic lights
What I'm noticing is too much focus on fancy lights and over-the-top postprocessing effects: annoying crap like ambient occlusion, screen-space reflections that hog a lot of performance just to reflect your character in water puddles placed everywhere to show the effect off, and so on.
And not enough focus on good old textures and polygons. Approaching spherical objects closely still makes the polygons very very obvious. Staring directly at walls still shows how low res textures are.
> You could destroy everything, dig holes, build things
> They don't really spend a lot of time on them and for a good reason, they don't look particularly good.
Even CGI water looks typically very fake (except from afar), because we don't really have very good models for water/liquids. That's certainly not the priority for games either.
I think with every engine- or console generation people will go "this is photorealistic!", but in practice / real games it doesn't look / feel that way, or (maybe more likely) you get used to it until you run into the next best thing.
That said, The Mandalorian has used the Unreal engine to render real-time backgrounds for scenes, so it's good enough for that at least.
And in films they don't need to do real-time, they can take their time to render a scene.
In this demo I thought it "didn't feel that way" more due to the camera positioning and character than anything else, though. The environment seemed quite indistinguishable from pre-rendered CGI.
A highly scripted tech demo is also not such an accurate representation of what real-world results are going to look like.
Yes they can render it in-engine, but they can use bespoke character animations which don't have any blending artifacts, and they can put the camera on a rail and tune the assets and particle effects until they are certain every single frame can be rendered in under 16ms. They can hide billboards and other rendering tricks and be certain the camera is never going to hit them at an angle which gives them away.
Real-world game play scenarios are much more unpredictable, and the results are likely to fall well shy of this mark.
It's definitely impressive but kind of obviously not pre-rendered still. There is quite some aliasing in smaller details of the more complex objects and the edges of shadows aren't soft. The water effects looked quite bad still.
Also, the character model's hands+feet commonly clipped the walls slightly, and sometimes had that odd sliding motion in which the overall limb seems approximately stationary with respect to the surface they're touching, but the actual edge does not, it slides about a little.
I mean... that lighting was really, really good, and I think this is the first triangle-based demo where the surfaces really don't look oddly angular almost anywhere (maybe with exception of the stalactites). But it's a far cry from the hand-tuned look of something prerendered.
It definitely wasn't indistinguishable from pre-rendered CGI; there are a lot of shadow artifacts, for one thing. It looks great, but we get this every few years from the real-time people: they're amazing at picking off the low-hanging fruit of offline rendering and finding a quality-compromised solution that works in real time, but they're chasing a moving target.
Often, choices made to support a larger range of devices require compromises on graphics quality. It doesn't make a lot of sense to build a game that only 5% of your customers can afford to run.
Next-gen consoles will have to become more ubiquitous before the previous gen consoles are left out of game releases.
The water effects just after the four-minute mark looked odd, while the fixed assets like the statues later on looked great. I kept going back to the water and questioning some of the lighting effects too.
Don't get me wrong, the fixed assets look amazing and I look forward to seeing more done with this system.
It didn’t seem to reflect well the way cloth moves when sliding across another piece of cloth. When she was climbing, I’d expect the scarf to sort of wrinkle/bunch up, then fall to her side, not just cleanly slide off.
However I could see them adjusting the physics for the demo to ensure the cloth actually fell by her side so we could see it swaying as she climbed.
Really? To me it felt like it was in the uncanny valley for cloth. When she climbed, it seemed to cling at a certain distance from the body at all times, like it had a fixed range it could swing to/from the body but couldn't exceed.
Bingo. Look at the way her tied hair bounces when she's climbing. Totally unnatural. Also the body movements: the speed of turning is too uniform/fluid, whereas real turns vary in speed within a single motion.
Though I was totally blown away by the detail of the rock texture and very realistic lighting.
It's not so much that the water looked off (which it did compared to the rock), it's that the character was completely unencumbered by the water. If you're walking through ankle-deep water, your feet are going to slow down as they drag through the water and go faster through air, and the character's stride didn't account for the increased drag. It'd be the same as if you put that super-realistic character in a stylized game like Hollow Knight without adapting her animation to 15fps like the rest of the characters are.
The animation was great, but the graphics were much better, and one can't help but notice the quality difference even though both are awesome works of art.
That and the climbing scene. As a climber there's no way to do that in those boots! Or, given the apparent story context, to do it that fast: nearly speed-climbing for an on-sight just doesn't look right. And her body movement for it was all wrong.
Yes but isn't that obvious? Based on the quick talk at the start I didn't go into this video expecting literally everything to be photo realistic. The developers were quite clear that the two big things they wanted to demo were:
1. Better real-time global illumination
2. Ultra high poly models
Then they also briefly focused on a few other features, mostly ones that they already shipped. The Lara Croft adventure scenario was an excellent choice to let them show their new work off in a context with many sudden lighting changes, naturalistic assets, arbitrary changes in scene geometry, huge changes in horizon distance and other things that have historically been difficult for real time engines due to their reliance on very CPU intensive pre-processing passes to build static data structures.
From what I've seen the look of water is usually not provided by the engine itself. Water is really hard to make look good and a lot of it depends on the specific needs of a game.
The water actually seemed to be acting like a smaller amount of water would, as if the character were six inches tall and walking through a tiny puddle or something.
The graphics are better, but camera and animations are a dead giveaway. It was fine when the camera was relatively static and far away, but shots with more movement became really noticeable. (Watch carefully, you can see the character's hand go through the rock in some places, or not make full contact in others. Once you notice it's hard to un-see.)
I'm sure this is all fixable, but some part of me wonders how much larger game budgets will have to become (or correspondingly, how much less content will be offered) in order to achieve the new standard in production qualities.
It's strange that the character wasn't more photorealistic in this demo. If you Google Image search you can find more photorealistic characters from previous versions of the Unreal Engine. I wonder if there might be some trade-off being made between photorealistic character and lifelike character movement.
I assumed the same. If you also make the character more realistic, you run the risk of making the demo even more unbelievable. As it is, there are some hints that support their claim that it runs on a PS5, like some videogame-like animation transitions, the character's movements, and the character model itself.
The problem isn't that her design is a bit stylized; it's that her face looks kind of... not-face-like in closeups (missing some shininess and subsurface scattering, IMO).
It's different from, e.g., Moana, where skin is kind of marzipan-y but still recognizably skin-like.
This is the best way to do graphics IMO. Instead of pushing the tech to its limits and landing in the uncanny valley, it's better to back off a bit from the limits of the tech and perfectly execute a more stylized result.
Triangles that are just one pixel on the screen, dynamic lighting and overall rendering this close to photorealism; it's all the other aspects of the games that limit the experience.
Physics engine, character movement, etc. could still be improved.
The quality of the graphics makes the lack of realism in the animations more jarring for me. When the character touches a surface it just doesn't look right in the slightest. It's quite frustrating actually. My brain seems ready to see realistic contact but instead sees a body moving around unnaturally near a surface that it's supposed to be in contact with.
I think it's the global illumination that really makes the difference. You can't get to realism just by drawing more polygons, but if light acts in believable ways (especially when the scene changes or lights move around) it really starts to look like real life.
Well, I may be wrong, but I was under the impression that the Unreal Engine will also work on the next XBOX (and any PC) and that the PS5 was less powerful than the next XBOX.
It remains to be seen how the PS5 and the new XBox compare in terms of performance.
The specs of the new XBox are slightly higher on paper, but the PS5 is doing some really interesting things in terms of optimization. Basically the XBox is going for high constant clock speeds, and the PS5 is shifting priority between the CPU and GPU components of the SOC so each gets boosted performance when it's most relevant.
Both consoles are doing very interesting things with asset streaming and decompression of assets directly from the SSD to video memory which are going to open up new opportunities not only in visual quality, but in terms of how flexibly game worlds can be designed.
Probably we will have to wait and see how all of this plays out in terms of real-world performance.
Highly dangerous of the PS people to build non-deterministic performance into their console. Developers who try to push every ounce of power out of it will hate it.
edit: Maybe deterministic isn't the correct word. What I mean is that you can design a physics system that you can ensure runs on the PS5 CPU. But then the graphics boys make an upgrade and suddenly some power is diverted to the graphics and the physics system no longer works correctly. This is still deterministic, but a nightmare to work with when optimizing for hard real-time requirements. The only way this can be done sanely is if the developers can fix this power budget and thus restore "determinism".
Mark Cerny's tech talk emphasised that performance is deterministic:
'So how does boost work in this case? Put simply, the PlayStation 5 is given a set power budget tied to the thermal limits of the cooling assembly. "It's a completely different paradigm," says Cerny. "Rather than running at constant frequency and letting the power vary based on the workload, we run at essentially constant power and let the frequency vary based on the workload."'
It's clear from the presentation that it's not possible to run the CPU at full power and the GPU at full power at once. This means the CPU will be stealing performance from the GPU or vice versa when the system is pushed to its limits. This means the things you can do graphically are directly connected to how much the CPU is loaded. This will be very hard to optimize for, since seemingly unconnected physical pieces of hardware are influencing each other.
The only way I can see this going well if this balance is fixed by the developer. E.g. the developer specifies I want 60% of the power budget to go to the GPU and 40% to the CPU. If this is handled dynamically by the PS5... oh boy.
> the CPU will be stealing performance from the GPU or vice versa
Another way to think about it is that very few games are using 100% CPU capacity and 100% GPU capacity at the same time. This model gives the developer a combined compute budget, which can be allocated as required for the task.
> Another way to think about it is that very few games are using 100% CPU capacity and 100% GPU capacity at the same time.
Right, but to the GP's point, for those games which do aim to max out both at the same time (or at least get as close to that as possible), this could be a development pain point unless the developer has control over which gets prioritized. Once upon a time this sort of maximization was a common goal for console game development, though I don't know how true that holds nowadays.
So then you would tune your game to use 50% CPU and 50% GPU if the requirements are equal on both sides.
But I think this style of hardware also changes the way you make optimization decisions. I.e. in a world where you have a fixed CPU/GPU budget, you might try to think of ways to move work to the CPU if for instance your GPU is saturated. But if the relative capacity is variable, you can just do things in the most efficient way overall, and you just have to make sure you don't exceed your overall budget.
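The shared-budget trade-off being discussed can be sketched with a toy model. All of the numbers below (the 200 W budget, per-component caps, linear power-to-clock scaling) are invented for illustration and are not actual PS5 figures; the point is just that raising one side's share of a fixed budget lowers the clock available to the other side.

```python
# Toy model of a fixed combined power budget shared between CPU and GPU.
# All numbers are made up for illustration, not real hardware specs.

TOTAL_POWER_W = 200.0  # hypothetical combined budget

def clocks_for_split(cpu_share):
    """Return (cpu_ghz, gpu_ghz) for a given CPU share of the budget.

    Assumes, purely for illustration, that clock speed scales linearly
    with allotted power up to a per-component cap.
    """
    cpu_power = TOTAL_POWER_W * cpu_share
    gpu_power = TOTAL_POWER_W * (1.0 - cpu_share)
    cpu_ghz = min(3.5, 3.5 * cpu_power / 80.0)    # caps/ratios are invented
    gpu_ghz = min(2.23, 2.23 * gpu_power / 180.0)
    return cpu_ghz, gpu_ghz

# A CPU-heavy frame vs a GPU-heavy frame under the same total budget:
print(clocks_for_split(0.40))  # CPU maxed out, GPU clock reduced
print(clocks_for_split(0.10))  # GPU hits its cap, CPU clock reduced
```

Under a fixed-allocation scheme the developer would pin `cpu_share` once; under a dynamic scheme the platform would move it around per workload, which is exactly the predictability concern raised upthread.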
The PS still sells itself heavily based on console exclusives (Uncharted, The Last of Us, Horizon Zero Dawn) that are ostensibly well-tuned to the hardware. Guess we'll have to wait until HZD makes it to PCs to see how easily it's ported.
If the developer can control the balance then it's fine. If this balance cannot be controlled by the developers then it's a nightmare. It's not clear to me this is developer controlled.
I don't know to what degree it is deterministic or controlled by the developer. It might be interesting if, for instance, graphically heavy games can essentially trade CPU performance they're not using for additional GPU headroom in an intentional, explicit way.
"As for Microsoft’s Xbox Series X, Sweeney isn’t saying the new Xbox won’t be able to achieve something similar; both are using custom SSDs that promise blazing speeds. But he says Epic’s strong relationship with Sony means the company is working more closely with the PlayStation creator than it does with Microsoft on this specific area."
Keep in mind it's contending with YouTube compression too. A lot of the artifacts you see around the character and other moving objects are likely compression artifacts.