On Mon, 22 Jul, 4:03 PM UTC
3 Sources
[1]
Nvidia GeForce RTX 4090 review: the best way to waste $1,600
Nvidia GeForce RTX 4090 | MSRP: $1,600

"Shocking as it sounds, the RTX 4090 is worth every penny. It's just a shame most people won't be able to afford this excellent GPU."

Pros:
- Huge leaps in 4K gaming performance
- Excellent ray tracing performance
- High power and thermals, but manageable
- DLSS 3 performance is off the charts

Cons:
- Very expensive
- DLSS 3 image quality needs some work

The RTX 4090 is both a complete waste of money and the most powerful graphics card ever made. Admittedly, that makes it a difficult product to evaluate, especially considering how much the average PC gamer is looking to spend on an upgrade for their system.

Debuting Nvidia's new Ada Lovelace architecture, the RTX 4090 has been shrouded in controversy and cited as the poster child for rising GPU prices. As much as it costs, it delivers on performance, especially with the enhancements provided by DLSS 3. Should you save your pennies and sell your car for this beast of a GPU? Probably not. But it's definitely an exciting showcase of how far this technology can really go.

About this review

The RTX 4090 has been through a lot since it originally released. As we wind down the generation, we're revisiting Nvidia's flagship GPU to see how it holds up in 2024. Throughout the review, we've added additional notes on features, pitfalls, and pricing to better reflect the reality of buying the RTX 4090 in 2024.

Nvidia RTX 4090 specs

As mentioned, the RTX 4090 introduces Nvidia's new Ada Lovelace architecture, as well as chipmaker TSMC's more efficient N4 manufacturing process. Although it's impossible to compare the RTX 4090 spec-for-spec with the previous generation, we can glean some insights into what Nvidia prioritized when designing Ada Lovelace.

The main focus: clock speeds. The RTX 3090 Ti topped out at around 1.8GHz, but the RTX 4090 showcases the efficiency of the new node with a 2.52GHz boost clock. That's with the same board power of 450 watts, but it's running on more cores. The RTX 3090 Ti was just shy of 11,000 CUDA cores, while the RTX 4090 offers 16,384 CUDA cores.

                      RTX 4090         RTX 3090
Architecture          Ada Lovelace     Ampere
Process node          TSMC N4          8nm Samsung
CUDA cores            16,384           10,496
Ray tracing cores     144 (3rd-gen)    82 (2nd-gen)
Tensor cores          576 (4th-gen)    328 (3rd-gen)
Base clock speed      2235MHz          1394MHz
Boost clock speed     2520MHz          1695MHz
VRAM                  24GB GDDR6X      24GB GDDR6X
Memory speed          21Gbps           19.5Gbps
Bus width             384-bit          384-bit
TDP                   450W             350W

It's hard to say how much those extra cores matter, especially for gaming. Down the stack, the RTX 4080 has a little more than half of the cores of the RTX 4090, while the RTX 4070 Ti has even fewer. Although the RTX 4090 kicked off the Ada Lovelace generation, Nvidia has filled out the stack since. We've reviewed every card from Nvidia's current lineup, and you can find all of our reviews in the list below. Some cards, such as the RTX 4080 and RTX 4070 Ti, have been replaced with a refreshed Super model.
- RTX 4080 Super
- RTX 4070 Ti Super
- RTX 4070 Super
- RTX 4070
- RTX 4060 Ti
- RTX 4060

Synthetic and rendering

Before getting into the full benchmark suite, let's take a high-level look at performance. Port Royal and Time Spy from 3DMark show how well Nvidia's latest flagship scales, with a 58% gain over the RTX 3090 Ti in Time Spy, as well as a 102% increase over the RTX 3090 in Port Royal. It's important to note that 3DMark isn't the best way to judge performance, as it factors in your CPU much more than most games do (especially at 4K). In the case of the RTX 4090, though, 3DMark shows the scaling well. In fact, my results from real games are actually a little higher than what this synthetic benchmark suggests, at least outside of ray tracing.

I also tested Blender to gauge some content creation tasks with the RTX 4090, and the improvements are astounding. Blender is accelerated by Nvidia's CUDA cores, and the RTX 4090 seems particularly optimized for these types of workloads, putting up more than double the score of the RTX 3090 and RTX 3090 Ti in the Monster and Junkshop scenes, and just under double in the Classroom scene. AMD's GPUs, which don't have CUDA, aren't even close.

4K gaming performance

On to the juicy bits. All of my tests were done with a Ryzen 9 7950X and 32GB of DDR5-6000 memory on an open-air test bench. I kept Resizable BAR turned on throughout testing, or in the case of AMD GPUs, Smart Access Memory.

The RTX 4090 is a monster physically, but it's also a monster when it comes to 4K gaming performance. Across my test suite, excluding Bright Memory Infinite and Horizon Zero Dawn, which I have incomplete data for, the RTX 4090 was 68% faster than the RTX 3090 Ti. Compared to the RTX 3090, you're looking at nearly an 89% boost. That's a huge jump, much larger than the 30% boost we saw gen-on-gen with the release of the RTX 3080. And none of those numbers factor in upscaling. This is raw performance, including ray tracing, and the RTX 4090 is showing a huge lead over the previous generation.

Perhaps the most impressive showing was Cyberpunk 2077. The RTX 4090 is just over 50% faster than the RTX 3090 Ti at 4K with maxed-out settings, which is impressive enough. It's the fact that the RTX 4090 cracks 60 frames per second (fps) that stands out, though. Even the most powerful graphics cards in the previous generation couldn't get past 60 fps without assistance from Deep Learning Super Sampling (DLSS). The RTX 4090 can break that barrier while rendering every pixel, and do so with quite a lead.

Gears Tactics also shows the RTX 4090's power, winning out over the RTX 3090 Ti with a 73% lead. In a Vulkan title like Red Dead Redemption 2, the gains are smaller, but the RTX 4090 still managed a 52% lead based on my testing.

This is a huge generational leap in performance, though it's still below what Nvidia originally promised. Nvidia has marketed the RTX 4090 as "two to four times faster" than the RTX 3090 Ti, and that's not true. It's much faster than the previous top dog, but Nvidia's claim only makes sense when you factor in DLSS 3. DLSS 3 is impressive, and I'll get to it later in this review. But it's not in every game and it still needs some work. Thankfully, with the RTX 4090's raw performance, DLSS is more of a "nice to have" and less of a "need to have."

In AMD-promoted titles like Assassin's Creed Valhalla and Forza Horizon 5, the RTX 4090 still shows its power, though now against AMD's RX 6950 XT.
In Valhalla at 4K, the RTX 4090 managed a 63% lead over the RX 6950 XT. The margins were tighter in Forza Horizon 5, which seems to scale very well with AMD's current offerings. Even with less of a lead, though, the RTX 4090 is 48% ahead of the RX 6950 XT.

These comparisons are impressive, but the RTX 4090 isn't on a level playing field with its competitors. At $1,600, Nvidia's latest flagship is significantly more expensive than even the most expensive GPUs available today. With the performance the RTX 4090 is offering, though, it's actually a better deal than a cheaper RTX 3090 or RTX 3090 Ti. In terms of cost per frame, you're looking at around the same price as an RTX 3080 10GB at $700 (a quick sketch of that math follows at the end of this review). This is not the best way to judge value -- it assumes you even have the extra cash to spend on the RTX 4090 in the first place, and it doesn't account for features like DLSS 3 -- but as crazy as it sounds, $1,600 is a pretty fair price for the 4K performance the RTX 4090 offers.

Now that the launch dust has settled, make sure to read our RX 7900 XTX review to see how the RTX 4090 stacks up to other high-end GPUs.

1440p gaming performance

If you're buying the RTX 4090 for 1440p, you're wasting your money (read our guide on the best 1440p graphics cards instead). Although it still provides a great improvement over the previous generation, the margins are much slimmer. You're looking at a 48% bump over the RTX 3090 Ti, and a 68% increase over the RX 6950 XT. Those are still large generational jumps, but the RTX 4090 really shines at 4K. You start to become a little CPU limited at 1440p, and if you go down to 1080p, the results are even tighter.

And frankly, the extra performance at 1440p just doesn't stand out like it does at 4K. In Gears Tactics, for example, the RTX 4090 is 36% faster than the RTX 3090 Ti, down from the 73% lead Nvidia's latest card showed at 4K. The actual frame rates are less impressive, too. Sure, the RTX 4090 is way ahead of the RTX 3090 Ti, but it's hard to imagine someone needs over 200 fps in Gears Tactics when a GPU that's $500 cheaper is already above 160 fps.

At 4K, the RTX 4090 accomplishes major milestones -- above 60 fps in Cyberpunk 2077 without DLSS, near the 144Hz mark for high refresh rate monitors in Assassin's Creed Valhalla, etc. At 1440p, the RTX 4090 certainly has a higher number, but that number is a lot more impressive on paper than it is on an actual screen.

Ray tracing

Nvidia has been a champion of ray tracing since the Turing generation, but Ada Lovelace is the first generation where it's seeing a major overhaul. At the heart of the RTX 4090 is a redesigned ray tracing core that boosts performance and introduces Shader Execution Reordering (SER). SER is basically a more efficient way to process ray tracing operations, allowing them to execute as GPU power becomes available rather than in a straight line where bottlenecks are bound to occur. It also requires you to turn on hardware-accelerated GPU scheduling in Windows.

And it works. The margins with ray tracing are usually much slimmer, but the RTX 4090 actually shows higher gains with ray tracing turned on. In Cyberpunk 2077, for example, the RTX 4090 is nearly 71% faster than the RTX 3090 Ti with the Ultra RT preset. And that's before factoring in DLSS. AMD's GPUs, which are much further behind in ray tracing performance, show even larger differences. The RTX 4090 is a full 152% faster than the RX 6950 XT in this benchmark.
Similarly, Metro Exodus Enhanced Edition showed an 80% boost for the RTX 4090 over the RTX 3090 Ti, and Bright Memory Infinite showed the RTX 4090 93% ahead of the RTX 3090. Nvidia's claim of "two to four times faster" than the RTX 3090 Ti may not hold up without DLSS 3, but ray tracing performance gets much closer to that mark. And just like 4K performance, the RTX 4090 shows performance improvements that actually make a difference when ray tracing is turned on. In Bright Memory Infinite, the RTX 4090 is the difference between taking advantage of a high refresh rate and barely cracking 60 fps. And in Cyberpunk 2077, the RTX 4090 is literally the difference between playable and unplayable.

Since release, Nvidia has introduced DLSS 3.5, which is a huge boost for ray tracing on the RTX 4090. This feature is only in a few games, including Alan Wake 2 and Cyberpunk 2077, but it massively improves ray tracing. The Ray Reconstruction feature is an AI-powered ray tracing denoiser that improves shadows, reflections, and in some cases, even performance. You can see how much of a difference DLSS 3.5 makes in Alan Wake 2 above. The shadows are much sharper, the reflections have specular highlights, and the scene has a much softer gradient between light and dark areas. This improvement doesn't come with a performance penalty, either -- in some cases, Ray Reconstruction is even a bit faster.

DLSS 3 tested

DLSS has been a superstar feature for RTX GPUs for the past few generations, but DLSS 3 is a major shift for the tech. It introduces optical flow AI frame generation, which boils down to the AI model generating a completely unique frame every other frame. Theoretically, that means even a game that's 100% limited by the CPU and wouldn't see any benefit from a lower resolution will have twice the performance. That's not quite the situation in the real world, but DLSS 3 is still very impressive.

I started with 3DMark's DLSS 3 test, which just runs the Port Royal benchmark with DLSS off and then on. My goal was to push the feature as far as possible, so I set DLSS to its Ultra Performance mode and the resolution to 8K. This is the best showcase of what DLSS 3 is capable of, with the tech boosting the frame rate by 578%. That's insane.

In real games, the gains aren't as stark, but DLSS 3 is still impressive. Nvidia provided an early build of A Plague Tale: Requiem, and DLSS managed to boost the average frame rate by 128% at 4K with the settings maxed out. And this was with the Auto mode for DLSS. With more aggressive image quality presets, the gains are even higher.

A Plague Tale: Requiem exposes an important aspect of DLSS 3, though: It incurs a decent amount of overhead. DLSS 3 is two parts. The first part is DLSS Super Resolution, which is the same DLSS you've seen on previous RTX generations. It will continue to work with RTX 20-series and 30-series GPUs, so you can still use DLSS 3 Super Resolution in games with previous-gen cards. DLSS Frame Generation is the second part, and it's exclusive to RTX 40-series GPUs. The AI generates a new frame every other frame, but that's computationally expensive. Because of that, Nvidia Reflex is forced on whenever you turn on Frame Generation, and you can't turn it off. If you reason through how Frame Generation works, it should provide double the frame rate of whatever you're getting with just Super Resolution, but that's not the case.
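A rough mental model of why (a sketch, not Nvidia's actual pipeline): the generation work itself costs GPU time, so the rendered frame rate drops before it gets doubled. The numbers below are illustrative placeholders, not measurements.

```python
# Back-of-the-envelope model of frame generation output. Illustrative only.
def displayed_fps(base_fps: float, overhead_fraction: float) -> float:
    """Displayed frame rate when every rendered frame is followed by one AI-generated frame.

    base_fps:          frame rate with Super Resolution alone (no Frame Generation)
    overhead_fraction: share of GPU time assumed to be spent on the generation work
    """
    rendered_fps = base_fps * (1.0 - overhead_fraction)  # rendering slows down to pay for generation
    return rendered_fps * 2.0                            # every rendered frame is doubled on screen

# Hypothetical example: ~95 fps with Super Resolution alone and ~30% overhead
# gives ~66 rendered fps, so ~133 fps on screen -- well short of a naive 2 x 95 = 190.
print(displayed_fps(95.0, 0.30))
```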
As you can see in Cyberpunk 2077 below, the Frame Generation result means that the GPU is only rendering about 65 frames -- the rest are coming from the AI. With Super Resolution on its own, that result jumps up by nearly 30 fps. That's the DLSS Frame Generation overhead at play. Obviously, Frame Generation provides the best performance, but don't count out Super Resolution as obsolete. Although it would seem that Frame Generation doubles DLSS frame rates, it's actually much closer to Super Resolution on its own in practice.

You can't talk about DLSS apart from image quality, and although DLSS 3 is impressive, it still needs some work in the image quality department. Because every other frame is generated on the GPU and sent straight out to your display, it can't bypass elements like your HUD. Those are part of the generated frame, and they're ripe for artifacts, as you can see in Cyberpunk 2077 below. The moving quest marker sputters out as it moves across the screen, with the AI model not quite sure where to place pixels as the element moves. Normally, HUD elements aren't a part of DLSS, but Frame Generation means you have to factor them in.

That same behavior shows up in the actual scene, as well. In A Plague Tale: Requiem, for example, you can see how running through the grass produces a thin layer of pixel purgatory as the AI struggles to figure out where to place the grass and where to place the legs. Similarly, Port Royal showed soft edges and a lot of pixel instability.

These artifacts are best seen in motion, so I captured a bunch of 4K footage at 120 fps, which you can watch below. I slowed down the DLSS comparisons by 50% so you can see as many frames as possible, but keep in mind YouTube's compression and the fact that it's difficult to get a true apples-to-apples quality comparison when capturing gameplay. It's best to see in the flesh.

RTX 4090 Gameplay and DLSS 3 showcase

While playing, the image quality penalties DLSS 3 incurs are easily offset by the performance gains it offers. But Frame Generation isn't a setting you should always turn on. It's at its best when you're pushing ray tracing and all of the visual bells and whistles. Hopefully, it will improve as well. I'm confident Nvidia will continue to refine the Frame Generation aspect, but at the moment, it still shows some frayed edges.

Power and thermals

Leading up to the RTX 4090 announcement, the rumor mill ran rampant with speculation about obscene power demands. The RTX 4090 draws a lot of power -- 450W for the Founders Edition and even more for board partner cards like the Asus ROG Strix RTX 4090 -- but it's not any more than the RTX 3090 Ti drew. And based on my testing, the RTX 4090 actually draws a little less.

The chart below shows the maximum power draw I measured while testing. This isn't the max power -- a dedicated stress test would push the RTX 4090 further -- but games aren't stress tests, and you won't always reach max power (or even get close). Comparing other Founders Edition models, the RTX 4090 actually consumed about 25W less than the RTX 3090 Ti. Overclocked board partner cards will climb higher, though, so keep that in mind.

For thermals, the RTX 4090 peaked at 64 degrees Celsius in my test suite, which is right around where it should sit. The smaller RTX 3080 Ti with its pushed clock speeds and core counts showed the highest thermal results, peaking at 78 degrees.
All of these numbers were gathered on an open-air test bench, though, so temperatures will be higher once the RTX 4090 is in a case. The RTX 4090 stays cool and quiet, but it still draws a lot of power. In some cases, it draws too much.

Shortly after release, customers started experiencing a melting power connector on the GPU. This is due to the power cable not being fully inserted into the plug on the GPU, heating up the plastic and causing it to melt under load. It could even start a fire, with upward of 600W passing across extremely thin pins. It's been almost two years since the first reports of melting power connectors popped up. Since then, Nvidia has addressed the issue with a new revision of its power connector. I've been running an original RTX 4090 for two years without issues, as well -- as long as the cable is seated, you shouldn't have any problems.

Should you buy the Nvidia RTX 4090?

It's hard to recommend the RTX 4090 in 2024, not because it's a bad GPU, but because of where it sits in the market. It's faster than everything, but a lot of that power doesn't meaningfully impact your gameplay experience. An unforeseen surge in demand has driven up the cost of this already expensive GPU by a lot, too. You won't find any models available for list price, and you'd be scoring a deal if you found an RTX 4090 at $1,800. Demand from AI has caused the price to skyrocket, and although that price is coming down, it's not as low as it should be. The RTX 4090 is a monster, but it's past its prime.

Now isn't a great time to buy the RTX 4090 anyway. We're staring down the launch of RTX 50-series GPUs, likely at the end of this year or the beginning of next, so it's best to wait on buying the RTX 4090.
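As a footnote to the value argument in the 4K section, cost per frame is simply list price divided by average frame rate. The sketch below uses hypothetical frame rates (the review doesn't publish a single averaged fps figure for this comparison), only to show the shape of the math:

```python
# Cost per frame = price / average fps. Frame rates here are hypothetical placeholders,
# chosen only to illustrate how a faster, pricier card can roughly match a cheaper one on this metric.
cards = {
    "RTX 4090 ($1,600)": (1600, 115),   # assumed average 4K fps
    "RTX 3080 10GB ($700)": (700, 50),  # assumed average 4K fps
}

for name, (price, fps) in cards.items():
    print(f"{name}: ${price / fps:.2f} per frame")
# Under these assumptions, both land in the same ~$14-per-frame ballpark.
```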
[2]
Which GPU should you buy in each $100 price bracket?
Key Takeaways
- GPUs have become insanely expensive, making it hard to find decent options under $100, but the Intel Arc A310 is a good affordable choice for light usage.
- In the $200 price range, Intel's Arc A750 or the last-gen RX 6700 are solid options for performance and VRAM, despite some limitations.
- Top-tier GPUs like the RX 7900 XT or RTX 4070 Ti offer impressive specs for 4K gaming, with AMD and Nvidia each excelling in different areas.

The cost of PC hardware has been steadily increasing for years, but GPU prices, in particular, have blown out of proportion as of late. The amount that could buy a decent gaming PC just a few years ago can barely get you a semi-decent mid-range graphics card these days. However, there's at least one redeeming factor in the GPU industry of 2024: there are many options for the average consumer to choose from. So, we've compiled a list of the best graphics cards in each $100 price bracket to help you pick out the ideal companion for your gaming rig.

1. Around (and under) $100

Starting off with the sub-$100 GPUs, you might have some trouble finding decent current-gen options in this tier. Team Green is completely non-existent here, and trust me, you shouldn't buy the GT 1030 unless it retails for $20 (but that's a story for another time). For $100, you shouldn't expect a dedicated graphics card to provide decent frame rates. That said, the Intel Arc A310 serves as a worthwhile purchase if you want something affordable for light gaming and AV1 transcoding. The drivers on Intel discrete GPUs have gotten a lot better these days, and the A310 can deliver adequate performance at 1080p if you're willing to reduce all the graphical settings to low.

ASRock Arc A310 Low Profile: $90 (down from $100)

The outdated RX 580 from AMD is another honorable mention. The lack of official driver updates is quite disappointing, but you can still make do with third-party drivers. What really makes this card a worthwhile recommendation is its 8GB of VRAM, which pairs nicely with apps and games that are hungry for video memory. Sadly, I'm a bit hesitant to recommend it because most of the RX 580 cards from unknown brands contain significantly fewer shading units than the reference models. So, you should do thorough research before picking one up, especially if it's priced at under $100.

2. Around $200

The situation is largely the same in the under-$200 price range, as Intel's Alchemist series delivers solid performance per dollar. In particular, the Arc A750 is an amazing little GPU that packs plenty of firepower with its 2.05GHz clock frequency, 8GB of VRAM, and 32 Xe and 32 ray-tracing cores. I recently picked this one up in a sale, and I was really impressed by its performance, considering Team Blue absolutely botched the Alchemist family's launch. As long as you enable Resizable BAR on your PC, the Arc A750 can even hold its own at 1440p, which is quite a feat for a GPU that costs slightly less than $200.

ASRock Intel Arc A750 Challenger D: $180 (down from $220)

If you're someone who can't make do with 8GB of VRAM, then you might want to consider the last-gen RX 6700 from AMD.
In most cases, the RX 6700 manages to get a few more frames than its Alchemist rival, and its 12GB of VRAM lends the GPU some extra oomph in video rendering and AI-intensive workloads. The caveat with the RX 6700 is that it's hard to find, as most models are out of stock at online retailers.

Sparkle Intel Arc A750 ORC OC Edition: $200

3. Around $300

Finally, we arrive at the first (out of many) Nvidia graphics card. For just under $300, you can easily snag an RTX 4060 from most e-commerce platforms. Don't get me wrong, I still criticize Nvidia for reducing the bus and memory width on the RTX 4060, on top of crippling it on the VRAM front. However, there's no denying that it's the best graphics card in the sub-$300 tier, especially when you add ray tracing into the fray. Plus, it supports Nvidia's frame generation technology thanks to its compatibility with DLSS 3.5, making it a decent option for slow-paced titles where you might want superior graphics and higher resolution over zero input lag.

ZOTAC Gaming GeForce RTX 4060 8GB Twin Edge: from $298
MSI Gaming GeForce RTX 4060: $293

4. Around $400

The RX 6800 is yet another graphics card from AMD's outdated RDNA 2 lineup, though it rocks solid specs that cement its standing as one of the best budget GPUs in 2024. For starters, its whopping 16GB of VRAM frees it from the shackles of limited video memory that bind the RTX 4060 (and the 8GB variant of the RTX 4060 Ti). Factor in its 72 Compute Units and 72 Ray Accelerators, and the RX 6800 single-handedly decimates every GPU in the sub-$400 range in pretty much every workload, be it 1440p gaming, video editing, or even ray tracing!

XFX Speedster SWFT 319 AMD Radeon RX 6800 XT CORE: $360 (down from $460)

5. Around $500

For half a grand, AMD wins again with its RX 7800 XT. From 16GB of VRAM to a 2.43GHz boost clock, the RX 7800 XT is not only the best option for the $500 range, but it also goes toe-to-toe with the Nvidia GPU occupying the slot in the next pricing tier (more on that later). The only issue with the RX 7800 XT is that it's a power-hungry beast with its 263W TDP, and AMD recommends a 700W PSU for anyone looking to tame this behemoth.

XFX Speedster QICK319 RX 7800 XT: $491 (down from $520)
Gigabyte Radeon RX 7800 XT Gaming OC: $499

On the Nvidia side, it is worth mentioning that the RTX 4060 Ti lies in the same price range as the RX 7800 XT. I'm obviously referring to the 16GB variant of the card, as the 8GB version isn't even worth mentioning. While the 128-bit bus width was a terrible decision on Nvidia's part, the 16GB of VRAM and slightly faster boost clock speed of 2.53GHz serve as the redeeming features of the GPU. That said, the only situations where I can recommend the 4060 Ti over its AMD counterpart are when you can't upgrade your PSU or plan to leverage the superior ray-tracing and DLSS features of Nvidia's Ada Lovelace generation.

Zotac Gaming GeForce RTX 4060 Ti 16GB AMP: $450
Gigabyte GeForce RTX 4060 Ti Windforce OC: $450

6. Around $600

For this tier, you get to choose between the RX 7900 GRE from AMD and Nvidia's RTX 4070/Super variants. If you're going by sheer performance, the RX 7900 GRE is better than its Nvidia rivals. With its 16GB of VRAM and 256-bit memory bus, the RX 7900 GRE can stand its ground even at 4K with HD textures.
Unfortunately, it also requires a beefy PSU, and you'll either need a highly efficient 750W or at least an 800W power supply to keep the RX 7900 GRE satiated.

PowerColor Hellhound Radeon RX 7900 GRE: $550 (down from $580)

The RTX 4070 is also priced similarly to the RX 7900 GRE, though it barely manages to surpass the RX 7800 XT in normal games, let alone the RX 7900 GRE. Heck, it even has less VRAM (12GB) than the 16GB RTX 4060 Ti. But the reason it climbed its way onto this list is its power efficiency and, most importantly, support for Nvidia's features. When it comes to ray-tracing workloads, the RTX 4070 handily surpasses the RX 7900 GRE. Throw DLSS 3.5 and frame generation into the mix, and there's quite a bit of incentive to go with the RTX 4070.

ASUS Dual GeForce RTX 4070 OC: from $550

What's more, there are a couple of RTX 4070 Super cards available for less than $600. While they possess the same video memory as their non-Super variants, the RTX 4070 Super GPUs have more RT, shader, and Tensor cores. As such, you should grab a Super variant of the RTX 4070 if you manage to find it for under $600.

Gigabyte GeForce RTX 4070 Super WindForce OC: $600

7. Around $700

We've arrived at the upper echelon of the mid-range cards, and this is where the RX 7900 XT dominates with its impressive 20GB of VRAM. Of course, the rasterization performance is just as impressive, with 84 Compute Units and a 2400MHz boost frequency providing more than enough frames at 4K with all the graphical settings dialed up a notch. As for the drawbacks, the 7900 XT isn't as capable in ray-tracing workloads as its Nvidia counterpart, which occupies the next price tier. But for pure 4K goodness untouched by the likes of ray tracing, super-sampling, or frame generation? The RX 7900 XT is definitely the right choice!

ASRock Phantom Gaming OC Radeon RX 7900 XT: $680
Sapphire Pulse Radeon RX 7900 XT: $670 (down from $830)

8. Around $800

We've arrived at the price tier where an Nvidia GPU is our one and only choice. If your GPU budget hovers around the $800 mark, the RTX 4070 Ti makes for a solid graphics card. It's easily superior to the RX 7900 XT at ray tracing and super-sampling, and that's before you consider the Ada Lovelace family's impressive frame-generation capabilities. Likewise, the RTX 4070 Ti's CUDA cores let it surpass the RX 7900 XT in anything related to AI and machine learning.

Gigabyte GeForce RTX 4070 Ti Gaming OC 12G: $750 (down from $900)
Gigabyte GeForce RTX 4070 Ti AERO OC V2: $753

9. Around $900

Before I reveal the GPU in this category, I'll admit that we're really stretching the limits of this (and the subsequent) price range, as you'll find a handful of GPUs that go slightly over the $900 mark. Nevertheless, the RX 7900 XTX is AMD's flagship offering for GPUs based on the RDNA 3 architecture, and its specs live up to its reputation. With 96 Compute Units and an equal number of Ray Accelerators, combined with a 2.3GHz clock speed and a whopping 24GB of GDDR6 video memory, it'll curb-stomp any game you throw at it with ease.

ASRock Phantom Gaming Radeon RX 7900 XTX: $910

10. Around $1000

It should come as no surprise that Team Green dominates the high-end GPU market. For $1000, you can snag the powerhouse that is the RTX 4080 Super. Sure, the 16GB of VRAM might be an issue if you want the highest frame rates at 4K for a long time.
But otherwise, the RTX 4080 Super provides excellent performance in every modern title, especially if you value the mesmerizing lighting and shadows offered by Nvidia's current-gen ray-tracing technology.

Gigabyte GeForce RTX 4080 Super Windforce V2: $1000
PNY GeForce RTX 4080 SUPER 16GB Verto OC: $962

How will manufacturers set the price tags for the next-gen GPUs?

Once you leave the $1000 price bracket, there isn't any competition. The RTX 4090 is the only GPU left, though the fact that you need to shell out at least $700 more than an RTX 4080 Super makes it hard to recommend Team Green's current flagship offering.

With the way things are going, it doesn't look like GPU prices will stabilize any time soon. Intel's Battlemage is poised to target the budget audience, and given the company's recent track record, the next-generation graphics cards from Team Blue might turn out pretty well. However, the situation might worsen for the upper mid-range and high-end graphics cards. Rumors claim that AMD will only feature mid-range GPUs in its upcoming series. That leaves the high-end market under Nvidia's complete control, meaning we might see absolutely unhinged price tags on the RTX 5000 series lineup.
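Condensed into a quick lookup, here are the picks above as a minimal sketch. The prices are the street prices quoted in the article and will drift over time, and the pick_for_budget helper is purely illustrative:

```python
# The article's per-bracket picks, condensed into a simple lookup table.
BRACKET_PICKS = {
    100: "Intel Arc A310 (or a reputable RX 580 8GB)",
    200: "Intel Arc A750 / AMD RX 6700",
    300: "Nvidia RTX 4060",
    400: "AMD RX 6800",
    500: "AMD RX 7800 XT (or RTX 4060 Ti 16GB for DLSS)",
    600: "AMD RX 7900 GRE or Nvidia RTX 4070 / 4070 Super",
    700: "AMD RX 7900 XT",
    800: "Nvidia RTX 4070 Ti",
    900: "AMD RX 7900 XTX",
    1000: "Nvidia RTX 4080 Super",
}

def pick_for_budget(budget: int) -> str:
    """Return the article's pick for the largest bracket at or under the given budget."""
    eligible = [cap for cap in BRACKET_PICKS if cap <= budget]
    if not eligible:
        return "No dedicated GPU recommended at this budget"
    return BRACKET_PICKS[max(eligible)]

print(pick_for_budget(650))  # -> the ~$600 tier pick
```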
[3]
RTX 3050 vs Arc A750 GPU faceoff -- Intel Alchemist goes head to head with Nvidia's budget Ampere
Comparing the RTX 3050 and Arc A750 continues from our previous look at the RTX 3050 vs RX 6600, this time swapping out AMD for Nvidia in the popular $200 price bracket. Nvidia failed to take down AMD, but does it fare any better against Intel? These are slightly older GPUs, as no company has seen fit to offer anything more recent in the budget sector, so let's find out who wins this matchup.

The RTX 3050 debuted at the start of 2022 and represents Nvidia's latest xx50-class desktop graphics card, with no successor in the pipeline for the RTX 40-series (at least that we know of). The card nominally launched at $249, though we were still living in an Ethereum crypto mining world, so most cards ended up selling well above that mark. Two and a half years later, it's one of the few RTX 30-series Ampere GPUs that remains relatively available, and it's now going for around $199, the rest having been supplanted by the newer Ada Lovelace GPUs.

The Arc A750 is Intel's mainstream / budget offering based on the Arc Alchemist GPU architecture. It debuted two years ago and has been one of the most promising offerings in Intel's first lineup of discrete graphics cards in over 20 years. Driver support was a big concern at launch, but Intel has made excellent progress on that front, which could sway things in favor of the Intel GPU in this face-off. Maybe. Similar to the RTX 3050, the A750 started out as a $289 graphics card, then saw an official price cut to $249, and now it's often selling at or below $199 in the wake of new graphics card releases from Nvidia and AMD.

While we're now a couple of years past the 3050 and A750 launch dates, they remain relevant due to the lack of new sub-$200 options. So, let's take an updated look at these budget-friendly GPUs in light of today's market and see how they stack up. We'll discuss performance, value, features, technology, software, and power efficiency -- and as usual, those are listed in order of generally decreasing importance.

Intel made a big deal about the Arc A750 targeting the RTX 3060 when it launched, so dropping down to the RTX 3050 should give the A750 an easy victory... and it does. The A750 shows impressive performance compared to the 3050, with both rasterization and ray tracing performance generally favoring the Intel GPU by a significant margin. At 1080p ultra, the Arc card boasts an impressive 25% performance lead over the RTX 3050 in our 19-game geomean. That grows to 27% when looking just at rasterization performance, and conversely shrinks to 21% in the ray tracing metric. Regardless, it's an easy win for Intel. The RTX 3050 does a bit better at 1080p medium, but the Arc A750 still wins by a considerable 18% across our test suite.

That's not to say that Intel wins in every single game of our test suite, however, and the exceptions are likely the same old issue of drivers popping up. Diablo IV (with ray tracing -- it runs much better in rasterization mode) is clearly not doing well on Arc. Avatar, another more recent release, also performs somewhat worse. The Last of Us and Assassin's Creed Mirage, the other two newer releases, are at best tied (ultra settings) and at worst slightly slower on the A750. You might be noticing a pattern here. So while Intel drivers have gotten better, they're still not at the same level as Nvidia's. Most of the other games favor the A750 by a very wide margin of roughly 25-55 percent.

The lead gets more favorable for the Arc A750 as we increase resolution.
At 1440p, the Intel GPU is roughly 32% ahead of the RTX 3050 in all three of our geomeans. The only game where the RTX 3050 still leads is Diablo IV, again likely a driver issue with the ray tracing update that came out earlier this year. (Our advice would be to turn off RT as it doesn't add much in the game, but we do like to use it as a GPU stress test.)

Things get a little bumpy for the A750 at 4K ultra, with good rasterization performance -- even better compared to the 1440p results, with a whopping 43.6% lead -- but several issues in ray tracing games. The Arc card loses a decent chunk of its lead in RT, sliding down to 17% overall, with three games (Diablo IV again, Cyberpunk 2077, and Bright Memory Infinite) all performing better on the 3050. Diablo IV routinely dips into the single digits for FPS, and it generally needs more than 8GB of VRAM to run properly at 4K with all the RT effects enabled.

Performance Winner: Intel

It's a slam dunk for the Arc A750 in this round. The Intel GPU dominates the RTX 3050 even in ray tracing games, an area where Nvidia often holds a strong lead over rival AMD's competing GPUs. Intel may still have driver issues to work out, but the underlying architecture delivers good performance when properly utilized, at least relative to Nvidia's previous-generation budget / mainstream GPUs.

Pricing has fluctuated quite a bit in recent days, what with Prime Day sales. We saw the Arc A750 drop as low as $170 at one point, but over a longer period it's been hanging out at the $200 mark. RTX 3050 8GB cards also cost around $200 -- and again, note that there are 3050 6GB cards that cut performance quite a bit while dropping the price $20-$30; we're not looking at those cards. If you want to be fully precise, the RTX 3050 often ends up being the higher-priced card. The cheapest variant, an MSI Aero with a single fan, costs $189 right now, and $5 more gets you a dual-fan MSI Ventus 2X. The Arc A750, meanwhile, has sales going right now that drop the least expensive model to $174 for an ASRock card, or $179 for a Sparkle card. Both are "normally" priced at $199, however, and we expect them to return to that price shortly.

But RTX 3050 inventory isn't that great, and before you know it, you start seeing 3050 listings for $250, $300, or more. That's way more than you should pay for Nvidia's entry-level GPU. Arc A750 listings are far more consistent at hitting the $200 price point, with most models priced at exactly $199.99. There may be fewer A750 models overall (only seven exist that we know of), but most are priced to move.

Pricing: Tie

While we could call the Arc A750 the winner based on current sales, in general we've seen the two GPUs separated by just $5-$10. Prices will continue to fluctuate, and inventory of the RTX 3050 could also disappear. But for the here and now, we're declaring the price a tie because there's usually at least one inexpensive card for either GPU priced around $190-$200. If there's not, wait around for the next sale.

Unlike our recent RX 6600 vs. Arc A750 comparison, Intel's Arc A750 and Alchemist architecture look a lot more like Nvidia's RTX 3050 and its Ampere architecture. Both the Arc A750 and RTX 3050 sport hardware-based acceleration for AI tasks. The RTX 3050 boasts third-generation Tensor cores, each of which is capable of up to 512 FP16 operations per cycle. The Arc A750 has Xe Matrix eXtensions (XMX), with each XMX core able to do 128 operations per cycle.
That ends up giving the A750 a relatively large theoretical advantage. Both GPUs also come with good ray tracing hardware. But let's start with the memory configurations.

The RTX 3050 uses a mainstream 128-bit wide interface accompanied by 8GB of GDDR6 operating at 14 Gbps. The Arc A750 has a 256-bit interface, still with only 8GB of capacity, but the memory runs at 16 Gbps. The bus width and memory speed advantage on the A750 gives the Intel GPU a massive lead in memory bandwidth, with roughly 2.3X as much as the RTX 3050 -- 512 GB/s vs 224 GB/s. And there's no large L2/L3 cache in either of these architectures to further boost the effective bandwidth.

It's worth mentioning that Nvidia GPUs generally have less physical memory bandwidth than their similarly priced competition. Nvidia compensates for this with its really good memory compression, which reduces the memory load. We aren't sure if Intel is using memory compression or how good it might be, but regardless, it's obvious from the performance results that the A750's robust memory subsystem has more than enough bandwidth to defeat the RTX 3050.

The physical dimensions of the GPUs are also quite different. The Arc A750's ACM-G10 uses a colossal 406 mm^2 die, while the RTX 3050 uses a significantly smaller die that's only 276 mm^2. Of course, Ampere GPUs are made on Samsung's 8N node while Intel uses TSMC's newer and more advanced N6 node. Transistor count scales with size and process technology, with the A750 packing 21.7 billion transistors compared to the RTX 3050's comparatively tiny 12 billion.

These differences can be seen as both a pro and a con for the Arc A750. Its bigger GPU with more transistors obviously helps performance compared to the RTX 3050. However, this also represents a weakness of Intel's architecture. The fact that Intel has roughly 80% more transistors and a chip that's nearly 50% larger, on a more advanced node, means that the ACM-G10 has to cost quite a bit more to manufacture. It also shows Intel's relative inexperience in building discrete gaming GPUs, and we'd expect to see some noteworthy improvements with the upcoming Battlemage architecture.

Put another way, if we use transistor count as a measuring stick, the Arc A750 on paper should be performing more on par with the RTX 3070 Ti. That's assuming similarly capable underlying architectures, meaning Intel's design uses a lot more transistors to ultimately deliver less performance. Yes, the A750 beats up on the 3050 quite handily, but the RTX 3070 wipes the floor with it, delivering roughly 60% higher performance overall -- and still with fewer transistors and a smaller chip on an older process node.

We can make a similar point when it comes to raw compute performance. On paper, the A750 delivers 17.2 teraflops of FP32 compute for graphics, and 138 teraflops of FP16 for AI work. That's 89% more than the RTX 3050, so when combined with the bandwidth advantage, you'd expect the A750 to demolish Nvidia's GPU. It wins in performance, but even a 25% advantage is a far cry from the theoretical gap.

It's a bit more difficult to make comparisons on the RT hardware side of things. Nvidia tends to be cagey about exactly how its RT cores work, often reporting things like "RT TFLOPS" -- using a formula or benchmark that it doesn't share. What we can say is that, based on the gaming performance results, when Arc drivers are properly optimized, the RT throughput looks good.
We see this in Minecraft, a demanding "full path tracing" game that can seriously punish weaker RT implementations. Pitting the A750's 28 ray tracing cores against the 3050's 20 cores, Intel comes out roughly 40% faster at 1080p medium, as an example. So, 40% more cores, 40% more performance.

While Intel might have the hardware grunt, however, software and drivers heavily favor Nvidia. Nvidia has been doing discrete gaming GPU software/driver development much longer than Intel, and Intel has previously admitted that all of its prior work on integrated graphics drivers didn't translate over to Arc very well. Nvidia's drivers are the gold standard other competitors need to strive to match, and while AMD tends to be relatively close these days, Intel still has some clear lapses in optimizations. As shown earlier, check out some of the newer games (Diablo IV, Avatar, The Last of Us, and Assassin's Creed Mirage) and it's obvious those aren't running as well on the A750 as some of the older games in our test suite.

Intel has come a long way since the initial Arc launch, especially with older DX11 games, but its drivers still can't match AMD or Nvidia. Intel has a good cadence going with driver updates, often releasing one or more new drivers each month, but it still feels like a work in progress at times. Arc generally doesn't play as well as its competitors with day-one releases, and we've seen quite a few bugs over the past year where you'll need to wait a week or two before Intel gets a working driver out for a new game.

Features are good for Intel, but again nowhere near as complete as what Nvidia offers. Nvidia has DLSS, Intel has XeSS (and can also run AMD FSR 2/3 when XeSS isn't directly supported). As far as upscaling image quality goes, DLSS still holds the crown, but XeSS 1.3 in particular looks better in almost all cases that we've seen than FSR 2. Frame generation isn't something we're particularly worried about, but Nvidia has a bunch of other AI tools like Broadcast, VSR, and Chat RTX. It also has Reflex, and DLSS support is in far more games and applications than XeSS.

The driver interface also feels somewhat lacking on Intel. There's no way to force V-sync off, as an example, which can sometimes impact performance. (You have to manually edit a configuration file to remove vsync from Minecraft, for example -- if you don't, performance suffers quite badly.) Maybe for the uninitiated, the lack of a ton of options to play with in the drivers could feel like a good thing, but overall it's clearly a more basic interface than what Nvidia offers.

Features, Technology, and Software Winner: Nvidia

This one isn't even close, at all. The Intel Arc experience has improved tremendously since 2022, but it still lags behind. Intel isn't throwing in the towel, and we're hoping that Battlemage addresses most of the fundamental issues in both the architecture as well as the software and drivers. But right now, running an Arc GPU can still feel like you're beta testing at times. It's fine for playing popular, older games, but newer releases tend to be much more problematic on the whole.

Power consumption and energy efficiency both heavily favor the RTX 3050. Even though the Arc A750 outperforms the RTX 3050 in raw performance, the RTX 3050 gets the upper hand in all of our power-related metrics. And again, this is with a less advanced manufacturing node -- Nvidia's newer RTX 40-series GPUs that use TSMC's 4N process blow everything else away on efficiency right now.
The RTX 3050 pulled roughly 123 watts on average across our testing suite. Peak power use topped out at around 131W in a few cases, matching the official 130W TGP (Total Graphics Power) rating. The Nvidia GPU pulled a bit less power at 1080p (122W), but basically we're GPU bottlenecked in almost all of our tests, so the card runs flat out.

The Arc A750 officially has a 225W TDP, and actual real-world power use was often 20-30 watts below that. But that's still at least 60W more power use than Nvidia's card, basically matching the power draw of the significantly faster RTX 3060 Ti. 1080p medium proved to be the most power efficient setting, averaging 183W. That increased to 190W at 1080p ultra, 201W for 1440p ultra, and 203W at 4K ultra.

Divide the frame rate by power and we get efficiency in FPS/W. Across all resolutions, efficiency clearly favors the RTX 3050. It's about 25% more efficient at each test resolution and setting; for example, the 3050 averaged 0.551 FPS/W at 1080p medium compared to the A750's 0.433 FPS/W.

The RTX 3050's power efficiency speaks to Nvidia's experience in building discrete GPU architectures. Even with a less advanced "8nm" process node (Samsung 8N is actually more of a refined 10nm-class node), going up against TSMC's "6nm" node, Nvidia comes out ahead on efficiency. The A750 wins the performance comparison, but only by using 50-65 percent more power. And that's not the end of the world, as even a 200W graphics card isn't terribly difficult to power or cool, but there's a lot of room for improvement.

Power Efficiency Winner: Nvidia

This is another area where the RTX 3050 easily takes the win. Not only does it use 30-40 percent less power, depending on the game, but it's also about 25% more efficient. This isn't the most important metric, but if you live in an area with high electricity costs, that may also be a factor to consider. It's also worth noting that idle power use tends to be around 40W on the A750, compared to 12W on the 3050, with lighter workloads like watching videos also showing a pretty sizeable gap.

Unfortunately (or fortunately, depending on your viewpoint), we are calling this one a draw. Both GPUs stand up very well against each other but have totally different strengths and weaknesses.

The brunt of the Arc A750's strength lies in its raw performance. It handily beat the RTX 3050 in rasterized games, and generally comes out ahead in ray tracing games as well -- except in those cases where drivers hold it back. Thanks to its bigger GPU and higher specs, with more raw compute and memory bandwidth, it comes out with some significant performance leads. If you're willing to risk the occasional driver snafu in new releases, and you only want the most performance per dollar spent, the A750 is the way to go. It gets two points for performance, which is often the most important category in these faceoffs, plus it comes out with a relatively large lead of around 30% (unless you only play at 1080p medium).

But despite the A750's strong performance, the RTX 3050 also has a lot of upsides. Arguably, the biggest strength of the 3050 is the fact that it's an Nvidia GPU. With Nvidia hardware, you generally know you're getting a lot of extras outside of raw performance. Nvidia cards have more features (e.g. DLSS, Reflex, Broadcast, VSR) and arguably the best drivers of any GPU. CUDA apps will run on everything from a lowly RTX 3050 up to the latest RTX 4090 and even data center GPUs, as long as your GPU has sufficient VRAM for the task at hand.
Power consumption and efficiency are less essential advantages, but the gap is more than large enough to be meaningful. PC users looking to pair a budget GPU with a lower wattage power supply will be far better off with the RTX 3050 in general, as it only needs a single 8-pin (or potentially even a 6-pin) power connector and draws 130W or less. The Arc A750 doesn't usually need to use its rated 225W of power, but it's often going to end up in the 200W ballpark, and idle power also tends to be around 20W higher. That adds up over time.

For someone willing to deal with some of the potential quirks, the A750 can be enticing. It's also a much safer bet if you're not playing a lot of recent releases. People who like to play around with new tech might also find it of interest, simply because it's the first serious non-AMD, non-Nvidia GPU to warrant consideration. And if you can find the A750 on sale for $175, all the better.

People who just want their graphics card to work without a lot of fuss, and to have access to Nvidia's software ecosystem, will find the RTX 3050 a safer bet. It often won't be as fast, but it can catch up in games with DLSS support (that lack FSR/XeSS support), and you're less likely to pull your hair out in frustration.

What we want now from Intel is to see Battlemage address all these concerns and come out as a superior mainstream option. We'll see what happens with the next-generation GPU architectures in the next six to twelve months.
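Two of the numbers driving this faceoff reduce to one-line formulas: memory bandwidth is the bus width in bytes times the effective data rate, and efficiency is frame rate divided by board power. The Python below is an editor's sketch checking the figures quoted above, not part of the original testing:

```python
# Memory bandwidth = (bus width in bits / 8) * effective data rate in Gbps -> GB/s
def bandwidth_gb_s(bus_width_bits: int, data_rate_gbps: float) -> float:
    return bus_width_bits / 8 * data_rate_gbps

print(bandwidth_gb_s(128, 14))  # RTX 3050: 224.0 GB/s
print(bandwidth_gb_s(256, 16))  # Arc A750: 512.0 GB/s, roughly 2.3x the 3050

# Efficiency = average fps / average board power, using the 1080p medium figures quoted above.
rtx_3050_fps_per_watt = 0.551
arc_a750_fps_per_watt = 0.433
print(f"{rtx_3050_fps_per_watt / arc_a750_fps_per_watt - 1:.0%}")  # ~27% in the 3050's favor
```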
An overview of the current GPU market, highlighting the top-tier NVIDIA RTX 4090, mid-range options, and budget-friendly alternatives like the RTX 3050 and Intel Arc A750.
The NVIDIA GeForce RTX 4090 stands at the pinnacle of graphics card technology, offering unparalleled performance for gaming and content creation. With its Ada Lovelace architecture, the RTX 4090 boasts significant improvements over its predecessor, the RTX 3090. It delivers exceptional frame rates at 4K resolution and even manages playable framerates at 8K in some titles [1].
The RTX 4090's raw power comes at a premium price point, typically around $1,600, making it a choice for enthusiasts and professionals who demand the absolute best in graphics performance. Despite its high cost, the card's capabilities in ray tracing and AI-enhanced graphics set a new standard in the industry.
For users seeking a balance between performance and affordability, the GPU market offers several compelling options. In the $400 to $500 range, cards like the 16GB NVIDIA RTX 4060 Ti and AMD's Radeon RX 6800 and RX 7800 XT provide solid performance for 1440p gaming [2].
These mid-range GPUs offer features like ray tracing and DLSS (Deep Learning Super Sampling) or FSR (FidelityFX Super Resolution), allowing gamers to enjoy modern graphics technologies without breaking the bank. They represent a sweet spot for many users, delivering smooth gameplay at high settings in most current titles.
In the budget segment, competition has intensified with the entry of Intel's Arc series GPUs. The NVIDIA GeForce RTX 3050 and Intel Arc A750 represent two interesting options for gamers on a tight budget, typically priced around $200-$250 [3].
The RTX 3050, while being the entry-level card in NVIDIA's RTX 30 series, still offers ray tracing capabilities and DLSS support. It performs well in 1080p gaming, making it suitable for esports titles and casual gamers.
Intel's Arc A750, on the other hand, marks the company's serious entry into the discrete GPU market. It often matches or even surpasses the RTX 3050 in raw performance, especially in newer games that utilize modern APIs like DirectX 12 and Vulkan. However, it can still stumble in brand-new releases at launch and trails NVIDIA's well-established software ecosystem in driver maturity.
While raw performance is crucial, the GPU landscape is also shaped by software features and driver support. NVIDIA's long-standing presence in the market has resulted in a robust ecosystem with mature drivers and widespread application support. Their DLSS technology has been a game-changer, allowing for significant performance boosts with minimal quality loss.
AMD and Intel are catching up in this regard. AMD's FSR technology is gaining traction as an open-source alternative to DLSS, while Intel is working on improving its drivers and introducing features like XeSS (Xe Super Sampling) to compete in the AI upscaling space.
The GPU market continues to evolve rapidly, with each new generation pushing the boundaries of performance and efficiency. As technologies like ray tracing and AI-enhanced graphics become more prevalent, we can expect to see these features trickle down to more affordable price points, making advanced graphics accessible to a broader range of users.