The Advanced Research and Invention Agency (ARIA) is a UK-based research and development funding agency established to catalyze transformative scientific and technological breakthroughs.
The success of the UK’s COVID response – exemplified by initiatives like the Vaccine Taskforce and Rapid Response Funds – highlighted the importance of agile funding models. ARIA seeks to apply these lessons, operating as a flexible, independent body dedicated to high-risk, high-reward projects.
First announced in February 2021 and backed by a government investment of £800 million, the organization was inspired by the principles of the US Advanced Research Projects Agency (ARPA), now known as the Defense Advanced Research Projects Agency (DARPA).
Since the late 1950s, DARPA has been instrumental in funding transformative technological advances, including ARPANET (the precursor to the internet), GPS technology, and early voice recognition systems. Other countries have since established similar bodies, such as Japan’s Moonshot R&D program and Germany’s SPRIN-D.
Funding and project support
Traditional research funding in the UK has often been characterized by cautious investment strategies, prioritizing projects with predictable outcomes. ARIA seeks to disrupt this paradigm by providing the autonomy and resources necessary for researchers to pursue bold ideas without the constraints of conventional funding mechanisms.
Overall, ARIA operates with a budget of £800 million allocated over five years, from 2023 to 2028. The UK government provides this funding through the Department for Science, Innovation and Technology (DSIT). As an independent agency, ARIA can allocate these funds toward projects that align with its mission of unlocking significant scientific and technological advancements. This financial structure is designed to provide ARIA with the agility to respond swiftly to emerging opportunities and support projects that may not fit the traditional funding frameworks.
“Our funding terms are designed to encourage inventor-led startups and stimulate science entrepreneurship in the UK,” comments Antonia Jenkinson, ARIA’s chief finance and operating officer, on the agency’s website.
The following core principles define ARIA’s approach:
High-risk, high-reward research focus: ARIA exclusively supports projects that have the potential to create paradigm shifts in science and technology. While many funded projects may not succeed, those that do could profoundly impact society.
Strategic and scientific autonomy: ARIA operates independently in selecting research programs, funding allocation, and institutional culture. Programme Directors have full discretion over the projects they support, with minimal government intervention.
Empowering talented individuals: ARIA provides exceptional researchers and innovators the freedom to pursue their boldest ideas. Programme Directors are appointed based on their expertise and vision, allowing them to direct funding dynamically.
Financial flexibility and operational freedom: ARIA is structured to minimize bureaucratic constraints and maximize efficiency. To encourage disruptive innovation, it employs various innovative funding mechanisms, including seed grants, equity stakes, and prize-based incentives.
The agency operates through two primary funding modes: programmes and opportunity seeds. Programmes are large-scale initiatives, typically totalling £50-80 million ($63-101 million), that advance complex ideas requiring coordinated investment across multiple disciplines and institutions. Programme Directors manage a portfolio of projects within these programmes to drive significant breakthroughs.
Opportunity seeds of up to £500,000 ($631,700) support individual research teams exploring novel pathways that could inspire future programmes or evolve into standalone projects. This approach allows ARIA to fund diverse ideas and rapidly test their potential.
The agency does not retain intellectual property rights to the work it funds and generally does not require match funding. ARIA also does not take equity stakes in spinouts commercializing ARIA-funded IP. This approach is designed to encourage inventor-led startups and stimulate science entrepreneurship in the UK.
ARIA has identified several “opportunity spaces”—critically important but underexplored research areas ripe for breakthroughs. Each opportunity space is a foundation for multi-year programmes directed by the agency’s Programme Directors. Notable opportunity spaces include:
Mathematics for safe AI: This programme aims to develop technical solutions that ensure powerful AI systems interact as intended with real-world systems and populations. It combines scientific models and mathematical proofs, with the goal of enabling AI that can transform the tech sector while preventing harm to users.
Nature computes better: This research explores redefining how computers process information by exploiting natural principles, potentially leading to dramatically more efficient computing systems.
Smarter robot bodies: Focusing on creating robots capable of operating independently in complex and dynamic environments, this programme aims to develop smarter robotic systems to reduce the burden of physical labour.
Scalable neural interfaces: This area focuses on developing minimally invasive technologies to interface with the human brain at scale, aiming to transform our understanding and treatment of neurological and neuropsychiatric disorders.
Programmable plants: By programming plants, this initiative seeks to address challenges like food insecurity, climate change, and environmental degradation, ensuring a sustainable biosphere for future generations.
Through these initiatives, ARIA actively funds projects that challenge existing assumptions, open new research paths, and strive toward transformative capabilities. The agency’s commitment to high-risk, high-reward research is designed to position the UK as a leader in scientific and technological innovation, with the potential to generate significant social and economic benefits.
Key ARIA personnel and relationships with the public sector
ARIA’s leadership comprises individuals with diverse science, technology, and innovation expertise. Ilan Gur serves as ARIA’s CEO, bringing a wealth of experience from his previous roles, including his tenure as a Program Director at ARPA-E and as the founder of Activate, an organization that supports early-stage scientists in transforming research into viable products and businesses.
Antonia Jenkinson, chief finance and operating officer at ARIA, supports Gur and oversees ARIA’s financial and operational functions. The agency’s strategic direction is further guided by its board, which includes notable figures such as the entrepreneur and government advisor Matt Clifford, who oversaw the UK’s recently published AI Opportunities Action Plan, as well as Nobel laureate David MacMillan and Kate Bingham, the former head of the UK’s Vaccine Taskforce.
Its advisors also include Demis Hassabis, the co-founder and CEO of Google DeepMind. ARIA states that its board and advisors allow it to ground its high-risk, high-reward scientific exploration in diverse perspectives and expert-led governance.
ARIA operates as an independent public body under the sponsorship of the Department for Science, Innovation and Technology (DSIT). While it has significant autonomy, the agency remains subject to national security oversight and financial transparency requirements, including an annual audit by the National Audit Office.
Unlike UK Research and Innovation (UKRI), which manages a broad research funding portfolio across multiple disciplines, ARIA focuses on a narrower range of projects. However, both agencies must collaborate to ensure alignment in the UK’s research ecosystem. ARIA’s distinct model allows it to take risks that traditional funding mechanisms cannot, complementing UKRI’s more structured approach.
Future development
ARIA represents a bold new approach to research funding in the UK, drawing inspiration from the world’s most successful innovation agencies. By embracing risk, minimizing bureaucracy, and providing top researchers with unprecedented autonomy, ARIA aims to unlock breakthrough discoveries that will shape the future of science, technology, and industry.
With an initial investment of £800 million and a leadership team committed to transformative research, ARIA has the potential to cement the UK’s status as a science superpower and drive economic growth through pioneering technological advancements.
In a world where technological advancements are accelerating, ARIA’s establishment reflects a strategic commitment to ensuring that the UK remains at the cutting edge of scientific discovery and innovation. By empowering researchers to pursue visionary projects, ARIA hopes to deliver breakthroughs that could have profound and lasting impacts on not just the tech sector but wider society.
A new Brookings Institution report on generative AI (genAI) found that the more highly skilled a tech worker is, the more exposed their job is to being reshaped by the technology.
That differs dramatically from past automation technologies that primarily displaced low-skilled or physical laborers, according to Brookings, a Washington-based nonprofit public policy research firm.
While IT workers can be found in virtually any organization today, genAI will have its greatest impact on jobs in high-tech geographical regions such as Silicon Valley; Seattle, Washington; and Cambridge, Massachusetts, where highly skilled workers are concentrated. The report asserts that genAI tools will target cognitive tasks – such as writing, coding, and data analysis – impacting professionals in fields like software development, legal analysis, and finance.
The report challenges earlier analyses that predicted genAI would mainly automate routine, repetitive tasks, and it highlights the growing risk to white-collar jobs and highly educated workers. But Brookings researchers said the technology is unlikely to eliminate jobs entirely. Instead, it will create a scenario where professionals must work alongside AI, using it as an augmentation tool rather than as a full replacement.
GenAI has already proven itself to be an effective coder, assisting developers in creating new applications. That, coupled with the fact that the demand for skilled software developers is rising, will drive genAI adoption.
Research firm IDC has forecast a shortage of four million developers this year, and the US Bureau of Labor Statistics (BLS) expects nearly 200,000 developer jobs to open annually through 2030. By 2027, genAI tools that can assist in the creation, testing and operation of software are expected to be adopted by half of all enterprise software engineers, according to a study by Gartner Research.
Online coding platform Replit, for example, recently partnered with AI research company Anthropic and Google to help non-technical Zillow employees contribute to software development. The new applications are now being used to route more than 100,000 home shoppers to agents.
“The Brookings report presents a compelling case that AI will have a unique impact on knowledge workers and high-tech regions,” said Peter Miscovich, Global Future of Work Leader at JLL Consulting. “While this is a crucial shift from past waves of automation, it does not mean that AI will spare lower-level jobs entirely. Instead, AI’s influence will be widespread, reshaping industries at multiple levels.”
Miscovich referred to the Brookings report as “a bit nuanced” in that it also indicates lower-skilled technology, operations, and customer service workers will also be affected by the fast-evolving technology.
Manual workers are less affected for now, as robots haven’t fully replaced most of those jobs. But AI-enabled robots are on the rise, according to Miscovich, “and our sense is that manual job disruption will come about at some future point in time.”
Will AI really spare lower-level jobs?
Nearly four in 10 Americans believe genAI could diminish the number of available jobs, according to a study conducted by Deloitte and released in October by the New York Federal Reserve Bank. And the World Economic Forum’s Jobs Initiative study found that close to half (44%) of worker skills will be disrupted in the next five years — and 40% of tasks will be affected by the use of genAI tools and the large language models (LLMs) that underpin them.
The Deloitte results highlight younger workers’ growing anxiety around AI replacing jobs — and the actions they’re taking to improve their own job security. Deloitte’s survey of 1,874 full- and part-time workers from the US, Canada, India, and Australia — roughly two-thirds of whom are early career workers — found that 34% are pursuing a professional qualification or certification courses, 32% are starting their own businesses or becoming self-employed, and 28% are even adding part-time contractor or gig work to supplement their income.
Despite the Brookings report’s assertion that AI will primarily affect high-skilled jobs, there is evidence to suggest it will continue to replace low-wage, repetitive jobs as well, according to Miscovich, including:
Customer service and call centers: AI chatbots and virtual assistants are already replacing entry-level call center jobs. Large corporations are integrating AI-driven customer service platforms, reducing the need for human representatives.
Administrative and clerical roles: Generative AI tools can automate document processing, email responses, scheduling, and data entry – roles traditionally performed by administrative staff.
Retail and fast-food automation: AI-powered self-checkouts, robotic food preparation, and inventory management systems continue to reduce the need for human workers in retail and food service.
“Thus, while Brookings suggests that AI will hit high-tech jobs the hardest, it is probably more accurate to say that AI will affect a broad range of jobs across skill levels,” Miscovich said.
Key trends to watch, according to Miscovich, include:
New roles and AI-augmented work: Many professionals will need to shift from purely technical jobs to roles that require human-AI collaboration. For example, software engineers might shift toward AI model training and oversight rather than coding from scratch.
Upskilling and reskilling initiatives: Governments and corporations will need to invest in workforce retraining programs to help displaced workers transition into roles that require human judgment, creativity, and oversight of AI systems.
Hybrid workforce models: Companies will integrate AI into workflows but still require human employees to handle complex problem-solving, ethical considerations, and customer interactions that AI cannot fully replicate.
Rather than viewing AI as a job destroyer, it is better to consider it as a force for transformation, Miscovich said. “Workers across industries will need to adapt, reskill, and learn to collaborate with AI rather than compete against it,” he said. “The key challenge for policymakers and businesses will be ensuring that AI-driven economic shifts do not exacerbate existing inequalities but instead create new opportunities across all regions and professions.”
Sarah Hoffman, director of AI research at AlphaSense and formerly vice president of AI and Machine Learning Research at Fidelity Investments, said genAI will change the future of work and how companies deploy the fast-moving technology over the next few years.
The arrival of genAI tools in business will allow workers to move toward more creative endeavors — as long as they learn how to use the new tools and even collaborate with them. What will emerge is a “symbiotic” relationship with an increasingly “proactive” technology that will require employees to constantly learn new skills and adapt, she said in an earlier interview with Computerworld.
“As AI automates more processes, the role of workers will shift,” Hoffman said. “Jobs focused on repetitive tasks may decline, but new roles will emerge, requiring employees to focus on overseeing AI systems, handling exceptions, and performing creative or strategic functions that AI cannot easily replicate.”
Gartner analyst: Brookings is wrong
Gartner analyst Nate Suda outright disagreed with the Brookings report findings.
“Generative AI will automate some tasks, for sure — possibly even roles, in time,” Suda said. “However, the Brookings report’s conflation of genAI with automation is a fallacy. In many cases, [the] productivity impact of genAI is a second-order effect. GenAI creates a special relationship with the worker, changes the worker, and that change impacts productivity.”
Gartner found that low-experience workers in low-complexity roles, such as call centers, saw a productivity boost — not from AI’s automation capabilities, but from its ability to help them learn their job more effectively. That, in turn, led to higher productivity from workers using genAI, a phenomenon known as “experience compression,” or the ability for the technology to accelerate learning.
GenAI, Suda argued, boosts productivity for highly experienced workers in complex roles, like corporate finance or software engineering, by acting as a thought partner. That effect, he said, is known as “skill magnification,” where the technology amplifies employee capabilities, creativity, and productivity, leading to greater impact.
As time spent on tasks increases, so do the quality and quantity of output, making productivity rise disproportionately, according to Suda. “GenAI’s true strength lies in inspiring creativity and teaching, not just automating tasks,” he said.
Today we’re looking at Corsair’s latest AIO, the iCUE Link Titan 360 RX RGB. Most users considering this AIO will be drawn to its compatibility with the iCUE ecosystem, which allows for a ton of customization – including cooler upgrades like optional VRM fan modules or even a fancy LCD screen. In common scenarios, this cooler has some of the lowest noise levels I’ve seen thus far – but it isn’t without flaws, as I’ll detail below.
Will this AIO make our list of Best CPU Coolers? Let’s take a look at the specifications and features of the Titan RX RGB AIO, then we’ll go over thermal performance and noise levels.
Cooler specifications
Cooler: Corsair iCUE Link Titan 360 RX RGB
MSRP: $199 USD
Radiator Material: Aluminum
Pump Speed: Up to ~3000 RPM
Lighting: iCUE Link for CPU block and fans
Warranty: 6 years
Socket Compatibility: Intel LGA 1851/1700; AMD AM5/AM4
Unit Dimensions (including fans): 396 (L) x 120 (W) x 52mm (D)
Base: Copper cold plate
Maximum TDP (Our Testing): >265W with Intel’s Core i7-14700K
Packaging and included contents
The packaging for Corsair’s AIO is relatively standard, not much different from the average AIO. The product is secured with plastic wrappings and molded cardboard, and the fans are preinstalled for user convenience.
Included in the box are the following:
Three pre-installed 120mm fans
360mm radiator and CPU block
Pre-installed thermal paste
iCUE Link Hub
Mounting accessories for modern AMD & Intel platforms
Features of Corsair’s iCUE Link Titan 360 RX RGB
*️⃣Pre-installed Thermal Paste
Corsair includes only pre-applied thermal paste, sufficient for a single installation. This will be fine for most users, but the downside is that you’ll need to purchase additional thermal paste if you ever move the cooler to a new system or swap in a new CPU.
*️⃣27mm Radiator
The iCUE Link Titan 360 RX RGB includes a 27mm-thick radiator, the standard thickness for most liquid coolers.
*️⃣Upgrade Options
One thing that sets the iCUE Link Titan 360 RX RGB apart is the ability to upgrade the AIO with different options that mount on top of the cold plate. These optional upgrades are cheapest if you purchase the AIO directly from Corsair and customize the features during the checkout process.
The module I find most interesting is the VRM fan upgrade, which costs an additional $30 if you purchase it after already owning the AIO – but only $15 if you purchase it with the AIO.
Another upgrade option is a 2.1-inch, 480×480 IPS display that can show real-time CPU temperatures, animated GIFs, video files, and more. However, I find it hard to recommend as an aftermarket purchase due to its high $100 price.
The last upgrade option available is a simpler “Groove” module, which changes the RGB aesthetic for only $15.
*️⃣iCUE Ecosystem
The Titan 360 RX RGB is controlled by an iCUE Link hub, pictured below. This allows you to take advantage of the iCUE ecosystem.
There are a lot of options and plenty of customization available with Corsair’s iCUE Link system, which is designed to simplify PC building and cable routing while adding some interesting lighting effects.
One disadvantage of the iCUE Link hub is that it has higher power requirements than a simple USB connection can provide – you’ll need an extra 6-pin PCIe power connection to power it.
*️⃣Corsair RX Series 120mm fans
There’s more to a cooler than just the heatsink or radiator. The bundled fans have a significant impact on cooling and noise levels, as well as on how the cooler looks in your case. The fans included here aren’t standard PWM units and must be controlled through the iCUE Link hub. However, as the benchmarks will show, they enable strong cooling performance, both at full speed and when restricted to low noise levels!
These fans are pre-installed and feature a quick-connect system, designed to save the user time and offer tidy cable management.
Dimensions: 120 x 120 x 25mm
Fan Speed: 300-2100 RPM
Air Flow: Up to 73.5 CFM
Air Pressure: Up to 4.33 mmH2O
Bearing Type: Magnetic dome
Lighting: iCUE
*️⃣Full RAM Compatibility
Like almost every other AIO on the market, Corsair’s cooler doesn’t interfere with or overhang RAM DIMMs in any manner, allowing the use of memory modules of any height.
*️⃣Six-year warranty
Most AIOs on the market have a limited warranty of only 1-3 years. Corsair goes the extra mile with a six-year warranty for the Titan 360 RX RGB. This generous warranty almost negates the high price of this AIO – almost.
Things I didn’t like about this AIO
There are two primary things I didn’t like about this AIO, and one minor complaint that is subjective.
❌ First, the iCUE software didn’t always save my custom cooling settings. On multiple occasions after rebooting my computer, I had to reconfigure them manually. Sometimes, my presets wouldn’t save at all. Enabling “device memory mode” in the iCUE software prevents this problem entirely, but users shouldn’t have to take this extra step.
❌ The second thing I don’t like about this AIO is that it has higher power consumption compared to competitors. You need an extra 6-pin PCIe GPU power connection to power the hub and cooler. Even if you don’t care about some extra power consumption, this is inconvenient when modern GPUs often require whatever PCIe plugs your PSU has to offer.
❌ The last thing I don’t like about Corsair’s iCUE Link Titan 360 RX RGB is that, by default, pump and fan speeds are tied to the temperature of the liquid coolant. But this is a personal preference; you might actually prefer this type of operation, as it avoids fan bursts and delivers lower maximum noise levels.
❌ The primary disadvantage is that this design allows the CPU to reach its peak temperature and throttle during intensive workloads. The other disadvantage is that the fans remain at higher noise levels even after a workload has ended, because the liquid coolant takes much longer to cool down than the CPU itself. The toy simulation below illustrates this trade-off.
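To make the trade-off concrete, here’s a toy simulation of the two control strategies. To be clear, this is not Corsair’s actual control logic – the thermal constants, wattages, and fan curve below are all invented for illustration – but it shows why a coolant-driven fan curve ramps gently and lingers after a load ends, while a CPU-driven curve spikes and recovers almost instantly.

```python
# Toy model contrasting fan control driven by CPU temperature vs. coolant
# temperature. NOT Corsair's real control logic: every constant below is
# invented purely to illustrate the behavior described above.

AMBIENT = 25.0      # deg C
CPU_TAU = 2.0       # seconds: the CPU die heats and cools almost instantly
COOLANT_TAU = 60.0  # seconds: the liquid loop absorbs and sheds heat slowly

def fan_percent(temp_c, lo=30.0, hi=80.0):
    """Map a temperature reading to a 0-100% fan duty cycle."""
    return max(0.0, min(100.0, (temp_c - lo) / (hi - lo) * 100.0))

cpu = coolant = AMBIENT
for t in range(241):                            # 1-second steps
    load_w = 200.0 if 30 <= t < 120 else 15.0   # a 90-second stress burst
    # First-order lags: the CPU tracks the load quickly, while the coolant
    # slowly drifts toward the CPU temperature.
    cpu += (AMBIENT + 0.3 * load_w - cpu) / CPU_TAU
    coolant += (cpu - coolant) / COOLANT_TAU
    if t % 30 == 0:
        print(f"t={t:3d}s  cpu={cpu:5.1f}C  coolant={coolant:5.1f}C  "
              f"fan(cpu)={fan_percent(cpu):5.1f}%  "
              f"fan(coolant)={fan_percent(coolant):5.1f}%")
```

In this toy run, the coolant-driven duty cycle climbs slowly during the burst (no sudden fan bursts, lower peak noise) but remains elevated long after the load ends at t=120s, while the CPU-driven curve spikes and settles almost immediately – exactly the two behaviors described above.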
Testing configuration – Intel LGA1700 and LGA1851 platform
CPU: Intel Core i7-14700K
GPU: ASRock Steel Legend Radeon 7900 GRE
Motherboard: MSI Z790 Project Zero
Case: MSI Pano 100L PZ Black
System fans: Iceberg Thermal IceGale Silent
There are many factors other than the CPU cooler that can influence your cooling performance, including the case you use and the fans installed in it. A system’s motherboard can also influence this, especially if it suffers from bending, which results in poor cooler contact with the CPU.
To prevent bending from impacting our cooling results, we’ve installed Thermalright’s LGA 1700 contact frame in our testing rig. If your motherboard is affected by bending, your thermal results will be worse than those shown below. Not all motherboards are affected equally by this issue: I tested Raptor Lake CPUs in two motherboards, and while one of them showed significant thermal improvements after installing the contact frame, the other showed no difference in temperatures whatsoever. Check out our review of the contact frame for more information.
I’ve also tested this cooler with Intel’s latest platform, Arrow Lake and LGA 1851.
CPU: Intel Core Ultra 9 285K
GPU: MSI Ventus 3X RTX 4070 Ti Super
Motherboard: MSI Z890 Carbon WiFi
Case: MSI MPG Gungnir 300R
System fans: Pre-installed case fans
LGA 1700 and 1851 Installation
The installation of the AIO is simple. The following steps assume that you will mount the radiator to your case first, which is generally a good idea unless your case is very small.
1. You’ll first need to place the backplate against the rear of the motherboard. The included backplate is simple and really designed for only a single installation: it uses adhesive strips around the rubber standoffs, which makes the first installation easy but doesn’t adhere well on subsequent installations – requiring the user to hold the backplate while completing the other installation steps.
2. Next, place the CPU block on top of the CPU and use the pre-attached screws to secure it.
3. Next, install the iCUE hub, connect it to a 6-pin PCIe power cable, and then connect the cables from the hub to the CPU block and radiator.
4. Now you can power on your computer, as installation is complete.
The most interesting aspect of Nvidia’s new RTX 50 series is not the GPUs themselves – not even close – and it’s not multi-frame generation either. It’s DLSS 4 Super Resolution upscaling, which has received a substantial update aimed at improving visual quality.
The old CNN AI model has been replaced with a newer, larger Transformer model, which Nvidia claims can achieve a level of quality that wasn’t possible with previous versions of DLSS. So how good is DLSS 4 upscaling? Let’s find out.
Deeply analyzing upscaling technology is a massive undertaking, so today’s focus is on DLSS 4 versus DLSS 3 upscaling at 4K resolution. We have data for 1440p and 1080p upscaling as well, but that’s something we plan to revisit later. The goal here is to determine where DLSS 4 has improved, where it struggles, and what the new acceptable minimum quality level for gaming is.
Previously, we found that for the best experience, you likely wouldn’t want to drop below Quality mode at 4K – maybe Balanced at a pinch – but going all the way down to Performance mode usually resulted in noticeable visual artifacts in motion. Is that still the case with DLSS 4, or are those lower settings now more viable?
To assess this, we will explore image quality across 14 different areas, including texture quality, edge stability, ghosting, disocclusion, foliage, particles, water, and more.
After matching and comparing footage in all the areas where upscaling tends to struggle, we will have a solid understanding of which DLSS 4 modes are visually equivalent to DLSS 3 and how much improvement Nvidia has been able to achieve. We will also briefly examine performance, as the new Transformer model is larger and more demanding to run.
One area of confusion surrounding DLSS 4 is compatibility. DLSS 4 upscaling works on all RTX GPUs, going back to the GeForce 20 series. In other words, it’s not restricted to the latest GeForce graphics cards and you don’t need to buy an RTX 5090 to access DLSS 4 upscaling or ray reconstruction.
The only exclusive DLSS features are single-frame generation, locked to the RTX 40 and 50 series, and multi-frame generation, which is locked to the 50 series. The most useful component of DLSS 4 – the upscaling – is widely available, which is great to see.
All the visual examples in this article and the accompanying videos were captured at 4K using a GeForce RTX 5090. We tested a selection of games, all with settings such as motion blur, film grain, vignette, and chromatic aberration disabled.
For the DLSS 3 examples, each game was upgraded to the final version of DLSS 3 (3.8.10) using the DLSS Swapper utility. For DLSS 4, each game was upgraded to the latest model using Nvidia’s override feature available in the Nvidia App. This allows us to compare the best version of DLSS 3 with the best version of DLSS 4 in every title.
Textures and Blur
The most obvious improvement DLSS 4 has made relative to DLSS 3 is in texture quality. Over the last decade, temporal anti-aliasing has been reducing texture quality and overall clarity in games in an effort to eliminate aliasing.
This has created a signature TAA “blur” that’s especially noticeable in motion, and DLSS 3 – which is essentially a fancy AI-based TAA that also incorporates upscaling – wasn’t immune to this issue. DLSS 4 has made enormous strides in eliminating TAA blur, which raises texture quality relative to DLSS 3 and even native TAA.
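To see where that blur comes from, consider the history blend at the heart of any TAA-style accumulator: each output frame mixes a little of the current frame with a lot of reprojected history, and under motion the sub-pixel reprojection has to interpolate between neighboring pixels. The toy 1D example below is generic TAA math, not Nvidia’s implementation – the pattern, blend factor, and motion speed are all invented – but it shows fine two-pixel detail losing most of its contrast once the camera moves:

```python
# Minimal 1D sketch of why temporal accumulation blurs fine detail in
# motion. This is the generic TAA history blend
#     output = alpha * current + (1 - alpha) * reprojected_history
# and not DLSS internals; the signal and constants are invented.
import math

N = 48        # pixels in our 1D "image"
ALPHA = 0.1   # fraction of the new frame kept each blend
MOTION = 0.3  # sub-pixel camera motion per frame, in pixels

def scene(x):
    # High-frequency detail: a pattern that repeats every 2 pixels.
    return 1.0 if math.floor(x) % 2 == 0 else 0.0

offset = 0.0
history = [scene(i) for i in range(N)]          # frame 0 is perfectly sharp

for frame in range(1, 41):
    offset += MOTION
    reprojected = []
    for i in range(N):
        # Reproject old history to the new camera position. The sub-pixel
        # shift forces linear interpolation between neighbors, which acts
        # as a low-pass filter: the root cause of TAA smearing.
        x = min(max(i + MOTION, 0.0), N - 1.001)
        j = int(math.floor(x))
        f = x - j
        reprojected.append((1 - f) * history[j] + f * history[j + 1])
    history = [ALPHA * scene(i + offset) + (1 - ALPHA) * reprojected[i]
               for i in range(N)]
    if frame % 20 == 0:
        region = history[4:28]                  # away from clamped edges
        contrast = max(region) - min(region)
        print(f"frame {frame}: pattern contrast = {contrast:.2f} (1.00 = sharp)")
```

The accumulated image retains only a fraction of the pattern’s original contrast – this is the motion blur that the new model works to undo.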
For a better representation of image quality comparisons, check out the HUB video below:
Across all the examples we’ve seen, running games with DLSS 4 gives the appearance of using a higher texture quality setting, even though texture settings remain completely unchanged.
Another way to look at it is that DLSS 4 makes games look like they’re running at a higher resolution because the blur is eliminated. At 4K, if DLSS 4 is delivering a true 4K presentation, DLSS 3 almost looks like 1440p in comparison. For those who are highly sensitive to detail, this is a game changer.
In Cyberpunk 2077, with both versions using Quality upscaling, DLSS 4 is not only clearly sharper when stationary, but it also preserves this sharpness once movement starts. When we pause the footage while walking forward, it’s immediately obvious how much better the texture quality is with DLSS 4 and how some pattern artifacts present in DLSS 3 are reduced or eliminated.
What’s super impressive is that DLSS 4 on Performance mode has basically the same texture-preserving properties, meaning that in terms of texture detail, DLSS 4 Performance often looks better than DLSS 3 Quality in motion.
There are plenty more examples of this. In Ratchet & Clank, looking at these barrels, DLSS 4 is clearly less blurry, and this even holds true when comparing DLSS 4 Performance to DLSS 3 Quality. Later, in a cutscene, we can see DLSS 4 once again delivering higher texture quality for Ratchet’s leather helmet. In Horizon Zero Dawn Remastered, we see higher texture quality when comparing DLSS 4 Performance to DLSS 3 Quality for the ground and rocks, with both DLSS 4 Quality and Performance modes delivering a similar texture experience.
In most examples, specifically when talking about blur and texture quality, DLSS 4 is superior to native rendering – even using Performance mode. Of course, there are other issues when comparing Performance to Native that we’ll explore later, but textures in particular are undoubtedly rendered best with DLSS 4.
We even found the experience better using DLSS 4 Performance versus DLSS 3 DLAA. The upgrade is so noticeable that most gamers will finally feel like they’re experiencing the “better than native” image quality that DLSS promised way back at the GeForce 20 series launch. Games look, feel, and – most importantly – play in a way that’s less blurry.
Edge Stability
Another crucial aspect of image quality is edge stability. Not only do we want sharp, clean textures with no blur, but we also want those textures and edges to look the same between frames – with no sizzling or aliasing.
DLSS 4 is generally a significant upgrade over DLSS 3 in this area as well, as seen throughout the Horizon Zero Dawn example below. Looking at the wooden bridge, DLSS 4 maintains better stability for each line within the texture and geometry – even when comparing the typically unstable Performance mode in DLSS 4 to DLSS 3 Quality. In examples like this, even the Performance mode provides superior stability to what the Quality mode offered previously.
For a better representation of image quality comparisons, check out the HUB video below:
Where DLSS 4 sees the biggest benefits is when there is a small amount of movement between each frame. In Black Myth: Wukong’s menu, for example, with DLSS 3 Quality mode, the slight swaying of the character causes some instability with fine details and edges. With DLSS 4, even in Performance mode, the new model is much better at identifying edges, accounting for small movements between frames, and locking those edges down to provide much better stability. In some cases, like this one, DLSS 4 Performance actually comes out ahead of DLSS 3 Quality.
This poor stability in motion is generally why we haven’t recommended using the DLSS Performance mode before, even at 4K. In Ratchet & Clank, for example, even though the motion between frames is consistent and relatively slow, DLSS 3 Performance just can’t maintain stability, leading to ugly artifacts. Not only is DLSS 4 Performance sharper, but it’s also much more stable, which is immediately noticeable in most situations.
That doesn’t mean DLSS 4 Performance mode is always better than DLSS 3 Quality for edge stability. For example, when driving in Cyberpunk 2077, overhead wires and bridges are more stable using the higher render resolution of DLSS 3 Quality.
However, DLSS 4 does have advantages in other areas, like fences on the side of the road and overhead lights. Generally speaking, when comparing DLSS 4 Quality to DLSS 3 Quality, DLSS 4 offers a more stable presentation, and we have yet to come across an example where DLSS 4 edge stability is worse than DLSS 3 when using the same mode.
Ghosting
More recent versions of DLSS 3 are not especially prone to ghosting, and across the games we tested, this was rarely an issue – whether using DLSS 3 or DLSS 4. However, when ghosting does occur, it’s hit or miss whether DLSS 4 will resolve the problem compared to DLSS 3.
Cyberpunk 2077 is notably better with DLSS 4 at reducing ghosting from overhead street lights or the tailpipes on cars. This is another case where DLSS 4 Performance can look better than DLSS 3 Quality.
For a better representation of image quality comparisons, check out the HUB video below:
But when switching over to Forza Motorsport, a game that is prone to ghosting with most forms of upscaling – or even native TAA – there’s little change between DLSS 3 and DLSS 4, both using the Quality mode. In fact, if anything, ghosting appears more visible in this example with DLSS 4, though neither option is ideal.
We also saw that in Ratchet & Clank, DLSS 4 can introduce ghosting where there previously was none, when comparing the same quality settings. When Ratchet moves across the red carpet, there’s a subtle ghost trail left behind in the DLSS 4 image that isn’t present with DLSS 3, which is a disappointing regression in worst-case scenarios. But like we said, after examining 11 games in detail for this article, ghosting generally isn’t going to be a major issue or concern when enabling DLSS 4.
Disocclusion
Disocclusion is up next, and we’re primarily going to focus on the artifact you get around your character in a third-person game when moving. Essentially, disocclusion occurs when something on screen moves to reveal the background behind it. In the frame where this happens, there usually isn’t any temporal data to draw from, so the region is prone to artifacts – usually seen as a sizzling or lower-resolution halo around your character in motion, where the immediate area lacks the definition you normally see elsewhere on screen. The sketch below illustrates the idea.
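As a concrete, if highly simplified, illustration, the sketch below tags which pixels of a single scanline have usable history after a foreground object shifts by one pixel. This is generic temporal-upscaler bookkeeping with made-up values, not DLSS internals:

```python
# Toy illustration of disocclusion on one scanline. Generic temporal-
# upscaler bookkeeping with invented values -- not DLSS internals.
# Pixels revealed by a moving object have no valid history, so only the
# current low-resolution frame contributes there.

WIDTH = 20
occluder_prev = set(range(8, 12))   # pixels the object covered last frame
occluder_now = set(range(9, 13))    # the object moved one pixel right

for x in range(WIDTH):
    if x in occluder_now:
        state = "foreground (history follows the object)"
    elif x in occluder_prev:
        # Revealed this frame: the history here belongs to the occluder
        # and must be rejected, leaving one frame of low-res data.
        state = "DISOCCLUDED -> one frame of data: sizzle / halo"
    else:
        state = "background (history valid, detail accumulates)"
    print(f"pixel {x:2d}: {state}")
```

Pixel 8 is the freshly revealed strip: it has exactly one frame of low-resolution data behind it, which is why the halo around a moving character looks noisier than the rest of the image.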
For a better representation of image quality comparisons, check out the HUB video below:
While DLSS 4 is generally an improvement in texture detail and stability, it struggles to deliver an improvement in disocclusion, often creating more artifacts than DLSS 3 when comparing Quality mode to Quality mode. This was perhaps most noticeable in The Last of Us Part I: when Joel’s head moves to reveal the water and grass behind him, there’s a bit more sizzling in the DLSS 4 image than in DLSS 3 as disocclusion occurs. I suspect this is because DLSS 4 tries to preserve more detail than DLSS 3 but has the same single frame of lower-resolution data to work with, whereas DLSS 3’s disocclusion artifacts are somewhat hidden by the overall blurrier image.
I also found disocclusion to be slightly worse in Horizon Zero Dawn and Black Myth: Wukong in areas where motion was relatively fast and backgrounds were detailed. However, the impact was less pronounced in Dragon Age: The Veilguard and Star Wars Jedi: Survivor, where there was really little difference between the DLSS 4 and DLSS 3 Quality modes. In the best cases, DLSS 4 Balanced mode delivers a similar experience to DLSS 3 Quality mode in this area, but more commonly the reverse is true: a lower DLSS 3 mode shows fewer disocclusion artifacts than a higher DLSS 4 mode.
The good news is that a lot of this is nitpicking, and disocclusion artifacts, in general, are hard to spot at 4K. To really see the changes, we basically had to step through the footage frame by frame or view zoomed-in shots in slow motion. In most of these examples, it’s difficult to notice a difference in real-time gameplay, even if there is a small variation when looking closely.
The Last of Us was the only game where we found the downgrade noticeable in real time. In most other cases, whenever artifacts appear, they are confined to a small area and disappear within one or two frames.
Hair
Hair is one of the most difficult elements to upscale due to its dense, fine detail. Unfortunately, we didn’t see much improvement when comparing DLSS 3 and DLSS 4 in this area.
In games with high-quality hair rendering, like Dragon Age: The Veilguard, there is little difference in the level of detail between the two, and aliasing remains an issue with any level of upscaling – including Quality mode – relative to native rendering with TAA or DLAA. We also noticed no difference in Black Myth: Wukong, where upscaling still results in a reduction in quality compared to native rendering.
The challenges with hair rendering also apply to fur (as tested in Ratchet & Clank). While DLSS 4 is able to extract a higher level of detail from basic textured elements, fur rendering remains largely unchanged.
It’s not a situation where DLSS 4 Balanced is able to match DLSS 3 Quality – it’s very much a like-for-like comparison. Occasionally, we spotted examples, like in The Last of Us, where hair quality appeared slightly sharper, but in terms of aliasing, there is little improvement with the new Transformer model.
Particles
Particle reconstruction is another area that can be difficult for upscalers to handle, as particles are often small, fast-moving, and inconsistent. One area where DLSS 4 improves upon DLSS 3 is particle ghosting.
In the Starfield example (video below), if you look closely, you’ll see streaky trails following particles in the DLSS 3 image, which are eliminated using DLSS 4 at the same Quality mode. Even when dropping DLSS 4 to Performance mode, ghosting is cleaned up significantly, leading to a more stable presentation.
For a better representation of image quality comparisons, check out the HUB video below:
This holds true in other games like Ratchet & Clank, which was a bit less prone to ghosting, but still, any time a particle ghosted in the DLSS 3 image, it was clean in the DLSS 4 image.
A particularly stressful test is airborne spore particles in The Last of Us, and while DLSS 4 doesn’t completely eliminate ghosting here, it noticeably reduces it, resulting in a better-looking image.
However, there are still notable differences in particle quality outside of ghosting that limit the advantages of DLSS 4. Upscaled particle quality tends to degrade at lower render resolutions, like in Performance mode, so in most cases, DLSS 4 Performance compared to DLSS 3 Quality gives an edge to DLSS 3 in particle resolution at the expense of ghosting.
At equivalent quality modes, we wouldn’t say there’s a huge difference in particle resolution and edge quality, though this does depend on the game. Generally speaking, DLSS 4 Balanced mode is on par with DLSS 3 Quality mode for particle quality, with the added benefit of less ghosting in the DLSS 4 image.
Transparency
Transparent items are typically pain points for upscalers, and with DLSS 4, we didn’t see significant improvements in this area. For the most part, the quality of upscaled transparencies is heavily linked to the render resolution, so these elements appear more detailed and less pixelated when using a higher DLSS mode. In most cases, this means DLSS 4 Quality mode is equivalent to DLSS 3 Quality mode.
For a better representation of image quality comparisons, check out the HUB video below:
There were some instances where DLSS 4 Balanced was able to match DLSS 3 Quality in transparencies. But in other areas, like the holographic map in Cyberpunk, it’s more of a parity situation.
We also tested DLSS 4 Balanced vs. DLSS 3 Quality in titles like Dragon Age: The Veilguard, but generally found that DLSS 3 Quality mode delivered smoother, less aliased transparencies with better reconstruction. The output quality here seems more closely linked to render resolution than the upscaling model, whereas the reverse is true for stability and texture detail.
The exception to this is when standing still, where DLSS 4 generally has the edge, producing a more detailed image – similar to the texture quality advantages we’ve seen. However, this advantage usually disappears in motion.
Fine Detail
Fine detail reconstruction is yet another challenge, as pixel-level or near-pixel-level detail can be lost, aliased, or appear sizzled in motion when the render resolution is too low for the upscaler to handle. This issue is most noticeable on wires and other fine-line details.
Like with edge stability, this is an area where DLSS 4 has an advantage over DLSS 3. One thing that became immediately clear during testing is that DLSS 4 is less likely to introduce weird patterns in finely detailed grates or meshes, as seen in Cyberpunk 2077. This holds true even when comparing DLSS 4 Performance to DLSS 3 Quality, where DLSS 4 shows no major artifacts of this kind.
For a better representation of image quality comparisons, check out the HUB video below:
Wire detail, such as overhead power lines, is only marginally improved. Based on testing games like Cyberpunk 2077 and Indiana Jones and the Great Circle, we’d say that in this area, DLSS 4 Balanced is on par with DLSS 3 Quality.
Alternatively, when using the same mode (Quality vs. Quality), DLSS 4 provides a small improvement in reconstruction. However, these modes are still prone to sizzling and other artifacts along the edges of wires in motion, and aliasing remains a problem. The best image quality is still achieved with native rendering, like DLSS 3 DLAA, which outperforms DLSS 4 Quality mode.
For other types of fine details, DLSS 4 Balanced gets close to matching DLSS 3 Quality, though there are some cases where it falls short of the reconstruction power of a higher render resolution. Typically, these configurations trade blows, and for those who prefer Quality upscaling, there is an improvement in fine details – though it’s not as significant as the gains in texture detail and general stability.
Trees
Foliage is a key part of visual presentation in most games, so we tested tree rendering quality in different scenarios. Generally, results fall into two categories.
When trees are relatively still, DLSS 4 is a big improvement over DLSS 3 – to the point where in some games, DLSS 4 Performance looks as good as, or better than, DLSS 3 Quality. One example of this is The Last of Us Part I, which benefits from DLSS 4’s increase in sharpness and detail.
For a better representation of image quality comparisons, check out the HUB video below:
Similar results were observed in Horizon Zero Dawn, where DLSS 4 Balanced is a strong match for DLSS 3 Quality. The DLSS 4 image is generally sharper and more stable in motion, though at a lower base render resolution, it can struggle with the finest details. Performance mode isn’t always a perfect match, but in a title like Indiana Jones, the foliage benefits more from the stability improvements of DLSS 4. Even in Performance mode, trees can look less pixelated than in DLSS 3 Quality mode in motion.
The second category involves faster motion, such as trees blowing in the wind or dense fine-detail branches. Here, DLSS 4 vs. DLSS 3 quality results align more with what we saw in fine detail reconstruction.
In Star Wars Outlaws, for example, the lower render resolution of DLSS 4 Balanced is clearly visible next to DLSS 4 Quality when viewing trees swaying in the wind. This is a particularly difficult test for upscaling, and we only saw quality parity at the same mode (Quality vs. Quality). DLSS 4 is slightly more stable and less prone to sizzling, but there were times when DLSS 3 handled dense foliage better.
In Black Myth: Wukong, looking at fine tree branches, we observed similar behavior to Star Wars Outlaws, where upscaling struggles to resolve these details properly. DLSS 3 DLAA still has an edge over DLSS 4 Quality upscaling.
Typically, the best match was at the same setting, such as Quality vs. Quality. The one exception is Performance mode, which in both Black Myth: Wukong and Outlaws was a notable improvement in DLSS 4 – probably getting closer to the level of DLSS 3 Balanced.
Stability in Performance mode has improved, and for this type of foliage, it’s now in a more usable state. Reconstruction of these details is much cleaner, even if it can’t quite match the output of DLSS 4 Quality. Surprisingly, it gets quite close when viewed side by side.
Grass
We were more impressed with how DLSS 4 handles grass. With DLSS 3, grass often had a grainy look in motion because there was too much variation between frames for the upscaler to handle. This is improved with DLSS 4, as seen in Star Wars Outlaws.
Even when comparing DLSS 4 Performance to DLSS 3 Quality, the DLSS 4 image is less grainy and pixelated when grass moves in the wind. And switching from DLSS 4 Performance to Quality results in even better grass resolution and detail, with less aliasing and improved stability over DLSS 3.
For a better representation of image quality comparisons, check out the HUB video below:
This was also evident in Indiana Jones, where DLSS 4 Performance mode provided more stable and less grainy grass than DLSS 3 Quality in some situations. However, in this extremely demanding test, some detail is lost in Performance mode due to the lower render resolution, so in the worst cases, DLSS 4 Quality is still needed to match the level of detail.
Results depend on the game, though. In Dragon Age: The Veilguard, which has very fine grass detail, we didn’t see much improvement from DLSS 3 to DLSS 4, and in some areas, there were even regressions. Generally, using DLSS 4 Performance results in an image similar to DLSS 3 Performance, so a lower DLSS 4 mode won’t match higher-quality DLSS 3 settings. In Horizon Zero Dawn, DLSS 4 Balanced matched DLSS 3 Quality, with DLSS 4 providing additional sharpness and reducing blur, as previously discussed.
Across all the games we tested, DLSS 4 Balanced or Performance modes handled grass upscaling well – something that wasn’t always the case with DLSS 3. Previously, DLSS 3 Performance mode was often too grainy and unstable in games with dense foliage.
Fences
DLSS 4 does a much better job of reconstructing fences and grates than DLSS 3, giving lower modes an advantage. One of the most noticeable improvements is how DLSS 4 handles repeating patterns without producing moiré artifacts.
In The Last of Us, for example, DLSS 4 Performance has an advantage over DLSS 3 Quality, despite the much lower render resolution of the Performance mode. We saw a similar benefit in Starfield when looking at meshes, where DLSS 4 was much less likely to produce an ugly moiré artifact in motion.
In addition to this, DLSS 4 is typically better at reconstructing super fine mesh detail – down to the pixel level – and it improves the visibility of items behind fences or grates, which can appear blurred or obscured with DLSS 3.
Fences are generally more stable and less prone to sizzling as well, especially when comparing DLSS 4 Balanced to DLSS 3 Quality. The Performance mode also sees a significant improvement, with much better clarity and stability in reconstructing these elements, even if DLSS 4 Performance can’t always match DLSS 3 Quality.
Cloth
Cloth quality benefits from DLSS 4 upscaling, as DLSS 4 is better at preserving texture quality in motion – especially on character clothing, which is often moving constantly. As we saw in the mesh and fence analysis, DLSS 4 also does a better job of reducing moiré patterns when upscaling.
One of the biggest issues with DLSS 3 Performance mode was the constant moiré patterns on detailed cloth textures, but that is largely eliminated with DLSS 4 Performance mode, giving it a big quality advantage.
For a better representation of image quality comparisons, check out the HUB video below:
In general, to achieve DLSS 3 Quality-like cloth detail, you can use DLSS 4 Performance, though sometimes Balanced is required depending on the level of detail and motion. In third-person RPGs where characters wear capes, DLSS 4 is simply the better way to play, as cape elements are frequently visible on screen.
Water
We didn’t see much difference in water quality when comparing DLSS 3 and DLSS 4 – generally, the same quality modes in both deliver a similar visual experience. In Jedi: Survivor, for example, this was definitely the case, and we even noticed a regression in visuals when comparing DLSS 3 Performance to DLSS 4 Performance, with DLSS 4 surprisingly producing a less stable image.
In Horizon Zero Dawn, we saw little difference in water quality. The Performance mode has been slightly upgraded with DLSS 4 in this scenario, but overall, it’s a wash.
Rain
Lastly, we have rain. Upscaling tends to struggle here in two ways: rendering the rain particles without aliasing and preserving background detail as rain occludes it.
When it comes to the rain particles themselves, there’s little difference between DLSS 3 and DLSS 4. For example, in Ratchet & Clank, we were only able to match raindrop detail when using the same mode – so Quality vs. Quality. When dropping DLSS 4 down to Performance, there was a slightly noticeable loss of detail in the rain compared to DLSS 3 Quality.
As for the stability and quality of the image behind the rain, in Horizon Zero Dawn, we noticed that DLSS 4 Balanced was a close match for DLSS 3 Quality. Technically, we were looking at snow here – but snow is just a type of rain, right? Either way, we observed about one quality step of improvement provided by DLSS 4.
Performance Benchmarks
Let’s now take a look at performance. While the focus of this article is on visual quality, based on what we saw when testing Ray Reconstruction, we expect the new DLSS 4 Transformer model to perform slightly worse on older GeForce 30 series cards. However, we’ll need to investigate that further in a future article.
In Starfield at 4K using max settings, enabling the DLSS 4 upscaling override cost about one tier of performance. That is to say, the performance we previously achieved using DLSS 3 Quality mode is now only available with DLSS 4 Balanced mode. When comparing Quality modes directly, there was an 8% drop in frame rate. However, DLSS 4 Quality still provided a 24% performance improvement relative to native TAA rendering and a 29% improvement relative to DLSS 3 native DLAA.
In Dragon Age: The Veilguard, we saw similar results while testing the Ultra preset at 4K. Performance dropped by 7% using DLSS 4 Quality mode versus DLSS 3, meaning a full tier of performance impact across the modes. This also means that FSR Quality upscaling, which was previously slightly slower than DLSS Quality, is now slightly faster – though with a noticeably different level of image quality.
In Ratchet & Clank: Rift Apart, the upgrade from DLSS 3 to DLSS 4 results in about half a tier of performance loss. DLSS 4 Balanced now performs between DLSS 3 Balanced and Quality modes, with a 7% frame rate hit when comparing Quality modes. Upscaling is more effective in this title than in the previous two, with DLSS 4 Quality achieving a 46% higher frame rate than native TAA rendering, and DLSS 4 Performance running 79% faster.
Horizon Zero Dawn Remastered is another title where DLSS 4 causes one tier of performance loss compared to DLSS 3. Now, DLSS 4 Balanced delivers frame rates similar to DLSS 3 Quality, with about a 7% performance hit when comparing Quality modes. In the area we tested, DLSS 4 Performance provided a 39% FPS improvement over native TAA rendering.
In Black Myth: Wukong, tested using the Very High preset without path tracing, DLSS 4 caused about half a tier of performance loss. DLSS 4 Balanced sits between DLSS 3 Quality and Balanced, with a 6% frame rate drop when moving from DLSS 3 to DLSS 4. However, the Performance mode is able to nearly double the frame rate compared to native rendering.
Lastly, we have Cyberpunk 2077. Here, we observed the smallest performance impact – just a 5% drop in frame rate when comparing DLSS 4 and DLSS 3 Quality modes. Each tier remains quite comparable in this game, likely due to the lower overall frame rates from running at 4K with the Ultra ray tracing preset.
Performance Summary: DLSS 4 vs. DLSS 3
Here is the geometric mean across all six tested titles. Typically, DLSS 4 results in about half a tier of performance loss. That is to say, DLSS 4 Balanced now sits between DLSS 3 Quality and Balanced for FPS improvement, and DLSS 4 Performance now sits between DLSS 3 Balanced and Performance.
On average, the performance impact was 7% when comparing the same mode across versions. However, switching from DLSS 3 Quality mode to DLSS 4 Performance mode provided a 14% performance boost on average.
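For the curious, that headline figure is easy to reproduce from the per-game numbers quoted above. The quick sketch below uses the reported Quality-mode percentage drops as stand-ins for the raw frame-rate ratios, since the underlying FPS figures aren’t restated here:

```python
# Reproducing the summary figure: the geometric mean of the DLSS 4 vs.
# DLSS 3 Quality-mode frame-rate ratios across the six tested titles,
# using the per-title drops quoted in the benchmark section above.
from math import prod

quality_mode_drops = {
    "Starfield": 0.08,
    "Dragon Age: The Veilguard": 0.07,
    "Ratchet & Clank: Rift Apart": 0.07,
    "Horizon Zero Dawn Remastered": 0.07,
    "Black Myth: Wukong": 0.06,
    "Cyberpunk 2077": 0.05,
}

ratios = [1 - drop for drop in quality_mode_drops.values()]
geo_mean = prod(ratios) ** (1 / len(ratios))
print(f"average frame-rate ratio: {geo_mean:.3f} "
      f"-> about a {100 * (1 - geo_mean):.0f}% drop")
# -> about a 7% drop, matching the average reported above
```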
What We Learned
Overall, DLSS 4 Super Resolution upscaling is an impressive improvement for 4K gaming. Nvidia has been able to deliver noticeably higher image quality at each DLSS tier in the majority of scenarios we tested, making lower modes – like DLSS Performance – genuinely viable at this resolution without distracting or ugly artifacts.
We were blown away by how Nvidia has managed to fix TAA blur with DLSS 4, resulting in a clearer, sharper presentation with higher-quality textures. TAA has always caused a signature loss of clarity in motion, but even with DLSS 4 in Performance mode, this issue is almost entirely eliminated.
The outcome is that games with DLSS 4 enabled exhibit sharpness typically associated with running at a higher resolution, along with texture quality and detail comparable to high-quality texture packs. As an analogy, if DLSS 4 represents 4K gaming with ultra textures, DLSS 3 is more like 1440p gaming with high textures.
Anyone using DLSS 4 for the first time will immediately notice its superior clarity. While for some, this may be a more subconscious improvement, nearly everyone will prefer the DLSS 4 experience.
We were also impressed with how DLSS 4 cleans up some of the common pain points of upscaling, particularly at lower modes. DLSS 4 is much more stable in motion, a significant improvement particularly in the Performance modes, and is less prone to annoying moiré artifacts. Grass is typically more stable and less grainy in motion, fine detail reconstruction is improved (particularly for fences and grates), and particle ghosting is less likely to occur.
Not every aspect has improved, though. Hair and water rendering remain largely unchanged compared to DLSS 3, and there is a regression in some forms of disocclusion, though this is difficult to notice in real-world gaming scenarios. Additionally, in scenes that exhibited heavy ghosting with DLSS 3, ghosting may be slightly worse with DLSS 4, though in general, most scenes show either no ghosting or an improvement.
Assessing all aspects of visual quality, we believe that for 4K gaming, DLSS 4 provides a one-to-two-tier improvement. This means that the experience previously achieved with DLSS 3 Quality mode is now possible using DLSS 4 Performance, or in more demanding scenarios, DLSS 4 Balanced.
Previously, we could not recommend the DLSS Performance mode for 4K gaming due to its instability, but now we can. Across many hours of gaming, we were consistently satisfied with the image quality in this mode. Naturally, image quality improves further when using DLSS Balanced, DLSS Quality, or even native DLAA modes – especially for fine detail reconstruction – but DLSS 4 Performance is more than good enough for everyday gaming without distracting artifacts.
DLSS 4 is a heavier algorithm than DLSS 3, so there is a performance cost associated with using it. On a GeForce RTX 5080, this resulted in about a 7% FPS loss at the same quality mode, or roughly half a tier of impact.
The overall improvement from DLSS 4 equates to approximately one and a half tiers of quality enhancement at the cost of half a tier of performance, which averages out to about a full-tier improvement. Essentially, the new version of DLSS provides the visual fidelity of the Quality mode while delivering the performance uplift of the Balanced mode.
Another way to think of this is that Nvidia has essentially delivered a 15% performance gain via a software update, at least for more recent GPUs. That may not sound like a whopping improvement at first, but it's substantial for software alone.
In fact, it’s a larger gain than Nvidia achieved in hardware when moving from the RTX 4080 Super to the RTX 5080. It also surpasses the performance gain seen from the Windows 24H2 update compared to Windows 23H2 for Ryzen CPUs, which we previously described as a major upgrade. Delivering a driver/software update that allows gamers to either enjoy better visual quality in their favorite games or gain a performance boost while maintaining a similar visual level is excellent work from Nvidia.
It’s surprising that Nvidia focused so much on multi-frame generation instead of highlighting the improvements in DLSS 4 upscaling. Sure, multi-frame generation is designed to sell the new GeForce 50 series GPUs, but DLSS 4 upscaling is far more impressive.
Instead of trying to position an RTX 5070 as an RTX 4090, Nvidia could have emphasized how they have practically solved TAA blur with DLSS 4 – a benefit available to all RTX GPU owners. We bet gamers would be more likely to keep buying Nvidia GPUs if they knew they were being supported and taken care of with meaningful, broadly useful software updates over time.
Finally, where does this leave AMD and FSR upscaling tech? We don’t want to dive too deep into this considering FSR 4 is only a few weeks away. We plan to conduct a thorough analysis once it’s released, but as things stand today, FSR 3.1 (and especially FSR 2.2) is not competitive with DLSS 4. In many cases, DLSS 4 Performance mode upscaling looks significantly better than FSR 3.1 Quality mode, while also being 15-20% faster.
FSR 4 will need to be a massive leap over FSR 3.1. Simply matching DLSS 3 will not be enough – it would only maintain the generational gap that has existed between DLSS and FSR for years. Beyond that, AMD is in serious trouble if it cannot get FSR 4 into a large number of games quickly.
Nvidia laid the groundwork for driver-level DLSS upgrades years ago by using a DLL that can be intercepted and upgraded on the fly, either through the Nvidia App or third-party tools. This allows the vast majority of DLSS-supported games to be easily upgraded to DLSS 4. AMD, however, only started using DLLs with FSR 3.1. This means that in many games, Nvidia users can upgrade to DLSS 4, while AMD users may be stuck with FSR 2.2 upscaling. In those cases, Nvidia will deliver a decisively better visual experience, and how AMD responds to this challenge will be critical.
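As a rough illustration of why the DLL approach matters, the sketch below walks a Steam library and flags which installed games carry a swappable upscaler DLL. The library path and the FSR 3.1 file names are assumptions for the example; nvngx_dlss.dll is the well-known DLSS library name.

```python
import os

# Assumed default Steam library path; adjust for your system.
LIBRARY = r"C:\Program Files (x86)\Steam\steamapps\common"

# nvngx_dlss.dll is the standard DLSS upscaler DLL; the FSR 3.1 names
# below are assumptions for illustration.
TARGETS = {
    "nvngx_dlss.dll": "DLSS (swappable, can be upgraded to DLSS 4)",
    "amd_fidelityfx_dx12.dll": "FSR 3.1 DX12 (DLL-based, upgradable)",
    "amd_fidelityfx_vk.dll": "FSR 3.1 Vulkan (DLL-based, upgradable)",
}

# Walk every installed game folder and report any matching DLLs.
for root, _dirs, files in os.walk(LIBRARY):
    for name in files:
        label = TARGETS.get(name.lower())
        if label:
            print(f"{label}: {os.path.join(root, name)}")
```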
We’ve already seen FSR 4 in person at CES, and based on that experience, it appears to be a solid upgrade over FSR 3.1. Will it be enough to match DLSS 4? That remains to be seen and will require detailed analysis. What we do know for sure is that, as of right now, DLSS 4 upscaling is the best way to experience gaming. It is a significant improvement over DLSS 3, and we are eager to see what it can achieve at lower and more mainstream resolutions like 1440p.
After creating a stir with the $200 Neo, DJI is back at it with another innovative drone, the Flip. It has a first-of-a-kind folding design and shrouded propellers to keep people safe. It also integrates 3D infrared obstacle detection to track subjects and has a long list of impressive features.
With a camera borrowed from the Mini 4 Pro, the Flip can take high-quality 4K 60p video indoors or out with little risk. It comes with vlogger-friendly features like Direction Track and Quickshots for social media. And it can be flown with either DJI’s controllers, a smartphone, voice control or the push of a button.
There’s no need for a permit to fly it, and best of all, it’s priced at $439 with an RC-N3 controller included — making it one of the more affordable drones available. To see how well it serves creators, I flew it inside a castle, a 500-year-old house and out in nature. It’s not perfect (hello, strong winds and obstacles), and it faces stiff competition from the HoverAir X1 Pro, but it’s one of the most useful creator drones yet.
Design
The Flip has a clever, user-friendly design. All four propellers fold down and stack below the body like some kind of Star Wars spacecraft. DJI chose this construction so that it could incorporate permanent (rather than detachable) shrouds that protect the props to limit damage or injury in case of a collision. The design also employs large propellers that aid performance and reduce noise. By comparison, DJI’s Neo has tiny, fast-spinning propellers that make a high-pitched shrieking noise.
DJI kept the takeoff weight including battery and microSD card under 250 grams by using carbon fiber and other lightweight materials. This means the Flip can be flown without special permits. It’s still rather bulky though, especially compared to the sleek HoverAir X1 Pro.
The Flip has far better battery life than its rival, however. DJI promises up to 34 minutes max flight time (about 27 minutes in real-world conditions), compared to just 16 minutes for the X1 Pro. The batteries can be charged up quickly as well, taking about 35 minutes each with the optional four-battery charger. You’ll need a memory card, though, as the Flip only has 2GB of internal storage.
The Flip is DJI’s first lightweight drone with a 3D infrared sensor for forward obstacle avoidance and it also has a downward vision sensor for landing spot detection and stability. However, unlike the Mini 4 Pro and other DJI drones, it has no side or rear obstacle sensors.
One small issue is that the Flip’s propellers don’t have much clearance, so they can snag even in short grass on takeoff. Like the Neo, though, it’s designed more for takeoffs and landings from your hand. To that end, it has a button on the opposite side of the power switch to select a flight mode and take off automatically, just like the Neo. It can also be flown with the app, voice control or manually with a controller — either the DJI RC-N3 controller (which requires a smartphone) or the RC 2 controller with a built-in 5.5-inch display.
Features and performance
The Flip can hum along at up to 26 mph in sport mode, which isn’t bad for a light drone but a good bit slower than the Mini 4 Pro (37 mph). However, the reduced weight and large surface area mean it’s not the best in high winds. When it flew over the roof of a castle, for example, it got hit by a gust that pushed it nearly backwards.
However, the Flip can do things that you’d never attempt with a Mini 4 Pro. The full propeller protection, stability and relatively low noise make it well-suited for flying inside large rooms full of fragile objects and people. That, along with the excellent picture quality, means it’s a great choice for event professionals and content creators working in public spaces.
It’s also perfect for beginners, because like the Neo, you can launch the Flip off your hand at the push of a button. It will then fly a pre-programmed mode and land back where it started. One of those modes, Direction Track, allows the drone to fly backwards and follow you for vlogging. There’s also a follow mode for activities like running and hiking, along with social media-friendly flight modes like Dronie, Rocket, Circle, Helix and Boomerang. Note that video in these automatic modes is limited to 4K 30 fps.
At the same time, the Flip is easy to fly manually with either a smartphone or the supported controllers. Though not as maneuverable as the Mini 4 Pro, it’s easier for novices to fly and makes a stable camera platform. You do need to be careful in areas with untextured floors (painted concrete, for instance), as they can throw off the Flip’s sensors and make it unstable. When that happens, your best bet is to switch it into sport mode to disable the vision-based flight stability sensors (and then fly carefully, because obstacle detection will also be disabled).
Oddly, the Flip doesn’t work with DJI’s Goggles N3 and Motion 3 controller, unlike the much cheaper Neo. That’s because DJI sees it predominantly as a camera drone rather than an acrobatic device.
If you’re hoping to use the Flip to track yourself or others, there’s a big issue: it lacks obstacle detection in any direction except forward and down. If you’re flying the drone backwards, for instance, you have to make sure there’s nothing behind it that it can crash into. And automatic obstacle avoidance doesn’t work at all when you use the Flip’s smart features like Direction Track or ActiveTrack, though the drone will stop 10 feet before hitting anything it detects. The lack of that feature is odd, since obstacle avoidance is an important part of subject tracking, and DJI didn’t say whether it plans to rectify the issue via a future update. None of this is a problem for the HoverAir X1 Pro, which can track forwards, backwards and even sideways with full obstacle detection enabled.
The Flip has excellent range for such a tiny drone at up to eight miles, thanks to DJI’s O4 transmission system. At the same time, it can send a high-quality 1080p 60 fps video signal that can be recorded to the controller as a backup. However, if you’re flying using your smartphone over a Wi-Fi connection, range is limited to just 165 feet.
Camera
The cameras are the biggest difference between the Flip and the Neo. The Flip comes with a much larger 1/1.3-inch 48-megapixel sensor and a 24mm-equivalent wide angle F/1.7 lens. It’s the same as the one on the Mini 4 Pro and provides sharp, noise-free video in good light.
You can shoot 4K video at up to 60 fps (100 fps in slow-mo mode), rather than just 30 fps like the Neo. In addition, the Flip supports 10-bit D-LogM video that allows for improved dynamic range in bright lighting, like on ski slopes. You can also capture 12MP or 48MP RAW (DNG) photos.
Video quality is noticeably sharper than on the Neo, and the Flip is a far better drone for night shoots or dimly lit indoor settings thanks to its lower noise levels. Though the DJI Air 3S and Mavic 4 offer higher quality due to their larger sensors, there isn’t a large difference in good light. Since the Flip has just a single camera, video is noticeably noisier when using the 2x zoom. Note that when shooting in the automated modes (Direction Track, Dronie, etc.) there is no manual control of the camera to adjust exposure, shutter speed and ISO.
The HoverAir X1 Pro has the same-sized 1/1.3-inch sensor and offers very similar video quality (with a log mode as well), though I find DJI’s colors to be a touch more accurate. The HoverAir has slightly inferior 4K 60p video unless you spend an extra $200 for the Pro Max version to get 8K 30fps and 4K 120fps.
With a three-axis gimbal, the Flip shoots silky smooth video even if it’s being buffeted by winds. You can choose Follow mode to keep the camera level even when the drone banks, or FPV mode that allows the camera to tilt for a more exciting first-person perspective. Generally, video remains smooth even with sudden maneuvers, while footage from the HoverAir X1 Pro exhibits occasional jolts and janky movements.
The Flip’s camera doesn’t rotate 90 degrees like the one on the Mini 4 Pro, so maximum resolution for vertical video is 2.7K — a step backwards from the 4K 60 fps 9:16 vertical video on the Mini 4 Pro.
Wrap-up
The Flip represents a bold change in direction (and design) for DJI. Unlike open prop drones, it gives creators the ability to shoot indoors and around people with relatively high video quality. And it does this for just $439 — much less than the $759 Mini 4 Pro. However, the Flip isn’t perfect, with its main flaws being the reduced maneuverability, problems in wind and lack of obstacle avoidance when using smart modes like ActiveTrack.
As I mentioned, DJI also has some serious competition in this category, namely the $500 HoverAir X1 Pro. Both offer features like palm takeoff, intelligent flight modes and subject tracking, and both deliver similar video quality, but the HoverAir X1 Pro offers rear and side collision detection, a wider lens and more internal storage. It’s also about half the size of the Flip. For its part, the Flip has double the flight time and a much longer transmission range.
The choice then depends on what you want. If portability, subject tracking and obstacle avoidance are key, the HoverAir X1 Pro is a better option. Others who prioritize battery life, smoother video and a more established company should choose the Flip. In any case, DJI usually dominates all drone categories, so it’s nice to see multiple products facing off in this creator-centric space.
What is the best internet provider in Tucson, Arizona?
Xfinity is the best internet provider in Tucson, according to CNET’s research. The company earned the top spot thanks to its affordable pricing and wide range of plans. Xfinity has the cheapest internet in Tucson with a $20 per month plan for speeds of 150Mbps. It also has high-speed packages with download speeds up to 1,200Mbps. While fiber internet is gaining traction, its availability in Tucson is still limited. Providers like Quantum Fiber offer cutting-edge speeds, but cable and fixed wireless options from Xfinity, T-Mobile, Verizon and CenturyLink are more accessible throughout the city.
Quantum Fiber leads with an 8,000Mbps plan for $150 per month, but availability is sparse. Alternatively, Cox Communications offers a 2-gig plan at the same price.
Best internet in Tucson, Arizona
Tucson internet providers compared
| Provider | Internet technology | Monthly price range | Speed range | Monthly equipment costs | Data cap | Contract | CNET review score |
|---|---|---|---|---|---|---|---|
| CenturyLink | DSL | $55-$75 | 15-940Mbps | $15 for modem/router rental (optional) | None | None | 6.7 |
| Cox Communications | Cable | $30-$110 | 100-2,000Mbps | None | 1.25TB | None | 6.2 |
| Quantum Fiber | Fiber | $50-$165 | 500-8,000Mbps | None | None | None | 6.7 |
| T-Mobile Home Internet | Fixed wireless | $50-$70 ($35-$55 for eligible mobile customers) | 87-415Mbps | None | None | None | 7.4 |
| Verizon 5G Home Internet | Fixed wireless | $50-$70 ($35-$45 with qualifying mobile plans) | 100-300Mbps | None | None | None | 7.2 |
| Xfinity | Cable | $20-$85 | 150-1,200Mbps | $15 (optional) | 1.2TB | 1 year on some plans | 7 |
Source: CNET analysis of provider data.
Other available internet providers in Tucson
CenturyLink: CenturyLink, owned by the same parent company as Quantum Fiber (Lumen) and operating on much of the same network, runs its DSL service throughout Tucson. The company’s limited 15Mbps plan costs $55 per month, which doesn’t include the additional $15 monthly equipment rental fee.
Cox Communications: Cox Communications offers cable service in Tucson, with speeds ranging from 100Mbps to 2Gbps. It operates on a hybrid fiber-coax network, so speeds will vary, and prices start at $30 per month. While customers won’t have to deal with contracts, you will have a data cap of 1.25TB.
Verizon 5G Home Internet: Verizon’s 5G home internet service is available to about 34% of Tucson, according to the FCC, and speeds range from 100Mbps to 300Mbps. While prices start at $50 per month, eligible mobile customers could pay as little as $35 per month with qualifying phone plans.
Satellite internet: There are a few options for satellite service in Tucson: Viasat, Starlink and Hughesnet. With Hughesnet, prices start at $50 for 50Mbps. With Viasat, prices run $75 monthly for the first 12 months and speeds go up to 40Mbps. Starlink’s prices start at $120, with speeds reaching as high as 220Mbps. With each ISP, you’ll pay a monthly equipment fee and get locked into a contract (except for Starlink, which is a bit more flexible). You may also see a price hike after just a few months, depending on what kind of contract you sign at the start.
Cheap internet options in Tucson
The average starting price for internet service in Tucson is about $47 per month. Most providers start around $50 per month, but a few plans come in well below that.
Xfinity offers the cheapest plan you’ll find in Tucson with its $20-per-month plan for download speeds of 150Mbps. Or if you need more speed, Xfinity Connect costs $35 monthly and reaches speeds up to 300Mbps.
What’s the cheapest internet plan in Tucson?
| Provider | Starting price | Max download speed | Monthly equipment fee |
|---|---|---|---|
| Xfinity Connect | $20 | 150Mbps | $15 (optional) |
| Xfinity Connect More | $35 | 300Mbps | $15 (optional) |
| Cox 100 | $30 | 100Mbps | None |
| Quantum Fiber | $50 | 500Mbps | None |
| Verizon 5G Home Internet | $50 ($35 with eligible mobile plan) | 100Mbps | None |
| T-Mobile Home Internet | $50 ($35 with eligible mobile plan) | 318Mbps | None |
Source: CNET analysis of provider data.
How to find internet deals and promotions in Tucson
The best internet deals and top promotions in Tucson depend on what discounts are available during that time. Most deals are short-lived, but we look frequently for the latest offers.
Tucson internet providers, such as Xfinity and Cox, may offer lower introductory pricing or streaming add-ons for a limited time. Others, however, including Quantum Fiber and Verizon, run the same standard pricing year-round.
For a more extensive list of promos, check out our guide on the best internet deals.
How fast is Tucson broadband?
In the most recent Tucson speed tests, Xfinity came out on top in expected download speeds, with Cox just behind. In the most recent Ookla data covering the entire US, Tucson ranked 76th in median download speed. (Disclosure: Ookla is owned by the same parent company as CNET, Ziff Davis.)
Quantum Fiber, Cox Communications and Xfinity all offer high-speed gigabit plans in Tucson, with Quantum offering the fastest speeds — 8,000Mbps for $150 per month. However, most addresses will top out at 940Mbps.
Fastest internet plans in Tucson
| Provider | Starting price | Max download speed | Max upload speed | Data cap | Technology type |
|---|---|---|---|---|---|
| Quantum Fiber 8 Gig | $150 | 8,000Mbps | 8,000Mbps | None | Fiber |
| Quantum Fiber 3 Gig | $100 | 3,000Mbps | 3,000Mbps | None | Fiber |
| Cox 2 Gig | $110 | 2,000Mbps | 100Mbps | 1.25TB | Cable |
| Xfinity Gigabit Extra | $85 | 1,200Mbps | 35Mbps | 1.2TB | Cable |
| Cox 1 Gig | $70 | 1,000Mbps | 100Mbps | 1.25TB | Cable |
| Verizon 5G Home Plus Internet | $70 ($45 with eligible phone plan) | 300Mbps | 20Mbps | None | Fixed wireless |
| Xfinity Gigabit | $65 | 1,000Mbps | 20Mbps | 1.2TB | Cable |
| Quantum Fiber 1 Gig | $75 | 940Mbps | 940Mbps | None | Fiber |
Source: CNET analysis of provider data.
What’s a good internet speed?
Most internet connection plans can now handle basic productivity and communication tasks. If you’re looking for an internet plan that can accommodate video conferencing, streaming video or gaming, you’ll have a better experience with a more robust connection. Here’s an overview of the recommended minimum download speeds for various applications, according to the FCC. Note that these are only guidelines — and that internet speed, service and performance vary by connection type, provider and address.
0 to 5Mbps allows you to tackle the basics — browsing the internet, sending and receiving email, streaming low-quality video.
5 to 40Mbps gives you higher-quality video streaming and video conferencing.
40 to 100Mbps should give one user sufficient bandwidth to satisfy the demands of modern telecommuting, video streaming and online gaming.
100 to 500Mbps allows one to two users to simultaneously engage in high-bandwidth activities like video conferencing, streaming and online gaming.
500 to 1,000Mbps allows three or more users to engage in high-bandwidth activities at the same time.
For more information, refer to our guide on how much internet speed you really need.
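If you want to turn those guidelines into a quick household estimate, a lookup along these lines works; this is only a sketch of the FCC tiers above, not an official calculator:

```python
def recommended_tier(heavy_users: int) -> str:
    """Map simultaneous high-bandwidth users to an FCC-style download tier."""
    if heavy_users == 0:
        return "5-40Mbps: browsing, email, HD streaming and video calls"
    if heavy_users == 1:
        return "40-100Mbps: telecommuting, streaming and online gaming"
    if heavy_users <= 2:
        return "100-500Mbps: one to two simultaneous heavy users"
    return "500-1,000Mbps: three or more simultaneous heavy users"

print(recommended_tier(heavy_users=3))
```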
How CNET chose the best internet providers in Tucson
Internet service providers are numerous and regional. Unlike with the latest smartphone, laptop, router or kitchen tool, it’s impractical to personally test every ISP in a given city. So what’s our approach? We start by researching the pricing, availability and speed information, drawing on our own historical ISP data, the provider sites and mapping information from the Federal Communications Commission at FCC.gov.
But it doesn’t end there. We go to the FCC’s website to check our data and ensure we consider every ISP that provides service in an area. We also input local addresses on provider websites to find specific options for residents. We look at sources, including the American Customer Satisfaction Index and J.D. Power, to evaluate how happy customers are with an ISP’s service. ISP plans and prices are subject to frequent changes; all information provided is accurate as of publication.
Once we have this localized information, we ask three main questions:
Does the provider offer access to reasonably fast internet speeds?
Do customers get decent value for what they’re paying?
Are customers happy with their service?
Though the answers to those questions are often layered and complex, the providers who come closest to “yes” on all three are the ones we recommend. When selecting the cheapest internet service, we look for the plans with the lowest monthly fee, though we also factor in things like price increases, equipment fees and contracts. Choosing the fastest internet service is relatively straightforward. We look at advertised upload and download speeds and consider real-world speed data from sources like Ookla and FCC reports.
To explore our process in more depth, visit our how we test ISPs page.
What’s the final word on internet providers in Tucson?
Though we rate Xfinity as the best bet in Tucson, your address will dictate which ISP is best for you. The speeds and providers vary throughout the city and the surrounding areas, so you’ll have to plug in your location to find your best options.
Internet providers in Tucson FAQ
Is fiber internet available in Tucson?
Fiber internet is available to just over 12% of Tucson households, according to the FCC, mainly through Quantum Fiber. Prices range from $50 to $150 monthly, with plans ranging from 500Mbps to 8,000Mbps.
What is the cheapest internet provider in Tucson?
Xfinity offers the cheapest internet in Tucson with its Connect plan. For $20 per month, customers can get 150Mbps download speeds.
Is CenturyLink or Xfinity better?
CenturyLink and Xfinity are both solid options for your internet service. In Tucson, CenturyLink runs DSL and fiber, while Xfinity runs cable. Typically, we’d rate CenturyLink over Xfinity, but only where CenturyLink’s fiber plans are available. If you can’t get fiber service from CenturyLink (or its sibling brand, Quantum Fiber), we’d pick Xfinity for Tucson residents.
An unassuming loophole might be giving the U.S. government and its private contractors free rein to withhold evidence of unidentified craft traveling well above our skies — in outer space.
That’s the argument made by former Capitol Hill policy advisor and attorney Dillon Guthrie, published this January in the Harvard National Security Journal, a publication run by Harvard Law School. Guthrie spent three years as a legislative assistant to Senator John Kerry covering national security issues and later worked directly for the Senate Foreign Relations Committee. He describes this UFO loophole as a kind of “definitional gap.”
“Congress has redefined what were formerly called ‘unidentified flying objects’ [UFOs] to first ‘unidentified aerial phenomena’ [UAP in 2021], and then the following year to ‘unidentified anomalous phenomena’ [also UAP],” Guthrie told Mashable.
As Americans have been learning a lot lately in the age of Elon Musk’s DOGE, the devil is in the details when it comes to the nation’s large and complex federal bureaucracies. And an antiquated, mid-century sci-fi concept like “unidentified flying objects” packed a lot of assumptions into one short acronym. That’s a reality lawmakers determined would hinder good-faith efforts to seriously investigate the more credible cases of UAP reported by U.S. military personnel in recent years.
Did the Navy pilots who witnessed the now notorious 2015 “GoFast” UFO, for example, really see something that was aerodynamically “flying”? Or was it just floating, like a balloon? Was it or any other strange airborne sighting truly a hard physical “object”? Or were these cases all something more amorphous and temporary, like the plasmified air of ball lightning?
As a term, UAP offers a broader and more empirically conservative bucket for some of these still-unexplained events, categorizing them in a way that is not just more palatable to scientists and government officials; it has also made it harder for secretive U.S. defense and intelligence agencies to dodge the new annual reporting requirements mandated by Congress as part of the National Defense Authorization Act (NDAA). Or that’s the idea, in theory.
A careful study of the NDAA’s most recent definition for UAP, as Guthrie noted in his new article, indicates that “data of any unidentified, spaceborne-only objects may be exempt.”
“Under that current statutory definition, there are three kinds of unidentified anomalous phenomena,” Guthrie told Mashable. “The first are airborne objects, or phenomena, that are not immediately identifiable. The second are submerged objects [or phenomena] that are not immediately identifiable — so, these would be unidentified objects in the ‘sea domain,’ or underwater.”
“And then there’s this third category of UAP, which are ‘transmedium objects,’” he continued, “those that are observed to transition between, on the one hand, space and the atmosphere, and, on the other hand, between the atmosphere and bodies of water.”
“Just under that strict reading of the definition,” Guthrie said, “there is no spaceborne-only UAP.”
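To make the definitional gap concrete, here is an illustrative sketch (not an official taxonomy, just one reading of the statute as described above) of the three statutory categories as a simple check. An object observed only in space matches none of them:

```python
def reportable_uap(domains: set[str]) -> bool:
    """Illustrative reading of the NDAA definition: airborne objects,
    submerged objects, and transmedium objects (transitioning between
    space and air, or between air and water) are covered."""
    airborne = "air" in domains
    submerged = "water" in domains
    transmedium = {"space", "air"} <= domains or {"air", "water"} <= domains
    return airborne or submerged or transmedium

print(reportable_uap({"air"}))           # True: airborne UAP
print(reportable_uap({"air", "water"}))  # True: transmedium UAP
print(reportable_uap({"space"}))         # False: spaceborne-only, the loophole
```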
NASA’s UAP independent study team during a public meeting on May 31, 2023 at the space agency’s headquarters. Credit: NASA / Joel Kowsky
Any U.S. intelligence agency or branch of the military, in other words, that tracked a spacecraft circling (but respecting) Earth’s border would be legally free to withhold that incredible hard data from Congress. And dozens of very recent cases like this may well exist: last November, the Defense Department’s official UAP investigators at its All-domain Anomaly Resolution Office (AARO) disclosed that no fewer than 49 of last year’s 757 cases in their annual unclassified report involved strange sightings of UAP in outer space.
AARO’s 2024 report emphasized, however, that “none of the space domain reports originated from space-based sensors or assets; rather, all of these reports originated from military or commercial pilots or ground observers.” But, Chris Mellon — formerly a minority staff director for the Senate Intelligence Committee and a deputy assistant secretary of Defense for Intelligence under Presidents Bill Clinton and George W. Bush — believes that this lack of sensor data is likely “a failure of reporting.”
“Why is it that none of America’s unparalleled space surveillance systems captured and reported what these pilots observed?” Mellon asked in an essay for the technology news website The Debrief this month.
“Did these systems actually fail to capture any data, or is this another case,” the former Pentagon official continued, “in which the information is simply not being shared with AARO or Congress? If the pilots and ground observers were mistaken, cross referencing with these systems could help confirm that as well.”
A Ground-Based Electro-Optical Deep Space Surveillance (GEODSS) System site located on Diego Garcia island in the British Indian Ocean Territory. Credit: U.S. Space Force
Mellon, a longtime advocate for transparency on UAP, recounted his own past government service experience supervising one of these systems, the Ground-based Electro-Optical Deep Space Surveillance (GEODSS) stations now managed by the U.S. Space Force. First established in the 1980s to effectively spy on spy satellites and other foreign orbital platforms, GEODSS can track objects as small as a basketball sailing 20,000 miles or more above Earth’s surface.
“Many years ago, I asked a colleague visiting the Maui GEODSS site to inquire if the system had recorded anything ‘unusual’ in the night skies lately,” Mellon recalled. “Sure enough, just a month or so earlier, the system recorded what appeared to be 4–5 bright objects traveling parallel to the horizon.”
GEODSS personnel reportedly were baffled. These gleaming objects appeared to be at once too slow and consistent in their trajectory to be meteors but too fast, hot and high up in space to be any known aircraft.
“Site personnel had no idea what the objects were and, in those days, had no incentive to acknowledge or report the data,” according to Mellon. “That incident occurred in the 1990s, when the GEODSS system was far less capable than it is today.”
And, as Guthrie told Mashable, the full suite of America’s space monitoring, missile defense and early warning platforms could easily be recording critical, perhaps world-changing evidence about UAP — which could reveal if it’s another nation’s advanced spacecraft, something mundane, or something truly unknown. Data from these systems — including the Space Fence, NORAD’s Solid-State Phased Array Radars (SSPAR), the Space-Based Infrared Monitoring System (SBIRS), and others — could also be kept under wraps based on just this one technicality.
“If there are no requirements to report on spaceborne-only UAP,” Guthrie said, “then there are no requirements by elements of the defense and intelligence communities to report on those objects using these especially sensitive space collection sensors.”
The now well-known 2004 USS Nimitz “Tic Tac” UFO incident, made famous by The New York Times in 2017 and testified to under oath in Congress, included the monitoring of similar objects in space, according to veteran Navy radar operator Kevin Day. Then a senior chief petty officer supervising radar efforts onboard the USS Princeton, a guided-missile cruiser with the Nimitz carrier strike group, Day told Mashable that crew tasked with looking out for ICBM warheads saw these unexplained tracks moving up at 80,000 feet.
“Our ballistic missile defense people were very concerned,” Day told Mashable.
Greater engagement with these kinds of potential UAP risks does not appear to be on the way from some of the United States’ best unclassified collection tools — the worldwide network of astronomical observatories and satellites managed by NASA. Despite much fanfare around NASA’s announcement of a dedicated director of UAP research in 2023, the position has been left quietly vacant since September 2024, according to a recent statement from the space agency’s press office.
Guthrie chalks the crux of this problem up to “an absence of overarching political oversight.”
“There have been so many agencies that have been alleged to have been or currently be involved in the UAP matter,” he explained. “It’s all too easy for any of these agencies to pass the buck.”
Guthrie hopes lawmakers will take up the advice offered by former Pentagon official Luis Elizondo, who told Congress last November that it should “create a single point-of-contact responsible for a whole-of-government approach to the UAP issue.”
“Currently, the White House, CIA, NASA, the Pentagon, Department of Energy, and others play a role, but no one seems to be in charge,” Elizondo added, “leading to unchecked power and corruption.”
Beyond redefining the strict legal definition of what UAP means, or even creating a new acronym that would bring “clarity to this issue,” Guthrie argues that this more centralized, whole-of-government approach could also help close-up these kinds of loopholes.
“Breaking down those stovepipes,” as Guthrie put it, “and along with those stovepipes the ability of a particular agency to just say, ‘Oh, we don’t feel the need to further act on this matter.’”
I have tested some other routers and have several more in the queue. These aren’t as great as the picks above but are worth considering.
Netgear Nighthawk M6 Pro for $800: While I am keen to add a 5G router and mobile hot spot to this guide, and the Nighthawk M6 Pro is an excellent performer, it is simply too expensive to recommend for most folks. (I plan to test cheaper models in the coming weeks.) That said, the M6 Pro is easy to use and might suit business folks with an expense account. Pop a 5G SIM in there and you have a tri-band Wi-Fi 6E router (2.4-, 5-, and 6-GHz) with a sturdy design, a handy 2.8-inch touchscreen, a 2.5-gigabit Ethernet port, and a battery that’s good for up to 13 hours of use. You can connect up to 32 devices via Wi-Fi and expect a range of around 1,000 square feet. You can also use the Ethernet port as a WAN connection or employ the M6 Pro as a secure Wi-Fi repeater. It’s versatile, but configuration can be a chore, speeds are limited if you want to extend battery life, and it’s too expensive.
Asus RT-BE86U for $300: The new Wi-Fi 7 version of the Asus RT-AX86U listed above, this dual-band (2.4- and 5-GHz) router is very similar to the Asus RT-BE88U below. It lacks the 6-GHz band but brings all the other improvements that Wi-Fi 7 offers, from MLO to better security. The RT-BE86U proved reliable in my tests and performed extremely well on the 5-GHz band, matching the slightly more expensive RT-BE88U. It is slightly smaller but still has one 10-Gbps and four 2.5-Gbps Ethernet ports, alongside a USB 2.0 and a USB 3.0 port. It also offers all the usual benefits of an Asus router, including onboard security, parental controls, AiMesh and VPN support, and a host of configuration options. It’s perhaps a little pricey at the moment, but when its price starts to drop, it will be a solid choice for many homes and may well claim a place above.
Netgear Nighthawk RS200 for $200: The RS200 is Netgear’s dual-band (2.4- and 5-GHz) Wi-Fi 7 router and the cheapest in its Wi-Fi 7 lineup. After the tri-band RS300 won a recommendation, I expected this router to put in a decent performance, but I encountered several issues, including random drops and poor range. After turning the router off and on again, many devices, including my Pixel and iPhone, struggled to reconnect. Perhaps I have too many devices in my home for it, though Netgear suggests it can handle up to 80. It has two 2.5-Gbps ports, three Gigabit Ethernet ports, and a USB 3.0 port. Test results were OK, but significantly slower than the RT-BE86U. The subscriptions for Netgear Armor ($100/year) and Premium Smart Parental Controls ($8/month or $70/year) seem especially pricey paired with a cheaper router like this.
TP-Link Archer GE800 for $450: This stunning tri-band Wi-Fi 7 gaming router came very close to a place above. The angled design with customizable RGB lighting screams Vader’s castle but also provides room for antennas to ensure extremely fast performance across the board. You also get a 10-Gbps port for your incoming internet connection, a further two 10-Gbps and four 2.5-Gbps Ethernet LAN ports, and a USB 3.0 port. The Tether app is solid, with some gaming-specific options, but separate subscriptions are required for extra security and parental controls. Despite the blazing fast results, the GE800 couldn’t quite match our top Wi-Fi 7 gaming pick above on the 6-GHz band, and it produced quite a lot of heat and audible fan noise, though it is significantly cheaper.
Asus RT-BE88U for $300: This dual-band Wi-Fi 7 router is an odd prospect because it does not offer the 6-GHz band at all, just 2.4 GHz and 5 GHz. But you can still combine those bands with MLO and enjoy features like 4K QAM, and this router will be fast enough for the average home. It has ports galore (two 10 Gbps, four 2.5 Gbps, four Gigabit, and one USB 3.0). It outperformed several more expensive routers on the 5-GHz band, and that’s likely what most of your devices are using most of the time right now. Asus also offers free security software and parental controls with its routers, so there’s no need for subscriptions. But when I consider that you can snag the Netgear Nighthawk RS300 listed above for less, I find it tough to recommend this router to folks in the US. If the 6-GHz band is unavailable or nerfed in your country, the RT-BE88U is for you.
TP-Link Travel Router AX1500 for $60: If you don’t want to spend much on a travel router, this is a good alternative to our pick above and less than half the price. The catch is that you can expect around half the performance. If you just need to cover a hotel room, it’s fine, but the USB 2.0 port limits the effectiveness of using your phone’s cellular connection, and the 2.4-GHz band is only Wi-Fi 4. It does have two Gigabit ports, some handy modes, and VPN support. I also love that it is powered via USB-C, as it affords some versatility (you could even use a fast portable charger).
Netgear Nighthawk RS700 for $550: Although I had setup issues that required a factory reset, there’s no hiding the top-notch performance of this router. It’s a Wi-Fi 7 tri-band router with two 10-Gbps Ethernet ports, four gigabit ports, and a USB 3.2 port. The tower design is new for the Nighthawk line, and it looks great. This router will blend in far better than our bulky Wi-Fi 7 pick above from Asus, and it was slightly faster on the 6-GHz band, though not the 5-GHz or 2.4-GHz bands. It mainly misses out on a recommendation because it is more expensive: we’re already seeing discounts on the RT-BE96U, and Asus offers free security software and parental controls. If you get the Nighthawk RS700S, the “S” at the end denotes a free year of Netgear Armor, which costs $100 a year thereafter. If you need parental controls, that’s another $70 a year.
TP-Link Archer GX90 AX6600 for $180: Picks above too expensive? The slightly more affordable TP-Link Archer GX90 (8/10, WIRED Recommends) might tempt you. It looks like a Sith spider, but this gaming-focused behemoth is feature-packed. It’s easy to set up and configure, and boasts a game accelerator feature and prioritization, making it easy to reserve bandwidth for gaming. I had no issues with multiple simultaneous gaming sessions. It has a 2.5-Gbps WAN/LAN port, a gigabit WAN/LAN port, three gigabit LAN ports, and two USB ports (3.0 and 2.0). Sadly, full parental controls and enhanced security require subscriptions.
Aircove ExpressVPN Router for $190: This router has a built-in VPN service, allowing you to shield your network traffic from prying eyes. You do have to buy a subscription to ExpressVPN separately (it’s $13 per month, or just over $8 if you pay annually). But setup is simple, and having a VPN at the router level is much easier than having to install it on each device (though several of our picks above can do this too). It’s worth noting that ExpressVPN doesn’t make our Best VPNs guide because it was sold to a parent company with a less-than-sterling reputation; that might matter to you if you’re the kind of person who wants a VPN. I also ran into a few issues with websites and streaming services that aren’t keen on VPNs.
Vodafone Pro II from £37 a month: Folks in the UK looking for a new internet service provider (ISP) should check out Vodafone’s Pro II (8/10, WIRED Review). While ISPs have traditionally provided shoddy routers to their customers, that seems to be changing. The Vodafone Pro II is a tri-band router that supports Wi-Fi 6E, and it delivered lightning-fast speeds in my tests, on par with many of my picks above. The range is limited, especially on the 6-GHz band, but this service comes with a range extender that appears as part of the same network. You can also get a 4G backup that connects to Vodafone’s mobile network to keep you online should your regular internet connection fail. It’s only available with a two-year Vodafone service contract, starting from £37 a month.
Firewalla Gold SE for $449: This quirky portable device is perfect for people who worry about security and privacy. It offers comprehensive tools for monitoring all traffic in and out of your house, robust and detailed parental controls, ad-blocking, and enhanced security with a built-in firewall and VPN option. It serves as a router, but you will want to pair it with another router in access point mode for Wi-Fi in your home. It’s expensive and may prove intimidating for inexperienced folks, but it offers deep insight into your network and an impressive depth of security features without an additional subscription. The Gold SE has two 2.5-Gbps ports and two gigabit ports and is suitable for folks with up to 2-gigabit connections. If your internet is only one gigabit, try the more affordable but slightly less capable Firewalla Purple ($359) (8/10, WIRED Recommends).
TP-Link Archer BE800 for $477: With a fresh design that’s more desktop PC than router, the BE800 (8/10, WIRED Review) tri-band beast came out on top or close to it in my tests on the 2.4-GHz, 5-GHz, and 6-GHz bands, proving impressively swift for file transfers and downloads. It also boasts speedy ports galore and a cool but kind of pointless customizable dot-matrix LED screen, and the Tether app offers a guest network, IoT network, VPN server or client, EasyMesh, QoS for device prioritization, and remote management. This was our Wi-Fi 7 pick, but the Asus RT-BE96U beat it in my tests and does not require a subscription; here, full-featured parental controls and network security require TP-Link’s Security+ ($5/month, $36/year) and Advanced Parental Controls ($3/month, $18/year).
Reyee RG-E6 for $140: This affordable gaming router from Reyee is a decent budget gaming pick that recorded some impressive test results. It is only a dual-band router, but with support for 160-MHz channels, the speeds on the 5-GHz band were very good. It has a 2.5-Gbps WAN/LAN and three gigabit LANs, but no USB ports. Reyee’s app offers prioritization for devices, ports, and gaming traffic, separate guest and IoT networks, and basic parental controls. What it lacks is any security, and the app is poorly translated. But if that doesn’t bother you, this is likely the best gaming router you can get for the money.
TP-Link Archer AXE75 for $150: While this tri-band router makes Wi-Fi 6E affordable, its performance was mixed. The 6-GHz band offers fast speeds at close range but drops off sharply with distance. I found the 5-GHz band somewhat inconsistent, recording zippy performance in most of my tests but relatively slow results on a few occasions. You also need subscriptions if you want full-featured parental controls and network security, and all four Ethernet ports are limited to 1 Gbps.
Synology WRX560 for $220: If you already have the Synology RT6600ax listed above, the WRX560 is a decent additional device for setting up a mesh network. I had some issues with setup that required a factory reset, but once up and running, the WRX560 offers a strong and stable signal on the 2.4-GHz and 5-GHz bands. However, a dual-band Wi-Fi 6 router is a tough sell at this price, so if you just need one, it’s worth spending the extra $80 for the RT6600ax.
TP-Link Archer AX5400 Pro for $200: This dual-band Wi-Fi 6 router is almost identical to the Archer AX73, except for the 2.5-Gbps WAN port. It delivers relatively fast speeds on the 2.4-GHz and 5-GHz bands and boasts a 160-MHz channel width on 5 GHz. The range is good, easily covering my home and garden, but the performance was inconsistent. It was also relatively slow at moving files locally. There’s support for TP-Link OneMesh, VPN, and QoS, but you only get basic parental controls and network security unless you subscribe.
MSI RadiX AXE6600 for $153: This Wi-Fi 6E tri-band gaming router has that familiar red and black Sith spider look, though you can customize the lighting. It proved very fast in most of my tests, coming close to the top of the table at short range on the 6-GHz band and offering average performance on the 5-GHz and 2.4-GHz bands. But the mobile app had limited options, a confusing layout, and was buggy (it crashed on me more than once). The web interface was better, with more options, including OpenVPN, simple parental controls, a guest network, and QoS optimization for gaming. Unfortunately, performance was inconsistent, and I suffered random drops twice in a week of testing.
Linksys Hydra Pro 6E for $159: One of the first Wi-Fi 6E tri-band routers (2.4 GHz, 5 GHz, and 6 GHz) to hit the market, the Hydra Pro 6E has dropped significantly in price since release. It proved easy to set up and has a very straightforward app, though it was often slow to load. It has a 5-Gbps WAN port and four gigabit LAN ports. The performance proved reliable, and it’s possible to get lightning-fast speeds at close range if you have a device that supports Wi-Fi 6E. Coverage and speeds at mid and long range were average. There are free basic parental controls that enable you to block sites and schedule downtime, but only on a per-device basis (no profile creation or age-restriction filters). You can split bands if you want to and prioritize three devices. There’s also a guest network option and easy Wi-Fi sharing. Another positive is that this router works with any other Linksys Intelligent Mesh router (including the Velop mesh range).
Linksys Hydra 6 for $100: Specs-wise, this compact router is similar to our top pick (TP-Link Archer AX55). It’s a dual-band Wi-Fi 6 router with a gigabit WAN and four gigabit LAN ports. The setup was easy, and it uses the same Linksys app as the Pro 6E above, so you get free parental controls, guest network, prioritization, and band splitting. It proved speedy at close range and not bad at mid-range, but if your home is larger than 1,600 square feet, it may struggle. However, as an Intelligent Mesh router, it can mix and match with other Linksys routers or its Velop mesh system. Linksys suggests a limit of 25 connected devices. Although it managed more than 40 without issues in my testing, busy households will likely want something more powerful.
The Lenovo Legion Go S was supposed to change things. It was poised to show Valve isn’t the only one that can build an affordable, portable, potent handheld gaming PC — you just need the right design and the right OS.
I was intrigued when Valve’s own Steam Deck designers told me this Windows handheld would double as the first authorized third-party SteamOS handheld this May. When I heard Lenovo had procured an exclusive AMD chip that would help that SteamOS version hit $499, I got excited for a true Steam Deck competitor.
But I’m afraid that chip ain’t it.
I’ve spent weeks living with a Legion Go S powered by AMD’s Z2 Go, the same chip slated to appear in that $499 handheld. I’ve used it with both Windows and Bazzite, a SteamOS-like Linux distro that eliminates many of Windows’ most annoying quirks. I tested both directly against a Steam Deck OLED and the original Legion Go, expecting to find it between the two in terms of performance and battery life. But that’s not what I found.
Watt for watt, its Z2 Go chip simply can’t compete with the Steam Deck, and it’s far weaker than the Z1 Extreme in last year’s handhelds. That’s inexcusable at the $730 price you’ll currently pay for the Windows version, and I won’t be the first reviewer to say so. But with this less efficient chip and a mere 55 watt-hour battery, I worry the Legion Go S isn’t a good choice at all.
$730

The Good:
- Good ergonomics
- Great variable refresh rate screen
- Powerful cooling
- Fast 100W charging

The Bad:
- Performance is too low
- Windows is bloated and can’t be trusted to sleep
- Somewhat slippery texture
- Nearly useless touchpad
I want to say that the Legion Go S “makes a great first impression,” but Windows 11 still features a terrible out-of-box experience. I spent nearly 45 minutes waiting for mandatory updates to install and dismissing dark-patterned offers for Microsoft products that have no business being on my handheld gaming machine.
Still, the Go S is built far better than the original Legion Go, whose flat-faced controllers felt awkward in my hands. The new portable has some of the best-sculpted grips I’ve felt on a handheld, though their smooth texture can feel a little slippery. I’d have gone with more aggressive stippling to help me hold its 1.61-pound weight.
But its buttons all feel precise and secure, if the triggers are longer than I’d like, and its concave-topped, drift-resistant Hall effect joysticks feel comfy and wonderfully smooth to spin. The only weak control is the touchpad, which is so tiny I flick repeatedly to move the cursor an inch at a time.
Audio is much improved from front-facing speakers, and a larger fan moves more air while staying quieter than before. And it’s one of the fastest-charging handhelds yet — I clocked each of its top-mounted USB 4 ports drawing a full 100 watts of USB-C PD power during actual use. The cooling and charging are so good, Lenovo lets you crank the chip up to 40-watt TDP while it’s plugged in or 33 watts on battery alone.
The backs of the original Legion Go and Legion Go S, showing detachable controls vs. fixed grips. Photo by Sean Hollister / The Verge
But as you’ll see in my benchmark charts, the Z2 Go simply isn’t in the same ballpark as the Steam Deck OLED’s “Sephiroth” chip. In some games, it can’t beat the Steam Deck at all, even if you plug it in and crank it all the way up.
Legion Go S 720p benchmarks
| Game | Legion Go S (Z2 Go) | Steam Deck OLED | Legion Go (Z1 Extreme) | Z1E vs. Z2 Go |
|---|---|---|---|---|
| AC Valhalla, 15-watt TDP | 44 | 52 | 49 | 11.36% |
| 20-watt TDP | 55 | N/A | 63 | 14.55% |
| 25-watt TDP | 60 | N/A | 69 | 15.00% |
| 30-watt TDP | 62 | N/A | 71 | 14.52% |
| Plugged in | 65 | 52 | 73 | 12.31% |
| Cyberpunk 2077, 15-watt TDP | 36 | 52 | 42 | 16.67% |
| 20-watt TDP | 41 | N/A | 54 | 31.71% |
| 25-watt TDP | 45 | N/A | 59 | 31.11% |
| 30-watt TDP | 46 | N/A | 61 | 32.61% |
| Plugged in | 49 | 52 | 62 | 26.53% |
| DX: Mankind Divided, 15-watt TDP | 56 | 70 | 61 | 8.93% |
| 20-watt TDP | 63 | N/A | 84 | 33.33% |
| 25-watt TDP | 66 | N/A | 89 | 34.85% |
| 30-watt TDP | 67 | N/A | 91 | 35.82% |
| Plugged in | 70 | 70 | 92 | 31.43% |
| Horizon Zero Dawn Remastered, 15-watt TDP | 18 | 34 | 25 | 38.89% |
| 20-watt TDP | 21 | N/A | 28 | 33.33% |
| 25-watt TDP | 20 | N/A | 28 | 40.00% |
| 30-watt TDP | 24 | N/A | 28 | 16.67% |
| Plugged in | 24 | 34 | 33 | 37.50% |
| Returnal, 15-watt TDP | 24 | 26 | 32 | 33.33% |
| 20-watt TDP | 26 | N/A | 38 | 46.15% |
| 25-watt TDP | 29 | N/A | 40 | 37.93% |
| 30-watt TDP | 30 | N/A | 41 | 36.67% |
| Plugged in | 32 | 26 | 38 | 18.75% |
| Shadow of the Tomb Raider, 15-watt TDP | 53 | 61 | 50 | -5.66% |
| 20-watt TDP | 53 | N/A | 69 | 30.19% |
| 25-watt TDP | 55 | N/A | 75 | 36.36% |
| 30-watt TDP | 64 | N/A | 73 | 14.06% |
| Plugged in | 65 | 61 | 75 | 15.38% |
Average frame rates. All games tested at 720p with low settings (or, for Cyberpunk 2077, its handheld-specific preset).
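The “Z1E vs. Z2 Go” column is simply the relative frame-rate difference between the two chips. A quick sketch of the math, using the AC Valhalla 15-watt row:

```python
def pct_faster(z1e_fps: float, z2go_fps: float) -> float:
    """How much faster the Z1 Extreme runs than the Z2 Go, in percent."""
    return (z1e_fps - z2go_fps) / z2go_fps * 100

print(f"{pct_faster(49, 44):.2f}%")  # 11.36%, matching the table above
```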
Take Cyberpunk 2077. With the Steam Deck, which runs at 15-watt TDP, I can average 52 frames per second at an upscaled 720p resolution and low settings on battery power alone. But even if I feed the Legion Go S with 40 watts and plug it into a wall, the open-world game runs slower at 49fps. And that’s after a new set of drivers; the shipping ones were much worse.
In other games, cranking up Lenovo’s TDP by five, 10, or 15 watts can give it a comfortable lead over the Deck. But that significantly impacts battery. In Lenovo’s default 25W “Performance” mode, I saw some games run just as smoothly as on the Deck — but with total system power consumption of around 36 watts, draining the handheld’s 55 watt-hour battery in about an hour and a half. The Steam Deck, which drains at around 22 to 24 watts at full bore, lasts two hours at the same smoothness.
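The battery math here is simple division: runtime is capacity over draw. A sketch using the figures above (the Steam Deck OLED’s 50-watt-hour capacity is a published spec, not a number stated in this review):

```python
def runtime_hours(battery_wh: float, draw_w: float) -> float:
    """Estimated runtime from battery capacity and total system draw."""
    return battery_wh / draw_w

# Legion Go S: 55Wh pack, ~36W total draw in the 25W "Performance" mode.
print(f"Legion Go S: {runtime_hours(55, 36):.1f} h")   # ~1.5 h
# Steam Deck OLED: 50Wh pack (assumed spec), ~23W at full bore.
print(f"Steam Deck OLED: {runtime_hours(50, 23):.1f} h")  # ~2.2 h
```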
I have potentially good news about SteamOS: when I installed Bazzite, which serves as a decent preview of what SteamOS might look and feel like, I saw frame rates improve by an average of 16 percent in early tests (minus Returnal, which seems to hate Linux for some reason), and Bazzite is such a breath of fresh air after attempting to use Windows. But it still didn’t reach Steam Deck performance unless I sacrificed more battery to get there. That works with a handheld like the Asus ROG Ally X and its big 80-watt-hour battery, but not so much here.
Legion Go S Windows vs. Bazzite
| Game | Legion Go S (Windows) | Legion Go S (Bazzite) | Steam Deck OLED | Bazzite vs. Windows |
|---|---|---|---|---|
| Cyberpunk 2077, 15-watt TDP | 36 | 42 | 52 | 16.67% |
| 20-watt TDP | 41 | 53 | N/A | 29.27% |
| 25-watt TDP | 45 | 59 | N/A | 31.11% |
| 30-watt TDP | 46 | 60 | N/A | 30.43% |
| Plugged in | 49 | 60 | 52 | 22.45% |
| DX: Mankind Divided, 15-watt TDP | 56 | 62 | 70 | 10.71% |
| 20-watt TDP | 63 | 74 | N/A | 17.46% |
| 25-watt TDP | 66 | 80 | N/A | 21.21% |
| 30-watt TDP | 67 | 84 | N/A | 25.37% |
| Plugged in | 70 | 82 | 70 | 17.14% |
| Returnal, 15-watt TDP | 24 | 17 | 26 | -29.17% |
| 20-watt TDP | 26 | 22 | N/A | -15.38% |
| 25-watt TDP | 29 | 24 | N/A | -17.24% |
| 30-watt TDP | 30 | 25 | N/A | -16.67% |
| Plugged in | 32 | 25 | 26 | -21.88% |
| Shadow of the Tomb Raider, 15-watt TDP | 53 | 51 | 61 | -3.77% |
| 20-watt TDP | 53 | 59 | N/A | 11.32% |
| 25-watt TDP | 55 | 62 | N/A | 12.73% |
| 30-watt TDP | 64 | 63 | N/A | -1.56% |
| Plugged in | 65 | 65 | 61 | 0.00% |
Average frame rates. All games tested at 720p with low settings (or, for Cyberpunk 2077, its handheld-specific preset).
Even if you crank up the Z2 Go, its “turbo” modes are never anywhere near as effective as the Z1 Extreme in last year’s portables. In my tests, the original Legion Go with Z1E runs anywhere from 15 percent to 40 percent faster comparing Windows to Windows — a lot for a handheld, where modern games struggle to reach smooth frame rates at all.
The Legion Go S does have an ace up its sleeve: its crisp, colorful 1920 x 1200 IPS screen looks better at lower resolutions than its predecessor’s 2560 x 1600 panel, and it runs more smoothly at lower frame rates now that it has VRR to adjust its refresh rate anywhere between 48Hz and 120Hz on the fly. I would not buy a Legion Go over a Legion Go S for this reason alone.
And if you primarily play games that don’t demand much performance, the Legion Go S is a bit more efficient at lower wattage: by setting TDP, brightness, and refresh rate low, I was able to get total battery drain down to just 7.5W in Windows and 7W in Bazzite while playing magic math poker game Balatro. That should net me seven to eight hours of battery life, and you should be able to hit the four-hour mark without those tricks just by setting the Legion Go S to its 8-watt TDP “Quiet” mode. When I played the similarly easy-to-run Slay the Spire on the original Legion Go, pulling out all the stops, I couldn’t even reach five hours.
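The same capacity-over-draw arithmetic from earlier backs up those estimates, assuming the measured drain holds steady for the whole session:

```python
# Balatro with TDP, brightness, and refresh rate turned down (55Wh battery)
print(f"{55 / 7.5:.1f} h in Windows")  # ~7.3 h
print(f"{55 / 7.0:.1f} h in Bazzite")  # ~7.9 h
```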
But again, the Steam Deck does efficiency better. Simply limiting frame rate to 25fps and brightness to 40 percent can yield over eight hours of Balatro on the Deck, and I’ve gotten four hours, 42 minutes in Dave the Diver there. With the Legion Go S, my Dave only got 2.5 hours to hunt those sushi ingredients and blow up fake environmentalists!
I am comfortable saying no one should buy the Windows version of the Lenovo Legion Go S, which costs $730 at Best Buy. Even if the performance, battery life, and price weren’t disqualifiers, Windows is a stain on this machine. And like other recent Windows handhelds I’ve tested, it does not reliably go to sleep and wake up again: I woke several mornings to find the system hot with its fans spinning, even though I’d pressed the power button the previous evening. It was uncomfortably warm when I pulled it out of my bag the other day.
Even if you prefer Windows to SteamOS, you can get notably better performance and far better battery life from the $800 Asus ROG Ally X, which is worth every extra penny, particularly since it doubles as the best Bazzite machine you can buy.
But even if you add Bazzite to the Legion Go S, it’s no Steam Deck, and I’m not sure that’ll change by May. If you’re waiting for a $499 Legion Go S with SteamOS, here’s my advice: just buy a $530 Steam Deck OLED instead.
Agree to Continue: Legion Go S
Every smart device now requires you to agree to a series of terms and conditions before you can use it — contracts that no one actually reads. It’s impossible for us to read and analyze every single one of these agreements. But we started counting exactly how many times you have to hit “agree” to use devices when we review them, since these are agreements most people don’t read and definitely can’t negotiate.
To start using the Legion Go S, you’ll need to agree to the following:
Microsoft Software License Terms: Windows Operating System and Terms of Use
Lenovo Limited Warranty and “Software license agreements”
You can also say “yes” or “no” to the following:
Privacy settings (location, Find My Device, sharing diagnostic data, inking and typing, tailored experience, advertising ID)
That’s two mandatory agreements and six optional agreements. Windows also asks you if you want a variety of software and subscription services during the out-of-box experience.
Social media giants Meta and X approved ads targeting users in Germany with violent anti-Muslim and anti-Jewish hate speech in the run-up to the country’s federal elections, according to new research from Eko, a corporate responsibility nonprofit campaign group.
The group’s researchers tested whether the two platforms’ ad review systems would approve or reject submissions for ads containing hateful and violent messaging targeting minorities ahead of an election where immigration has taken center stage in mainstream political discourse — including ads containing anti-Muslim slurs; calls for immigrants to be imprisoned in concentration camps or to be gassed; and AI-generated imagery of mosques and synagogues being burnt.
Most of the test ads were approved within hours of being submitted for review in mid-February. Germany’s federal elections are set to take place on Sunday, February 23.
Hate speech ads scheduled
Eko said X approved all 10 of the hate speech ads its researchers submitted just days before the federal election is due to take place, while Meta approved half (five ads) for running on Facebook (and potentially also Instagram) — though it rejected the other five.
The reason Meta provided for the five rejections indicated the platform believed there could be risks of political or social sensitivity which might influence voting.
However, the five ads that Meta approved included violent hate speech likening Muslim refugees to a “virus,” “vermin,” or “rodents,” branding Muslim immigrants as “rapists,” and calling for them to be sterilized, burnt, or gassed. Meta also approved an ad calling for synagogues to be torched to “stop the globalist Jewish rat agenda.”
As a side note, Eko says none of the AI-generated imagery it used to illustrate the hate speech ads was labeled as artificially generated — yet Meta still approved half of the 10 ads, despite having a policy that requires disclosure of the use of AI imagery in ads about social issues, elections, or politics.
X, meanwhile, approved all five of these hateful ads — and a further five that contained similarly violent hate speech targeting Muslims and Jews.
These additional approved ads included messaging attacking “rodent” immigrants that the ad copy claimed are “flooding” the country “to steal our democracy,” and an antisemitic slur which suggested that Jews are lying about climate change in order to destroy European industry and accrue economic power.
The latter ad was combined with AI-generated imagery depicting a group of shadowy men sitting around a table surrounded by stacks of gold bars, with a Star of David on the wall above them — with the visuals also leaning heavily into antisemitic tropes.
Another ad X approved contained a direct attack on the SPD, the center-left party that currently leads Germany’s coalition government, with a bogus claim that the party wants to take in 60 million Muslim refugees from the Middle East, before going on to try to whip up a violent response. X also duly scheduled an ad suggesting “leftists” want “open borders” and calling for the extermination of Muslim “rapists.”
Elon Musk, the owner of X, has used the social media platform, where he has close to 220 million followers, to personally intervene in the German election. In a tweet in December, he called for German voters to back the far-right AfD party to “save Germany.” He has also hosted a livestream with the AfD’s leader, Alice Weidel, on X.
Eko’s researchers disabled all test ads before any that had been approved were scheduled to run to ensure no users of the platform were exposed to the violent hate speech.
It says the tests highlight glaring flaws with the ad platforms’ approach to content moderation. Indeed, in the case of X, it’s not clear whether the platform is doing any moderation of ads, given all 10 violent hate speech ads were quickly approved for display.
The findings also suggest that the ad platforms could be earning revenue as a result of distributing violent hate speech.
EU’s Digital Services Act in the frame
Eko’s tests suggest that neither platform is properly enforcing the bans on hate speech in ad content that both claim to apply in their own policies. Furthermore, in the case of Meta, Eko reached the same conclusion after conducting a similar test in 2023, ahead of new EU online governance rules coming in — suggesting the regime has had no effect on how the company operates.
“Our findings suggest that Meta’s AI-driven ad moderation systems remain fundamentally broken, despite the Digital Services Act (DSA) now being in full effect,” an Eko spokesperson told TechCrunch.
“Rather than strengthening its ad review process or hate speech policies, Meta appears to be backtracking across the board,” they added, pointing to the company’s recent announcement about rolling back moderation and fact-checking policies as a sign of “active regression” that they suggested puts it on a direct collision course with DSA rules on systemic risks.
Eko has submitted its latest findings to the European Commission, which oversees enforcement of key aspects of the DSA on the pair of social media giants. It also said it shared the results with both companies, but neither responded.
The EU has open DSA investigations into Meta and X, which include concerns about election security and illegal content, but the Commission has yet to conclude these proceedings. Back in April, though, it said it suspects Meta of inadequate moderation of political ads.
A preliminary decision on a portion of its DSA investigation on X, which was announced in July, included suspicions that the platform is failing to live up to the regulation’s ad transparency rules. However, the full investigation, which kicked off in December 2023, also concerns illegal content risks, and the EU has yet to arrive at any findings on the bulk of the probe well over a year later.
Confirmed breaches of the DSA can attract penalties of up to 6% of global annual turnover, while systemic non-compliance could even lead to regional access to violating platforms being blocked temporarily.
For now, though, the EU is still making up its mind on the Meta and X probes, so any DSA sanctions remain up in the air pending final decisions.
Meanwhile, it’s now just a matter of hours before German voters go to the polls — and a growing body of civil society research suggests that the EU’s flagship online governance regulation has failed to shield the major EU economy’s democratic process from a range of tech-fueled threats.
Earlier this week, Global Witness released the results of tests of X and TikTok’s algorithmic “For You” feeds in Germany, which suggest the platforms are biased in favor of promoting AfD content versus content from other political parties. Civil society researchers have also accused X of blocking data access to prevent them from studying election security risks in the run-up to the German poll — access the DSA is supposed to enable.
“The European Commission has taken important steps by opening DSA investigations into both Meta and X, now we need to see the Commission take strong action to address the concerns raised as part of these investigations,” Eko’s spokesperson also told us.
“Our findings, alongside mounting evidence from other civil society groups, show that Big Tech will not clean up its platforms voluntarily. Meta and X continue to allow illegal hate speech, incitement to violence, and election disinformation to spread at scale, despite their legal obligations under the DSA,” the spokesperson added. (We have withheld the spokesperson’s name to prevent harassment.)
“Regulators must take strong action — both in enforcing the DSA but also, for example, implementing pre-election mitigation measures. This could include turning off profiling-based recommender systems immediately before elections, and implementing other appropriate ‘break-glass’ measures to prevent algorithmic amplification of borderline content, such as hateful content, in the run-up to elections.”
The campaign group also warns that the EU is now facing pressure from the Trump administration to soften its approach to regulating Big Tech. “In the current political climate, there’s a real danger that the Commission doesn’t fully enforce these new laws as a concession to the U.S.,” they suggest.