Chilkey ND75 LP Review: Impressive performance for $100
There aren’t a ton of low-profile mechanical keyboards on the market — after all, the best mechanical keyboards are about trying to achieve an amazing typing experience, and low-profile keyboards tend to be about compromising said experience for something slim, lightweight, and travel-friendly. But not everyone wants to travel with a paper-thin Apple Magic Keyboard, so it’s always nice to see a well-built low-profile board that delivers a fantastic typing experience — and it’s even nicer to see one with a sub-$100 price tag.
Chilkey’s ND75 LP is the brand’s popular ND75 keyboard in low-profile form, and it comes with all the bells and whistles: a full-aluminum body, double-shot PBT keycaps, a hot-swappable PCB, and tri-mode wireless connectivity. It even has a little LCD screen that shows you the time, battery life, and various settings like system and Caps Lock (and can, of course, be configured to display a picture or gif of your choosing — because that’s important). The ND75 LP is a little heavy to be a travel-friendly low-profile keyboard, but it’s nice to have the option of traveling with something that prioritizes typing feel and sound over portability.
The ND75 LP is available now, for $99 (black and white versions) or $105 (color versions).
Design and Construction of the ND75 LP
The ND75 LP is a wireless low-profile mechanical keyboard with a 75-percent layout, which means it has arrow keys and a function row, but no number pad and only a few of the navigation keys (Del, Ins, PgUp, and PgDn).
It also features a small 1-inch screen on the right side, which shows the time, battery life, and connection status by default and can be used to configure some of the keyboard’s lighting effects. (You can also, of course, upload an image or gif to play on the screen, but we’ll get to that later.)
The ND75 LP is slim and low-profile, but it’s also hefty — it weighs a solid 2.88 pounds (1,305g), which is about 4.5 ounces heavier than the Asus ROG Azoth. It is slim, though, measuring just 1.05 inches (26.6mm) thick at the back and 0.37 inches (9.3mm) thick at the front. The keyboard is 12.68 inches (322mm) wide by 5.45 inches (138.5mm) deep, which makes it just slightly larger than the Nuphy Air75 V2 (12.5 x 5.2 x 0.59 inches / 316.4 x 132.5 x 13.5mm) — though it is, of course, 1.5 pounds heavier than the 1.31lb / 598g Air75 V2. It does have a roomier layout than the Air75 V2 — the function keys are separated into clusters of four, and the arrow keys are offset slightly from the alphanumeric keys.
The keyboard is so heavy because it features a fully aluminum case with a decorative keyboard weight on the back. The keyboard comes in six colors, starting at $99 for pure white and jet black (both of which come with black and white keycaps and black keyboard weights on the back). The more colorful options cost an extra $6: lavender, baby blue, silver, and milk tea (a light orange-beige color). These options all come with matching keycaps and silver keyboard weights on the back.
Our review unit came in baby blue, which has white alphanumeric keycaps with light blue printed legends, and light blue accent keys with white printed legends. It’s an attractive keyboard, with a finely-machined two-part case, thick, double-shot PBT keycaps, and concentric circles etched on the backplate for style.
The back of the keyboard features four small silicone anti-slip feet in addition to the backplate. The feet fit into the groove on the wrist rest, in the event you want to use the wrist rest as a keyboard stand to adjust the board’s typing angle from its standard 6.5 degrees to 10 degrees. This probably isn’t necessary, but it’s a nice touch, especially for people who shy away from full-aluminum mechanical keyboards because they’re concerned about not being able to change the typing angle. It’s not the most advanced solution, but it works well enough.
In the box, the ND75 LP comes with a handful of accessories, including a 5-foot rubber USB-C to USB-A cable, a dual keycap/switch puller, and a slim rubber wrist rest. It also comes with a screwdriver, three extra switches, a 2.4-GHz dongle, and the parts you’ll need if you want to switch the keyboard’s mounting system from gasket-mount to top-mount. The company also includes a replacement flexible flat cable, because you can pretty easily rip the one that connects the screen to the PCB if you yank the top cover off too aggressively.
Speaking of which, the ND75 LP’s case is tool-less. The top case is held on magnetically, so all you have to do to open up the keyboard is apply some pressure and pop it off (but not too far off, or that cable will rip). The magnets are fairly strong — I tossed this in my bag and went traveling with it, and not once was I worried the top case might separate or get jostled loose — but it pops off easily enough. (For what it’s worth, the flat cable that attaches the screen is pretty long — you’d be unlikely to accidentally rip it unless you just forgot that there was a screen altogether.) Once inside, you can switch the keyboard’s mounting system using the included accessories, or, well, do whatever else you want before you pop the top back on.
Specs
Size: 75%
Number of keys: 80
Switches: Chilkey Aster (linear, low-profile)
Backlighting: Yes
Onboard Storage: Yes
Dedicated Media Keys: No
Game Mode: No
Additional Ports: 0
Connectivity: 2.4-GHz wireless, Bluetooth 5.3, wired (USB-C)
Cable: 5-ft, USB-C to USB-A
Keycaps: Double-shot PBT
Construction: Aluminum case, anodized aluminum top plate
Software: Chilkey Console
Dimensions (LxWxH): 12.68 x 5.45 x 1.05 inches / 322 x 138.5 x 26.6 mm
Weight: 2.88lbs / 1,305g
MSRP / Price at Time of Review: $99 – $105
Release Date: Oct. 2024
Typing and Gaming Experience on the ND75 LP
The ND75 LP sounds and feels great out of the box — and not just for a budget-friendly, low-profile keyboard. The keyboard features Chilkey’s self-developed Aster switches, which are full POM linear switches with a total travel of 3mm — pretty close to the standard switch travel of 3.5 – 4mm.
The switches have an actuation force of 45g with a bottom-out force of 52g, and are rated for up to 50 million keystrokes. They’re super smooth, thanks to their pre-lubed full POM construction, and, combined with the premium double-shot PBT keycaps, they sound great. The board also features five layers of sound-dampening material inside, and the plate-mounted stabilizers are well-lubed, so you won’t get any hollowness or rattle.
The keycaps have an LSA profile — low-profile, with large, rounded tops that let you slip smoothly between keys while still giving you plenty of room to prevent typos. The rows aren’t sculpted, but I’m not sure that would do much on a low-profile keyboard like this, anyway. The keycaps have a smooth, lightly textured finish, and sound and feel premium. The keyboard does have backlighting, but the keycaps aren’t shine-through — so, while there’s enough light to give you a pretty lightshow, it’s not all that useful for actually seeing the keys, if that’s important to you.
This is a low-profile keyboard, so it’s fairly short in the front (0.37 inches / 9.3mm), but it also comes with a wrist rest… just in case. The wrist rest is small, and can either be used as wrist rest or as a way to angle the back of the keyboard toward you; I’m not sure it’s terribly useful either way, but it’s a nice inclusion — especially on a budget board — and it has a nice, soft rubbery feel.
The ND75 LP comes with a gasket-mount structure that’s flexible enough to be comfortable while still delivering crisp, thocky sound. But if you’re looking for even crisper, thockier sound, you might want to switch the board to a top-mount, which you can do by popping the top case off and using the screws Chilkey includes with the board to switch over. It’s a surprisingly easy board to tinker with — not that most boards are difficult, but the magnetic top case is a nice touch.
The ND75 LP features N-key rollover and a 1,000 Hz polling rate over both its wired and 2.4-GHz wireless connections, so while it’s not necessarily designed for gaming, it will absolutely work in a pinch. Its low-profile linear switches are also excellent for gaming, as they’re quick and snappy, and it’s easy to move around the board quickly. The board’s 75-percent layout also makes it a great gaming option: it’s not so small that you’ll be missing keys, but it’s not so large that you won’t be able to make big mouse swipes on a small desk or table.
Features and Software of the ND75 LP
The ND75 LP is configurable — to some extent — on the board itself using Fn shortcuts and the built-in screen. You can navigate through the screen with Fn + the plus/minus/enter keys, and you can do things like change the keyboard’s lighting effects, color, and brightness. You can also switch connections and system types from this screen. Of course, you can also do these things using Fn shortcuts — you don’t technically need the screen, it’s just a nice thing to be able to look at for confirmation.
As for keymapping, macro recordings, and putting the image or gif of your choice on the keyboard’s screen, you’ll need to use Chilkey’s online configuration software for that. The online software is fairly straightforward:
Connect your keyboard and you can remap keys (two layers), adjust the backlighting, and upload your own picture to the keyboard’s screen. There is a macro recorder, but otherwise the software is fairly basic — remapping is limited to keyboard and mouse functions (and macros), and lighting is limited to preset effects. But it does save directly to the board, and you can make up to three configurations.
While you can use the screen to switch between connections, you’ll probably just end up using the built-in shortcuts — Fn + Q, W, and E for Bluetooth, Fn + R for 2.4-GHz wireless, and Fn + T for wired.
The power switch for wireless connectivity is located under the right shift key, which is a little annoying, especially if you were thinking of traveling with this keyboard. But this is perhaps not the most travel-friendly keyboard — it’s fairly heavy and there’s no place to store the tiny 2.4GHz wireless dongle, so you’ll need to depend on Bluetooth or a wire for connection when you’re on the road.
While this shouldn’t be too much of an issue, I definitely had some problems with the keyboard’s Bluetooth connection — specifically, whenever I connected to another Bluetooth device (such as a mouse or a headset), the board’s Bluetooth would disconnect and then be unable to reconnect for some reason.
This was frustrating mostly because I’d forgotten to bring all but one USB cord with me, so when I was trying to charge my other devices I would occasionally find the keyboard was just no longer connected to my laptop, and then I had to decide whether I was going to type on my laptop’s built-in keyboard or hope my mouse could work on a 5-minute charge.
On top of that, the board’s battery life is not great (especially given its weight). It has two 1,800 mAh batteries, but both the backlighting and the screen will drain those faster. I found the keyboard lasted about one full day of typing (8 – 10 hours), which is… well, not great. It almost makes me wonder if, given the weight and screen, this would have made more sense as a wired-only keyboard (perhaps it would have been cheaper, too).
The Bottom Line
The ND75 LP is a very impressive board for the price — it looks, sounds, and feels like a premium mechanical keyboard; it features a customization-friendly magnetic case, a hot-swappable PCB, and tri-mode wireless connectivity (sort of); and it’s under $100 (unless you want it in a pretty color, in which case it’s $105). Other low-profile keyboards looking to deliver a premium typing experience, like the Lofree Edge and the Nuphy Air75 V2, are pricier (though they are lighter, thinner, and more travel-friendly) — and don’t sound as good.
That said, this isn’t the most travel-friendly keyboard, for more reasons than just its weight. The ND75 LP’s battery life is mediocre at best; its Bluetooth connectivity leaves something to be desired; and there’s nowhere to store the 2.4-GHz dongle. So if you’re traveling with it, it’ll mostly be a wired keyboard. It’s far from the thinnest or lightest low-profile keyboard, and while Chilkey’s Aster switches are excellent, they are the only option — it would be nice to see a tactile switch option, for that ultimate typing experience.
AI has grown beyond human knowledge, says Google’s DeepMind unit
The world of artificial intelligence (AI) has recently been preoccupied with advancing generative AI beyond simple tests that AI models easily pass. The famed Turing Test has been “beaten” in some sense, and controversy rages over whether the newest models are being built to game the benchmark tests that measure performance.
The problem, say scholars at Google’s DeepMind unit, is not the tests themselves but the limited way AI models are developed. The data used to train AI is too restricted and static, and will never propel AI to new and better abilities.
In a paper posted by DeepMind last week, part of a forthcoming book by MIT Press, researchers propose that AI must be allowed to have “experiences” of a sort, interacting with the world to formulate goals based on signals from the environment.
“Incredible new capabilities will arise once the full potential of experiential learning is harnessed,” write DeepMind scholars David Silver and Richard Sutton in the paper, Welcome to the Era of Experience.
The two scholars are legends in the field. Silver most famously led the research that resulted in AlphaZero, DeepMind’s AI model that beat humans in games of Chess and Go. Sutton is one of two Turing Award-winning developers of an AI approach called reinforcement learning that Silver and his team used to create AlphaZero.
The approach the two scholars advocate builds upon reinforcement learning and the lessons of AlphaZero. It’s called “streams” and is meant to remedy the shortcomings of today’s large language models (LLMs), which are developed solely to answer individual human questions.
Silver and Sutton suggest that shortly after AlphaZero and its predecessor, AlphaGo, burst on the scene, generative AI tools, such as ChatGPT, took the stage and “discarded” reinforcement learning. That move had benefits and drawbacks.
Gen AI was an important advance because AlphaZero’s use of reinforcement learning was restricted to limited applications. The technology couldn’t go beyond “full information” games, such as Chess, where all the rules are known.
Gen AI models, on the other hand, can handle spontaneous input from humans never before encountered, without explicit rules about how things are supposed to turn out.
However, discarding reinforcement learning meant, “something was lost in this transition: an agent’s ability to self-discover its own knowledge,” they write.
Instead, they observe that LLMs “[rely] on human prejudgment”, or what the human wants at the prompt stage. That approach is too limited. They suggest that human judgment imposes “an impenetrable ceiling on the agent’s performance: the agent cannot discover better strategies underappreciated by the human rater.”
Not only is human judgment an impediment, but the short, clipped nature of prompt interactions never allows the AI model to advance beyond question and answer.
“In the era of human data, language-based AI has largely focused on short interaction episodes: e.g., a user asks a question and (perhaps after a few thinking steps or tool-use actions) the agent responds,” the researchers write.
“The agent aims exclusively for outcomes within the current episode, such as directly answering a user’s question.”
There’s no memory, there’s no continuity between snippets of interaction in prompting. “Typically, little or no information carries over from one episode to the next, precluding any adaptation over time,” write Silver and Sutton.
However, in their proposed Age of Experience, “Agents will inhabit streams of experience, rather than short snippets of interaction.”
Silver and Sutton draw an analogy between streams and humans learning over a lifetime of accumulated experience, and how they act based on long-range goals, not just the immediate task.
“Powerful agents should have their own stream of experience that progresses, like humans, over a long time-scale,” they write.
Silver and Sutton argue that “today’s technology” is enough to start building streams. In fact, the initial steps along the way can be seen in developments such as web-browsing AI agents, including OpenAI’s Deep Research.
“Recently, a new wave of prototype agents have started to interact with computers in an even more general manner, by using the same interface that humans use to operate a computer,” they write.
The browser agent marks “a transition from exclusively human-privileged communication, to much more autonomous interactions where the agent is able to act independently in the world.”
As AI agents move beyond just web browsing, they need a way to interact and learn from the world, Silver and Sutton suggest.
They propose that the AI agents in streams will learn via the same reinforcement learning principle as AlphaZero. The machine is given a model of the world in which it interacts, akin to a chessboard, and a set of rules.
As the AI agent explores and takes actions, it receives feedback as “rewards”. These rewards train the AI model on what is more or less valuable among possible actions in a given circumstance.
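The act-reward-update loop described here is the textbook reinforcement learning setup. As a rough sketch (a toy illustration, not DeepMind’s actual implementation), a tabular Q-learning agent on a five-state grid learns from reward feedback like this:

```python
import random
from collections import defaultdict

random.seed(0)

# Toy world: states 0..4 in a row; only reaching state 4 pays a reward.
N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]  # step left or right

def step(state, action):
    next_state = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if next_state == GOAL else 0.0
    return next_state, reward, next_state == GOAL

q = defaultdict(float)              # value estimate for each (state, action)
alpha, gamma, epsilon = 0.5, 0.9, 0.1

for episode in range(500):
    state = random.randrange(GOAL)  # start anywhere short of the goal
    done = False
    while not done:
        # Epsilon-greedy: mostly exploit current estimates, sometimes explore.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        next_state, reward, done = step(state, action)
        # The reward signal nudges the value of the action just taken.
        best_next = max(q[(next_state, a)] for a in ACTIONS)
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = next_state

# The learned greedy policy heads right, toward the rewarding state.
policy = [max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(GOAL)]
print(policy)
```

The same principle — act, observe a reward, update value estimates — is what AlphaZero scaled up with deep networks and self-play.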
The world is full of various “signals” providing those rewards, if the agent is allowed to look for them, Silver and Sutton suggest.
“Where do rewards come from, if not from human data? Once agents become connected to the world through rich action and observation spaces, there will be no shortage of grounded signals to provide a basis for reward. In fact, the world abounds with quantities such as cost, error rates, hunger, productivity, health metrics, climate metrics, profit, sales, exam results, success, visits, yields, stocks, likes, income, pleasure/pain, economic indicators, accuracy, power, distance, speed, efficiency, or energy consumption. In addition, there are innumerable additional signals arising from the occurrence of specific events, or from features derived from raw sequences of observations and actions.”
To start the AI agent from a foundation, AI developers might use a “world model” simulation. The world model lets an AI model make predictions, test those predictions in the real world, and then use the reward signals to make the model more realistic.
“As the agent continues to interact with the world throughout its stream of experience, its dynamics model is continually updated to correct any errors in its predictions,” they write.
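That predict-observe-correct cycle can be sketched in miniature. In this toy example (invented for illustration, not taken from the paper), a one-parameter dynamics model learns that the “world” doubles its state, purely by shrinking its own prediction error:

```python
import random

random.seed(0)

def world_dynamics(x):
    """The real world's rule, unknown to the model: the next state is double the current one."""
    return 2.0 * x

coeff = 0.0   # the model's current guess at the dynamics coefficient
lr = 0.1      # learning rate for error correction

for t in range(500):
    x = random.uniform(0.5, 1.5)    # an observed state
    predicted = coeff * x           # the model predicts the next state
    actual = world_dynamics(x)      # the agent then observes what really happens
    error = predicted - actual
    coeff -= lr * error * x         # gradient step: correct the model

print(round(coeff, 3))  # converges toward 2.0
```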
Silver and Sutton still expect humans to have a role in defining goals, for which the signals and rewards serve to steer the agent. For example, a user might specify a broad goal such as ‘improve my fitness’, and the reward function might return a function of the user’s heart rate, sleep duration, and steps taken. Or the user might specify a goal of ‘help me learn Spanish’, and the reward function could return the user’s Spanish exam results.
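A goal-derived reward function like the fitness example could, in spirit, look like this (the targets and equal weighting below are my invented assumptions, not values specified by Silver and Sutton):

```python
def fitness_reward(resting_hr, sleep_hours, steps):
    """Collapse several health signals into one scalar reward.

    The targets (60 bpm, 8 hours, 10,000 steps) and equal weighting
    are illustrative assumptions only.
    """
    hr_term = -abs(resting_hr - 60) / 60        # penalize distance from 60 bpm
    sleep_term = min(sleep_hours, 8.0) / 8.0    # reward sleep, capped at 8 hours
    steps_term = min(steps, 10_000) / 10_000    # reward steps, saturating at 10k
    return hr_term + sleep_term + steps_term

# A healthy day should score higher than a sedentary, sleep-deprived one.
good_day = fitness_reward(62, 7.5, 9_000)
bad_day = fitness_reward(80, 5.0, 2_000)
print(good_day > bad_day)  # → True
```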
The human feedback becomes “the top-level goal” that all else serves.
The researchers write that AI agents with those long-range capabilities would be better as AI assistants. They could track a person’s sleep and diet over months or years, providing health advice not limited to recent trends. Such agents could also be educational assistants tracking students over a long timeframe.
“A science agent could pursue ambitious goals, such as discovering a new material or reducing carbon dioxide,” they offer. “Such an agent could analyse real-world observations over an extended period, developing and running simulations, and suggesting real-world experiments or interventions.”
The researchers suggest that the arrival of “thinking” or “reasoning” AI models, such as Gemini, DeepSeek’s R1, and OpenAI’s o1, may be surpassed by experience agents. The problem with reasoning agents is that they “imitate” human language when they produce verbose output about steps to an answer, and human thought can be limited by its embedded assumptions.
“For example, if an agent had been trained to reason using human thoughts and expert answers from 5,000 years ago, it may have reasoned about a physical problem in terms of animism,” they offer. “1,000 years ago, it may have reasoned in theistic terms; 300 years ago, it may have reasoned in terms of Newtonian mechanics; and 50 years ago, in terms of quantum mechanics.”
The researchers write that such agents “will unlock unprecedented capabilities,” leading to “a future profoundly different from anything we have seen before.”
However, they suggest there are also many, many risks. These risks are not just focused on AI agents making human labor obsolete, although they note that job loss is a risk. Agents that “can autonomously interact with the world over extended periods of time to achieve long-term goals,” they write, raise the prospect of humans having fewer opportunities to “intervene and mediate the agent’s actions.”
On the positive side, they suggest, an agent that can adapt, as opposed to today’s fixed AI models, “could recognise when its behaviour is triggering human concern, dissatisfaction, or distress, and adaptively modify its behaviour to avoid these negative consequences.”
Leaving aside the details, Silver and Sutton are confident the streams approach will generate so much more information about the world that it will dwarf all the Wikipedia and Reddit data used to train today’s AI. Stream-based agents may even move past human intelligence, hinting at the arrival of artificial general intelligence, or super-intelligence.
“Experiential data will eclipse the scale and quality of human-generated data,” the researchers write. “This paradigm shift, accompanied by algorithmic advancements in RL [reinforcement learning], will unlock in many domains new capabilities that surpass those possessed by any human.”
Silver also explored the subject in a DeepMind podcast this month.
Panasonic S1R II review: An excellent hybrid camera that’s cheaper than rivals
With the A1, Sony was the first to introduce a high-resolution hybrid camera that was equally adept at stills and video — but boy was it expensive. Canon and Nikon followed that template with the R5 II and Z8 models, which offered similar capabilities for less money, but those were still well north of $4,000.
Enter the S1R II. It’s Panasonic’s first camera that can not only shoot up to 8K video at the company’s usual high standards, but also capture 44-megapixel (MP) photos in rapid bursts. And unlike its rivals, the new model is available at a more reasonable $3,300 — half the price of Sony’s A1 II. At the same time, it’s a massive upgrade over the original S1R.
The main catch is the lack of a high-speed stacked sensor found in the other models, which can cause some skewing in both images and video. As I discovered, though, that tradeoff is well worth it for the lower price and picture quality that matches its competition. All of that makes the S1R II Panasonic’s best camera yet and a very tempting option in the high-resolution mirrorless category.
Design and handling
The S1R II is similar to other recent Panasonic models like the GH7 in terms of the design and control layout. It’s much lighter than the original S1R at 1.75 pounds compared to 2.24 pounds, so it’s less tiresome to carry around all day. As for handling, the massive grip has a ridge where your fingertips sit, making it nearly impossible to drop. The rubberized exterior is easy on the hands, though not quite as nice as the R5 II’s softer material.
I’ve always liked Panasonic’s controls and in that regard the S1R II may be the company’s best model yet. Along with a joystick and dials on the top front, top back and rear, it has lockable mode and burst shooting dials on top. You also get a dedicated button for photos, video and slow and quick (S&Q) modes, each with separate settings. There’s a dedicated autofocus switch, video record buttons both on top and front, a tally light and multiple programmable buttons.
The menu system is equally good, with logical color-coded menus and submenus. You can also rapidly find your most-used functions in the quick menu. All of that allowed me to shoot photos and video without fumbling for settings. You can also fully program buttons, dials and the quick menu to your own preferences.
The rear display is great for content creators and photographers alike. It tilts up and down to allow for easy overhead or shoot-from-the-hip photography and also swivels out to the side so vloggers can conveniently film themselves. It’s very sharp and bright enough to use on sunny days. The electronic viewfinder is also excellent with 5.76 million dots of resolution and 100 percent magnification, matching Canon’s R5 II and beating the Nikon Z8.
Battery life isn’t a strong point, though, with 350 shots on a charge or just 280 when using the electronic viewfinder — far below the 640 shots allowed by the R5 II. It also allows just over an hour of start-and-stop video shooting. However, Panasonic’s optional DMW-BG2 battery grip doubles endurance and also allows for battery hot-swapping.
The S1R II supports both SDXC UHS II and much faster CFexpress Type B cards, while also supporting SSD capture via the USB-C port like the S5 IIX and GH7. The latter two storage methods enable shooting in high-bandwidth RAW and ProRes to maximize quality.
Panasonic also included a full-sized HDMI port along with microphone and headphone jacks. For the best possible sound quality, the optional XLR2 accessory lets you capture four channels at up to 32-bit float quality to reduce the possibility of clipped audio. And finally, the S1R II is Panasonic’s first mirrorless model with a protective carbon fiber curtain that comes down to protect the sensor, just like recent Canon and Sony models.
Performance
Although the original S1R could only manage anemic 6 fps burst speeds, its successor can hit 40 RAW images per second in silent electronic mode, beating all its rivals — though shooting at that speed limits quality to 12-bit RAW. To get 14-bit quality, you need to use the mechanical shutter for burst shooting, which tops out at 9 fps.
However, the Panasonic S1R II doesn’t have a fast stacked sensor like rivals. The result is rolling shutter that can be a problem in some circumstances, like shooting race cars, propellers or golf swings. However, it does outperform many other non-stacked high-resolution cameras like Sony’s A7R V and Panasonic’s own S5 IIX in that area.
Pre-burst capture is now available and starts when you half-press the shutter. That lets you save up to 1.5 seconds of photos you might have otherwise missed once you fully press the shutter button.
With an overhauled phase-detect autofocus system and a new, faster processor, the S1R II features Panasonic’s fastest and smartest AF system yet. It can now lock onto a subject’s face and eyes quicker and follow their movements more smoothly, while also detecting and automatically switching between humans, animals, cars, motorcycles, bikes, trains and airplanes. I found it to be fast and generally reliable, but it’s still not quite up to Sony’s and Canon’s standards for speed and accuracy.
Panasonic boosted in-body stabilization to 8 stops. That’s nearly on par with rivals, though Canon leads the way with 8.5 stops on the R5 II. Still, it lets you shoot sharp handheld images at shutter speeds as low as a quarter second, in case you want to blur waterfalls or moving cars.
Image quality
Photo quality is outstanding with detail as good as rivals, though understandably short of Sony’s 61-megapixel A7R V. Colors are as accurate as I’ve seen on any recent camera, matching or even beating Canon’s excellent R5 II. My pro photographer friends took a number of shots with the S1R II and found it slightly superior to their Sony A1, noting that they rarely needed to white balance in post.
Thanks to the dual-ISO backside-illuminated sensor, low-light capability is excellent for a high-resolution camera, with noise well controlled up to ISO 12,800. Beyond that, grain becomes more problematic and shadows can take on a green cast. JPEG noise reduction does a good job retaining detail while suppressing noise, but gets overly aggressive above ISO 6,400.
If 44MP isn’t enough, the S1R II offers a high-resolution mode that captures eight images with a slightly offset sensor position and composites them into a single 177MP file (either RAW or JPEG). It can supposedly be used without a tripod, though I found I had to remain very still to get decent images when doing so.
Video
The S1R II is Panasonic’s best mirrorless camera yet for video, albeit with some caveats I’ll discuss soon. You can capture up to 8K 30p 10-bit video at a reasonably high 300 Mbps, close to what Sony’s far more expensive A1 can do. Better still, it supports oversampled 5.8K ProRes RAW video internally with no crop for maximum dynamic range, or 4K video at up to 120 fps. Finally, the S1R II is capable of “open gate” 3:2 capture of the full sensor at up to 6.4K (and 8K down the road via a firmware update), making it easy to shoot all types of formats at once, including vertical video for social media.
Some of these resolutions, particularly the 5.9K 60 fps and 4K 120 fps modes, come with a slight crop of about 1.1x and 1.04x, respectively. 4K 120 fps also uses pixel binning, which introduces a loss of resolution and other artifacts like rainbow-colored moire.
That takes us to the main downside: rolling shutter. The S1R II is actually a bit better than the S5 II in that regard, with a total readout speed of about 1/40th of a second, or about 25 milliseconds at any of the full sensor readout resolutions (8K or 5.8K). That can result in wobble or skew if you whip the camera around or film fast-moving objects. However, it’s acceptable for regular handheld shooting.
One complication is Panasonic’s dynamic range expansion (DRE) that boosts video dynamic range by a stop, mostly in an image’s highlights. Enabling that feature makes rolling shutter worse.
Should you need to reduce rolling shutter, you can simply disable DRE without a big hit in quality. And shooting 4K at 60p minimizes rolling shutter so that it’s nearly on par with stacked sensor cameras, while still offering high-quality footage with just a slight crop.
As for video quality, it’s razor sharp and color rendition is accurate and pleasing. Dynamic range is on the high end of cameras I’ve tested at close to 14 stops when shooting with Panasonic’s V-log, allowing excellent shadow and highlight recovery, especially in DRE mode. It’s still very good without DRE though, particularly if you’re not shooting in bright and sunny conditions.
Frame grab from Panasonic S1R II 8K video
Steve Dent for Engadget

Video AF is also strong, keeping even quick-moving subjects in focus. Face, eye, animal and vehicle detection work well, though again, the system isn’t quite as reliable as what I saw on Sony and Canon’s latest models.
The S1R II offers more stabilization options than its rivals, though. Optical stabilization provides good results for handheld video, while electronic stabilization (EIS) smooths things further. Cranking that up to the most aggressive high EIS setting provides gimbal-like smoothness but introduces a significant 1.5x crop.
Along with those, Panasonic introduced something called “cropless” EIS. That setting takes advantage of unused areas of the sensor to correct corner distortion typical with wide angle lenses while also fixing skew. I found it worked very well to reduce rolling shutter even for quick pans and walking, which may help alleviate such concerns for some creators.
So yes, rolling shutter wobble is worse on this camera than rivals like the R5 II. However, there are ways to work around it. If minimal skewing is a critical feature then don’t buy the S1R II, but it shouldn’t be an issue for most users, particularly at this price.
Wrap-up
Steve Dent for Engadget

The S1R II is Panasonic’s best hybrid mirrorless camera to date, offering a great balance of photography and video powers. It’s also the cheapest new camera in the high-resolution hybrid full-frame category, undercutting rivals like Canon’s R5 II and the Nikon Z8.
The main downside is rolling shutter that primarily affects video. As I mentioned, though, it won’t pose a problem for many content creators and there are workarounds. Aside from that, it delivers outstanding photo and video quality while offering innovative features like cropless electronic stabilization.
If you need even more resolution, Sony’s 61MP A7R V offers slightly better image quality. And if rolling shutter is really an issue then I’d recommend Canon’s R5 II (though that model does cost $1,000 more) or the Nikon Z8. Should you want to spend considerably less, the Canon R6 II or even Panasonic’s S5 II or S5 IIx are solid picks. For other hybrid shooters, though, Panasonic’s S1R II is a great choice.
This article originally appeared on Engadget at https://www.engadget.com/cameras/panasonic-s1r-ii-review-an-excellent-hybrid-camera-thats-cheaper-than-rivals-163013065.html?src=rss
Best Internet Providers in Pueblo, Colorado
What is the best internet provider in Pueblo?
Xfinity is the top internet provider in Pueblo, Colorado, according to our CNET broadband experts. The cable provider took the top spot thanks to its extensive local coverage and affordable pricing. Xfinity offers plans starting at just $20 per month for 150Mbps. You can more than double that speed for an additional $10, making it an excellent value.
CenturyLink is also widely available in Pueblo, but its DSL speeds range from 10 to 140Mbps, which falls short compared to Xfinity. On the other hand, Quantum Fiber — part of the Lumen Technologies family — delivers faster speeds of up to 8,000Mbps over fiber internet and offers symmetrical upload and download speeds, which is ideal for video calls and gaming. However, Quantum Fiber’s availability in Pueblo is limited.
Secom provides fiber internet in Pueblo as well, but most residents will find the company’s fixed wireless service more accessible. Additional fixed wireless providers in the area include T-Mobile Home Internet, Rise Broadband, and Kellin Communications, with T-Mobile leading in terms of availability, speeds and overall value.
Best internet in Pueblo, Colorado
Pueblo, Colorado internet providers compared
- CenturyLink: DSL, $55/mo., 20-100Mbps, $15/mo. equipment (optional), no data cap, no contract, CNET score 6.7
- Quantum Fiber: fiber, $50-$165/mo., 500-8,000Mbps, no equipment fee, no data cap, no contract, CNET score 6.7
- Rise Broadband: fixed wireless, $45-$50/mo., 25-100Mbps, no equipment fee, 250GB or unlimited data, no contract, CNET score 6.2
- Secom: fiber and fixed wireless, $60-$90/mo. (fiber) or $60-$110/mo. (fixed wireless), 100-1,000Mbps (fiber) or 15-100Mbps (fixed wireless), $5/mo. equipment, no data cap, contract varies, no CNET score
- T-Mobile Home Internet: fixed wireless, $50-$70/mo. ($35-$55 with eligible mobile plans), 87-415Mbps, no equipment fee, no data cap, no contract, CNET score 7.4
- Verizon 5G Home Internet: fixed wireless, $50-$70/mo. ($35-$45 for eligible Verizon Wireless customers), 50-1,000Mbps, no equipment fee, no data cap, no contract, CNET score 7.2
- Xfinity: cable, $20-$85/mo., 150-1,300Mbps, $15/mo. equipment (included in most plans), 1.2TB data cap, no contract or 1 year, CNET score 7
Source: CNET analysis of provider data.
What’s the cheapest internet plan in Pueblo?
- Xfinity Connect: $20/mo., 150Mbps, $15/mo. equipment (optional)
- Xfinity Connect More: $30/mo., 400Mbps, $15/mo. equipment (optional)
- Rise Broadband Unlimited: $45/mo., 25Mbps, $10/mo. equipment
- Quantum Fiber: $50/mo., 500Mbps, no equipment fee
- T-Mobile Home Internet: $50/mo. ($35 with eligible mobile plans), 318Mbps, no equipment fee
- Verizon 5G Home Internet: $50/mo. ($35 with eligible mobile plans), 300Mbps, no equipment fee
- Xfinity Fast: $55/mo., 600Mbps, $15/mo. equipment (optional)
- CenturyLink Internet: $55/mo., 20-140Mbps, $15/mo. equipment (optional)
Source: CNET analysis of provider data.
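One rough way to rank the cheapest plans above is dollars per Mbps of advertised download speed. Here is a quick sketch using starting prices from the table (a simplification: it ignores equipment fees, upload speeds, data caps and later price hikes):

```python
# Starting price ($/mo.) and max download speed (Mbps), from the table above.
plans = {
    "Xfinity Connect": (20, 150),
    "Xfinity Connect More": (30, 400),
    "Rise Broadband Unlimited": (45, 25),
    "Quantum Fiber": (50, 500),
}

# Rank by dollars per Mbps; lower means more speed for the money.
for name, (price, mbps) in sorted(plans.items(),
                                  key=lambda kv: kv[1][0] / kv[1][1]):
    print(f"{name}: ${price / mbps:.3f} per Mbps")
```

Run this way, Xfinity's Connect More plan comes out on top: doubling-plus the Connect speed for $10 more drops the cost per Mbps by nearly half.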
How to find internet deals and promotions in Pueblo
The best internet deals and top promotions in Pueblo depend on what discounts are available at any given time. Most deals are short-lived, but we check frequently for the latest offers.
Pueblo internet providers, such as T-Mobile Home Internet and Xfinity, may offer lower introductory pricing or promotions for a limited time. Many, however, including Quantum Fiber and CenturyLink, run the same standard pricing year-round.
For a more extensive list of promos, check out our guide on the best internet deals.
Fastest internet plans in Pueblo
- Quantum Fiber: $165/mo., 8,000Mbps down, 8,000Mbps up, no data cap, fiber
- Quantum Fiber: $100/mo., 3,000Mbps down, 3,000Mbps up, no data cap, fiber
- Quantum Fiber: $75/mo., 940Mbps down, 940Mbps up, no data cap, fiber
- Xfinity Gigabit Extra: $85/mo., 1,300Mbps down, 35Mbps up, 1.2TB data cap, cable
- Secom Fiber 1000: $90/mo., 1,000Mbps down, 1,000Mbps up, no data cap, fiber
- Xfinity Gigabit: $65/mo., 1,100Mbps down, 20Mbps up, 1.2TB data cap, cable
- Verizon 5G Home Plus Internet: $70/mo. ($45 with eligible mobile plans), 85-1,000Mbps down, 50-75Mbps up, no data cap, fixed wireless
Source: CNET analysis of provider data.
What’s a good internet speed?
Most internet connection plans can now handle basic productivity and communication tasks. If you’re looking for an internet plan that can accommodate video conferencing, streaming video or gaming, you’ll have a better experience with a more robust connection. Here’s an overview of the recommended minimum download speeds for various applications, according to the Federal Communications Commission. Note that these are only guidelines — and that internet speed, service and performance vary by connection type, provider and address.
For more information, refer to our guide on how much internet speed you really need.
- 0 to 5Mbps allows you to tackle the basics — browsing the internet, sending and receiving email, streaming low-quality video.
- 5 to 40Mbps gives you higher-quality video streaming and video conferencing.
- 40 to 100Mbps should give one user sufficient bandwidth to satisfy the demands of modern telecommuting, video streaming and online gaming.
- 100 to 500Mbps allows one to two users to simultaneously engage in high-bandwidth activities like video conferencing, streaming and gaming.
- 500 to 1,000Mbps allows three or more users to engage in high-bandwidth activities at the same time.
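Those FCC tiers boil down to a simple threshold lookup. Here is a minimal sketch, with tier descriptions that are my own shorthand for the guidelines above:

```python
# (minimum Mbps, what it supports): mirrors the FCC guideline tiers above.
TIERS = [
    (500, "three or more users doing high-bandwidth activities at once"),
    (100, "one to two users video conferencing, streaming and gaming"),
    (40, "one user telecommuting, streaming and gaming"),
    (5, "higher-quality video streaming and conferencing"),
    (0, "basics: browsing, email, low-quality streaming"),
]

def what_supports(download_mbps: float) -> str:
    """Return the highest guideline tier a given download speed clears."""
    for minimum, description in TIERS:
        if download_mbps >= minimum:
            return description
    return "below guideline minimums"

# Xfinity's $20/150Mbps plan clears the 100Mbps tier.
print(what_supports(150))
```

Remember these are floors, not targets; real-world throughput varies by connection type, provider and address.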
How CNET chose the best internet providers in Pueblo
Internet service providers are numerous and regional. Unlike the latest smartphone, laptop, router or kitchen tool, it’s impractical to personally test every ISP in a given city. So what’s our approach? We start by researching the pricing, availability and speed information, drawing on our own historical ISP data, provider websites and mapping information from FCC.gov.
But it doesn’t end there. We go to the FCC’s website to check our data and ensure we consider every ISP that provides service in an area. We also input local addresses on provider websites to find specific options for residents. We look at sources, including the American Customer Satisfaction Index and J.D. Power, to evaluate how happy customers are with an ISP’s service. ISP plans and prices are subject to frequent changes; all information provided is accurate as of the time of publication.
Once we have this localized information, we ask three main questions:
- Does the provider offer access to reasonably fast internet speeds?
- Do customers get decent value for what they’re paying?
- Are customers happy with their service?
While the answer to those questions is often layered and complex, the providers who come closest to “yes” on all three are the ones we recommend. When it comes to selecting the cheapest internet service, we look for the plans with the lowest monthly fee, though we also factor in things like price increases, equipment fees and contracts. Choosing the fastest internet service is relatively straightforward. We look at advertised upload and download speeds, and also take into account real-world speed data from sources like Ookla and FCC reports. (Ookla is owned by the same parent company as CNET, Ziff Davis.)
To explore our process in more depth, visit our page on how we test ISPs.
FAQs on internet providers in Pueblo, Colorado
What is the best internet service provider in Pueblo?
Xfinity is the best internet service provider in Pueblo due to its wide availability of high-speed plans and competitive pricing. Xfinity is available to nearly every Pueblo address, offering the cheapest internet plan and the fastest speeds in the area.
Is fiber internet available in Pueblo?
According to the most recent FCC data, fiber internet service in Pueblo is available to approximately 30% of households, or roughly 16,200 homes. Serviceability is greatest around CSU Pueblo and in the southwest part of the city. Quantum Fiber is the largest fiber internet provider in Pueblo, though Secom also offers local fiber internet service.
What is the cheapest internet provider in Pueblo?
Xfinity offers the cheapest internet plan in Pueblo, with service starting at $20 per month for max download speeds of 150Mbps. For $10 more per month (and still cheaper than service from any other major ISP in Pueblo), Xfinity’s Connect More plan comes with speeds up to 300Mbps. A one-year contract may be required for the lowest pricing, and renting Wi-Fi equipment from Xfinity could add $15 to your monthly bill.
Which internet provider in Pueblo offers the fastest plan?
Quantum Fiber offers the fastest download speed in Pueblo, up to 8,000Mbps, starting at $165 per month. Xfinity comes in second, though its max upload speeds are significantly slower (35Mbps) due to its cable network. Secom and several other local fiber internet providers in Pueblo don’t offer max download speeds as fast as Xfinity, but they are capable of delivering much faster upload speeds, often equal to the plan’s max download speed.
Bionic Bay Review: A speedrunner’s delight
Let’s get this out of the way: Bionic Bay is going to be compared to Limbo and Inside. A lot. It’s inevitable. Psychoflow Studios, in collaboration with Mureena Oy, has delivered what feels like a sci-fi reimagining of Playdead’s moody 2010 classic. The visual storytelling, the shadowy menace, the precisely brutal puzzles — it’s all here, reassembled with a slick, biomechanical sheen.
But don’t mistake Bionic Bay for a copycat. Beneath the familiar silhouette lies a wildly inventive and occasionally maddening precision platformer that plays like a love letter to physics. This isn’t just puzzle-solving; it’s gravity-bending, object-swapping, mid-air improvisation that can make you feel like a time-warping parkour demigod when it all clicks.
Clocking in at around 8–10 hours (depending on how reckless or masochistic you are), it’s tightly paced — though not always evenly. I played on PlayStation 5, and somewhere in the middle of its surreal, flesh-and-metal dreamscape, I found myself wondering: How the hell are they going to top this?
Welcome to the Otherworld
Credit: Psychoflow Studios / Mureena Oy / Kepler Interactive

Bionic Bay technically has a story, but don’t expect much of a narrative to latch onto. Most of it unfolds through cryptic text logs that pop up as you stumble across the corpses of long-dead scientists, scattered like breadcrumbs across this eerie, decaying world.
From what my very smooth, very confused brain could piece together, you’re the unfortunate scientist who has survived an experiment gone sideways — catapulted into the guts of an ancient, hyper-advanced alien civilization. That’s…pretty much it. And honestly, that’s fine. The “plot” is more ambient than essential — it’s just vibes, bro. Really, it’s just an excuse to hurl yourself over chasms wider than your rent bill.
Thankfully, you’re not doing it alone, or entirely as a human. Early on, the game zaps you with a genetic upgrade called “elasticity,” essentially turning your character from discount Gordon Freeman into a wall-bouncing, momentum-bending physics god.
As you progress, Bionic Bay hands you a trio of reality-breaking tools that would make any physics professor sweat. First up: a transporter that lets you swap places with nearby objects. Then there’s the Chronolag, a pair of sunglasses that slows time in a tight radius around you. Finally, the gravitational backpack, a piece of high-tech wizardry that lets you rotate the direction of gravity with a flick of the right stick.
Naturally, these gadgets come with caveats. The swap tool only works with objects currently on screen (no teleporting cheese here). The Chronolag is limited to a tense 30 seconds and cuts off the second you take damage or go full ragdoll. The gravity backpack allows for two midair uses — after that, you’re out of tricks and headed straight for a hard landing.
But despite the limitations, or even because of them, each tool is essential to cracking Bionic Bay’s brutally tight puzzle platforming. And I mean tight. These puzzles don’t just flirt with precision; they demand pixel-perfect timing and surgical object placement. Especially in the later levels, success hinges on mastering momentum, nailing swaps mid-fall, and contorting through gaps designed to mock your sense of space and rhythm.
Even with all the high-tech tools at your disposal, mastering your own movement is essential to solving Bionic Bay’s intricate puzzles. One of the most versatile mechanics is the dash, triggered with the Circle button. It sends your character hurtling forward in a curled, high-speed motion — part movement boost, part crouch — perfect for slipping through tight gaps or gaining momentum.
The dash can also be chained with jumps for extended traversal. Combining it with the X button allows for long, arcing leaps that feel like controlled bursts of flight. In practice, it’s a rhythmic sequence: dash, jump, dash again. The Circle button also functions as a dive midair, letting you fine-tune your trajectory or squeeze through narrow environmental windows with just the right amount of force.
A solution for everyone
Credit: Psychoflow Studios / Mureena Oy / Kepler Interactive

The environments in Bionic Bay aren’t just backdrops — they’re fully interactive playgrounds where the rules are loose, and experimentation is everything. Most puzzles don’t lock you into a single solution; instead, they hand you a toolbox and let your grasp of the game’s intricate physics system guide the way. Getting from point A to point B is less about following a path and more about inventing one, usually while avoiding hazards like vaporizing lasers, insta-freeze traps, and an absurd number of explosive land mines.
Take one scenario: I needed to reach a high cliff from ground level. One option was to roll a barrel into place, launch myself off it, swap positions mid-air, race over to climb the object, jump off it, and grab the ledge. Another route? Use the land mines — delicately timed detonation included — to catapult me skyward using the previously mentioned object as a shield. The game doesn’t just allow for creativity; it thrives on it, practically begging players to break it in the most stylish ways possible. It’s built for the kind of player who sees every mechanic as a potential exploit, and Bionic Bay rewards that mentality at every turn.
Bionic Bay drips with atmosphere — equal parts decaying alien architecture and rusted industrial labyrinth. In one moment, you’re dwarfed by writhing, root-like structures lit by an amber glow that feels almost biblical in its intensity. In the next, you’re navigating a colossal tangle of mechanical guts like massive gears, broken scaffolding, and planet-sized orbs suspended in shafts of scorching light. It’s biomechanical horror meets cosmic wonder, with every frame soaked in grime, heat, and a strange, almost sacred silence. It’s haunting, oppressive, and stunningly beautiful all at once.
Bionic Bay walks a fine line visually. Despite the protagonist being mostly a black silhouette, the environments are detailed enough that you never lose track of him, even in the most chaotic moments. And — maybe this dates me — but the contrast between the character and the background instantly brought Vector to mind, that sleek parkour side-scroller from the iOS glory days of 2012. It’s as if Psychoflow took that minimalist, kinetic style and mashed it together with moody pixel art, otherworldly concept design, and the eerie tone of Limbo.
The result is something familiar yet fresh, a visual identity that feels both nostalgic and completely alien.
Is Bionic Bay worth it?
Credit: Psychoflow Studios / Mureena Oy / Kepler Interactive

Performance-wise, there’s not much to complain about. Bionic Bay runs smoothly on PS5, with just a single framerate dip cropping up late in the game. I’m curious to see how the online mode holds up, but since I was playing on a pre-release build, the multiplayer was a ghost town even after I unlocked it by finishing the main campaign.
As for sound design, I was fully locked in. The soundtrack rarely takes center stage, but when it does, it hits — pulsing synths that creep in and swell at just the right moments, adding a heavy, unnerving layer to the game’s far-future horror vibe. It looks great, it sounds great, and while the single-player campaign does drag a bit in the middle, it’s a gorgeous slog. A stylish, ambient descent into mechanical madness that knows how to hold your attention, even when it’s testing your patience.
Bionic Bay is absolutely worth your time, especially if you’re the kind of player who thrives on challenge, experimentation, and atmospheric immersion. It doesn’t reinvent the puzzle platformer but pushes the genre in a clever direction with its physics-driven mechanics and open-ended puzzle design. It’s a game that respects your intelligence and rewards your curiosity while looking like a fever dream built from scrap metal and alien roots.
It’s not perfect — the pacing stumbles in the middle, and the story barely registers — but the overall experience is too striking to ignore. For fans of Limbo, Inside, or even old-school Vector, Bionic Bay is a beautifully harsh evolution of the genre. Just be prepared to die. A lot.
For more Mashable game reviews, check out our OpenCritic page.
5 games I used to think were 10/10 masterpieces but was wrong about
I’ve been playing video games since the age of two, with the yellow Beetle from Midtown Madness 2 being my first companion in the digital world. Now, 24 years later, I’ve racked up countless gaming experiences — some good, some bad, and some unforgettable. As a teen growing up during a time when gaming rapidly evolved, my benchmarks for a “perfect” game kept shifting. Sure, some of those games have aged like fine wine. But others? Not so much.
There was a time when a hack-and-slash like Daemon Vector would’ve cracked my top ten, but today it’s barely relevant. Just like that, there were times I went gaga over certain games, calling them masterpieces and handing them a mental 10/10, G.O.A.T. badge without hesitation. But with age and experience, I’ve come to accept that some of those so-called “perfect” games… weren’t really that perfect.
Related
I’m still mad about these 5 canceled games we never got
From Spider-Verse dreams to Boba Fett’s lost origin, here are 5 cancelled games that still haunt gamers.
5
Cyberpunk 2077
A brilliant foundation, but the house is missing rooms
Cyberpunk 2077 scratched a very specific itch for me — one I hadn’t felt since the golden days of Deus Ex. The prologue alone had me raving to my non-gamer friends. It was that cool. The gameplay is slick, the traversal is fun, and the premise is flat-out bonkers in the best way. But after finishing Elden Ring — arguably a flawless open-world experience — it became impossible to ignore the cracks in Cyberpunk’s design.
The side quests are insanely fleshed out, but the main story rings emotionally hollow and leaves very little impact. A great story is supposed to leave an impact above all else, and that’s exactly where I believe Cyberpunk 2077’s central narrative falls short. Worse yet, the “life path” you choose, which defines V’s entire backstory, barely changes anything in the story outside a handful of dialogue options during quests.
Why couldn’t I have remained a Corpo, playing double agent from within Arasaka? Why did Johnny’s takeover boil down to a binary choice at the very end instead of a steady emotional decline? For someone who stole Arasaka’s most prized tech, the lack of serious consequences throughout the campaign was baffling. The excellent expansion, Phantom Liberty, proves that Cyberpunk 2077 can tell a gripping, focused story, which only makes the base campaign feel more hollow in comparison.
Cyberpunk 2077 is still an 8/10 for me, and I fully intend to start the game over in the near future. However, it’s just not the 10/10 banger I once believed it to be.
Cyberpunk 2077
Related
10 greatest open world games you can get lost in
Lose yourself in these 10 unforgettable open world games that make you forget the real world
4
Batman: Arkham Knight
The definitive Batman experience buried beneath a Batmobile obsession
I loved Batman: Arkham Knight as a teenager. The gritty visuals, the brutal combat, and the rain-drenched city — it was all so Gotham. The story was emotionally impactful, the ending beautiful, and it all came together to make Arkham Knight a solid 10/10 for me. In retrospect, however, I can’t shake off just how over-reliant the game is on the Batmobile, so much so that it left a bad taste in my mouth upon a revisit.
I spent a major chunk of the game maneuvering the Batmobile, and throughout those moments, I was a mech on wheels, not the world’s greatest detective or the terrifying shadow who stalked evil. When the Batmobile is practically shoehorned into puzzles, combat, boss fights, and stealth segments, it becomes less of a cool tool and more of an overbearing requirement.
Worse, the true ending is locked behind Riddler trophies that made online guides almost required reading. It’s like buying a box set and being told the finale is in a separate box you don’t have. Today, Arkham Knight is still a solid, highly recommended game for me, but definitely not the flawless superhero sim I used to champion. That mantle has been taken by 2018’s Marvel’s Spider-Man.
Batman: Arkham Knight
Related
7 PC gaming scandals you forgot about
Sometimes, it’s worth remembering the bad bits
3
Assassin’s Creed IV: Black Flag
Gorgeous but structurally dated
Assassin’s Creed IV: Black Flag was the first game ever that made me go “holy cow, this is next-gen.” I played it on my brand-new GTX 760 back in 2013, and it was breathtaking. Naval combat finally clicked for me, despite having paid no mind to it in AC III. In Edward, I once again had a handsome, roguish, and charming protagonist after Ezio, and he became my third-favorite protagonist in the entire Assassin’s Creed series, behind Altaïr and Ezio.
But on a recent revisit, I couldn’t ignore just how much the game leans on repetitive tailing and eavesdropping missions. The world hinted at the open-world RPGs Ubisoft would eventually lean into, but back then it still felt expansive yet digestible. I still want that rumored remake — I’d play it day one — but in hindsight, the repetition and lack of real mission variety bring it down from masterpiece territory. My nostalgic glasses may be strong, but they don’t make me blind.
Assassin’s Creed IV: Black Flag
Related
Calm down — Assassin’s Creed Shadows is surprisingly good
Assassin’s Creed Shadows delivers stunning visuals and tight combat but stumbles under weak writing and pacing issues.
2
Forza Horizon 4
A love letter that forgets to include the reader
After having played Driveclub on my base PS4, and then mourning the shutdown of its studio, Forza Horizon 4 was the game that reignited my love of racing. My friend and I spent weeks on it, skipping weeks’ worth of lectures to get that H badge. It was everything I wanted—visually stunning, lightning-fast, and packed with content.
But recently, while introducing my partner to gaming, I noticed how punishing the game can be for newcomers — not to the Horizon series, but to racing in general. The narrow roads in Edinburgh? Brutal. Watching her bounce off walls more than asphalt was heartbreaking. I myself had taken a while to master the game, but the fun factor gets buried when your first impression is so discouraging. Worse still, the beautiful map feels small, its scale likely sacrificed in favor of showcasing seasonal shifts. And not being able to manually change seasons? That was a buzzkill. We started in winter, and it was so cold and unforgiving that I literally had to change my PC’s system date just so she could experience spring evenings in Edinburgh.
I still love Forza Horizon 4, but it’s not quite the masterpiece I once made it out to be.
Forza Horizon 4 is now delisted from all online storefronts.
Related
Someone connected a racing simulator setup to their RC car using an Arduino, and I’m seriously jealous
Micro Machines in real life.
1
The Last of Us Part II
An emotionally complex narrative that stumbles in its delivery
At one point, I believed The Last of Us Part II was the boldest and most powerful narrative ever delivered in a video game. And in many ways, I still admire its raw ambition. It subverted expectations, shattered comfort zones, and forced me to confront the uncomfortable. But on replay — and with the benefit of hindsight — the cracks in its pacing and structure began to show. The early game’s jarring time jumps and the tonal imbalance between the prologue and Act 1 feel unrefined, almost unsure of themselves. And then, just as the story regains momentum, it slams the brakes and resets halfway through.
Yes, the structure serves a purpose — to humanize, challenge bias, make you lose your sense of self, and question the act of revenge. But a day-by-day switching narrative could’ve preserved that emotional duality without draining the impact. The problem isn’t the story it tells — it’s how it tells it. The shifts in gameplay and tone can feel like a grind, with emotional peaks dulled by repetition and uneven pacing. And in a game so dependent on narrative to drive home its weight, that’s a real problem.
A lot of moments while playing The Last of Us Part II reminded me of the problems I had with seasons seven and eight of Game of Thrones, where everybody and their dog were practically teleporting across the continent, while the first game made a point of showing a cross-country journey that took almost a whole year. Make no mistake, The Last of Us Part II is still one of the boldest AAA games ever made. But perfect? I used to think so. Now, I think it’s a beautifully flawed experience that aims for greatness and lands just short.
The Last of Us Part II
Related
I reopened an old wound by playing The Last of Us Part II Remastered on PC
I played The Last of Us Part II Remastered on PC, and it hit harder than ever. A technical triumph and emotional wrecking ball.
Growing up means looking back
It’s strange, really. We often think of the games we loved as timeless, untouchable classics — as if our memories of them somehow froze their perfection in place. But just as we grow, so do our expectations. And sometimes, with a bit of distance and a new perspective, we see the cracks in what once felt like masterpieces.
That’s not to say these games are bad — far from it. I still cherish each of them for what they gave me in the moment. The rush, the wonder, the hours lost to obsession. But a 10/10 game? That’s a rare thing. And the older I get, the more I realize it’s okay to admit that some of my former “perfect” games weren’t really perfect after all. They were just perfect for me at the time.
M4 iPad Pro, USB-C Magic Mouse, iPhone 15 Pro, more 9to5Mac
Today’s Apple gear deals are headlined by a couple of notable open-box listings with full Apple warranties – the most affordable M4 iPad Pro is now $180 off and we have some rare discounts on the USB-C Magic Mouse (including both the black and white models). From there, a new low has emerged on the 13-inch M3 iPad Air in brand-new condition, alongside unlocked iPhone 15 Pro units at up to $650 off the original listings. All of that and more awaits below.
Apple’s most affordable M4 iPad Pro hits one of its best prices at $180 off from $820 (Open-box w/ 1yr. Apple warranty)
Deals on Apple gear have started to get a little tight over the last week or so in the wake of U.S. tariffs, but Best Buy’s open-box program remains a wonderful source of discounts on everything from the deals we spotted yesterday on Apple Pencil Pro to one of the lowest cash prices to date on the most affordable M4 MacBook Air. Today, however, we are looking at the M4 iPad Pro, and more specifically the least pricey model in the lineup. We have yet to see this one go more than $150 off, and even those discounts were limited-time holiday on-page coupon offers at Amazon – but you can now land one at $180 off in “excellent” open-box condition with a full warranty. Details below.
Best Buy is now offering the 11-inch 256GB Space Black M4 iPad Pro down at $819.99 shipped. This is the “excellent” condition open-box listing that also ships with all of the usual accessories and a 1-year warranty – an actual “Apple One (1) Year Limited Warranty.”
Regularly $999, and currently starting at $919 via Amazon in brand-new condition, this is $179 off the list price, the lowest we have tracked in 2025 from a dealer of Best Buy’s repute, and the lowest price we can find with a 1-year Apple warranty. This model, the most affordable M4 iPad Pro model, almost never drops more than $150, if that.
Again you can score the “good” and “fair” condition open-box units for less, but it’s hard to recommend something that’s in just good condition at prices like this – we all really want our shiny new iPad Pro to be as shiny as possible if you know what I mean.
All of that said, it is worth browsing through the rest of the M4 iPad Pro configurations at Best Buy right here – there are open-box deals on just about all of them that are well below what you will find in new condition right now on most models.
Here’s a look at the best new discounts via Amazon across the lineup:
M4 iPad Pro 11-inch
M4 iPad Pro 13-inch
Upgrade to Apple’s USB-C Magic Mouse with these rare open-box deals: White $61 or Black $72 (1-yr. Apple warranty)
If you have been holding off for a deal on the new USB-C Apple Magic Mouse, today might be your chance. Historically speaking, deals on Apple’s official Magic Mouse are relatively rare – there has only been one good chance to score a price drop on the new black variant, and the white USB-C model has yet to drop below $78. However, Best Buy now has some “excellent” condition open-box listings with full 1-year Apple warranties in tow, at the best prices we have tracked to date on both the black and white models from a reputable dealer.
Just as a reminder, the white model carries a $79 list price and the black fetches a premium at $99 from Apple, both of which are fetching as much at Amazon right now. But, as mentioned above, pricing on the Geek Squad-verified open-box units in “excellent” condition at Best Buy is much less than that:
Alongside the “Apple One (1) Year Limited Warranty” they ship with, as well as being covered by Best Buy’s Return & Exchange Promise, here are the details you need to know about these open-box listings:
- Works and looks like new. Restored to factory settings.
- Includes all original parts, packaging and accessories (or suitable replacement).
The newer USB-C edition of the Apple Magic Mouse is largely identical to the Lightning version, albeit with a USB-C port on the underside so you can finally be rid of those Lightning cables. I don’t know about you, but my Magic Mouse is the only piece of kit I still use that requires one and, while I really don’t need to upgrade, I really can’t wait to finally shed my reliance on the old Apple connector standard.
Unlocked iPhone 15 Pro now up to $650 off orig. prices from $744 (Amazon Renewed Premium, 1-yr. warranty)
Apple has already been flying in planeloads of iPhones to get ahead of potential tariff conundrums, but Amazon’s Renewed Premium listings on the existing iPhone 15 Pro and Pro Max units can deliver some serious savings, coming in at hundreds below the original unlocked prices from Apple. They also ship with a full 1-year warranty and deliver units in better condition than the average refurb you might bump into on Amazon. We just spotted a new low on the heavily upgraded 1TB iPhone 15 Pro in Natural Titanium down at $846.45 shipped – that’s more than $650 under the original price or a comparable new-condition iPhone 16 Pro – but there are deals on several configurations worth scoping out today down below.
While the pricing on iPhones (and about a million other things) is still up in the air at this point, Amazon’s Renewed Premium units remain a notable source of savings. We are talking about prices as much as $650 under the original listings and Apple Store prices on unlocked iPhone 16 models. These certainly aren’t iPhone 16 models, but they are the only other Apple handsets that support Apple Intelligence features and still deliver a compelling iPhone experience – they were, after all, arguably the world’s greatest phone as of September last year, before the iPhone 16 launched.
Satechi has now launched a sale event on its official site featuring a range of its charging gear – everything will drop 30% at checkout using code CHARGE30. However, one of the standout deals is arguably its Qi2 Trio Wireless Charging Pad, which will drop to $91 with the code above. That’s a solid price, but you’ll want to completely ignore it and head straight over to the brand’s official Amazon storefront instead, where you’ll find it marked down to $87.12 shipped right now, with Prime shipping benefits. This one carries a regular $130 list price via Satechi but has more recently been sitting closer to $100 on Amazon, where it is now undercutting the direct sale price.
The Qi2 Trio Wireless Charging Pad is easily one of the best models with this sort of form factor I have ever used. The metal-rimmed base with the vegan leather wrap up top is simply gorgeous to my eye. The fully articulating main Qi2 15W MagSafe pad delivers ideal viewing angles, and you can even fold it down flat if you ever need to stick it in your carry kit. It, of course, also features a magnetic Apple Watch charger and a third Qi pad for AirPods or a second handset. It is a really good one if you ask me and a clear contender for top 5 on the internet.
Hit up our launch coverage for a closer look.
There are some notable deals worth browsing through in the direct sale on the Satechi site though, including desktop charger units and some of its higher-end USB-C cable solutions, but we also wanted to direct your attention to its wonderful 15W Qi2 Wireless Car Charger – we loved this one after getting to go hands-on for review, and the CHARGE30 code will drop it down from the usual $60 to $42 to deliver the lowest price we can find. This is on par with the lowest price we have tracked on Amazon.
Browse through the rest of the Satechi gear eligible for the code above on this landing page.
Today’s accessories and charging deals:
Apple’s most affordable new 16GB M4 MacBook Air is now up to $110 off (Open-box w/ 1-yr. Apple warranty)
We are still tracking some straight-up $50 price drops on the new M4 MacBook Air – the best cash discounts we have tracked to date in new condition for folks without gear to trade in. That said, we love our Best Buy open-box listings with the full 1-year Apple warranty attached, and they are now offering the lowest prices to date on the most affordable model at up to $110 off. Details below.
Now, you will find the entry-level 13-inch model with 16GB of RAM and 256GB storage capacity starting from $950 in brand-new condition over at Amazon. However, all but the silver model are selling for much less than that as part of Best Buy’s “excellent condition” open-box listings with a full 1-year Apple warranty attached.
We can certainly understand why some folks would rather a brand new unit, but if you’re looking to score the best deal possible from a reputable dealer in the early part of the year here, these open-box listings are worth a look:
Now, you will find even lower prices on the “good” and “fair” condition units, but we tend to recommend the “excellent” models – if you’re going to buy an M4 MacBook Air, you’re likely going to want one in more than just good condition.
You will also find open-box deals on other configurations in the M4 Air lineup waiting right here – look for the small “Open-Box” link below the “Add to Cart” button.
Here’s how the brand-new deal pricing works out at Amazon right now for comparison:
- 13-inch M4 MacBook Air 16GB/256GB $949 (Reg. $999)
- 13-inch M4 MacBook Air 16GB/512GB from $1,184 (Reg. $1,199)
- 13-inch M4 MacBook Air 24GB/512GB $1,359 (Reg. $1,399)
- 15-inch M4 MacBook Air 16GB/512GB from $1,139 (Reg. $1,199)
- 15-inch M4 MacBook Air 16GB/512GB $1,342 (Reg. $1,399)
- 15-inch M4 MacBook Air 24GB/512GB $1,549 (Reg. $1,599)
FTC: We use income earning auto affiliate links. More.
How to add a super-fast SSD to your Mac mini M4 without paying Apple’s ridiculous storage prices
The Apple Mac mini M4 is arguably the biggest bargain in computing. This (almost) pocket-sized mini Mac is fast, powerful, near-silent and costs around half the price of the cheapest equivalent MacBook Air. It’s almost too good to be true.
I bought one last month, my first new Mac since the MacBook Air M1 in 2020, and it’s given me that same sense of ‘how did they do that?’ wonder.
We described it as “the best small form factor PC” in our Mac mini M4 review – and with good reason. I can’t believe how quiet it is, how small it is, how swift it is at doing things that my now-slightly creaking M1 MacBook Air struggles with (such as opening more than 10 Chrome tabs at once).
But if you’re thinking of buying one – and you totally, definitely, absolutely should – I have one bit of advice for you: do not waste your money on Apple’s own internal SSD upgrades.
Seriously, don’t even consider it. Because while the Mac mini is an undoubted bargain, Apple’s storage is so overpriced that it’s a joke.
Instead, you’ll want to buy an external enclosure and NVMe storage. That’s exactly what I did, and it’s saved me a fortune.
The problem: Apple SSD storage is too expensive
(Image credit: Apple) The simple fact is that Apple charges too much for SSD storage. Like way too much.
The base Mac mini M4, with 16GB RAM and a paltry 256GB SSD, costs $599 / £599. And while it really is one of the best bargains in computing history, that’s despite the storage on offer, rather than because of it.
Doubling it to 512GB costs another $200 / £200, and bumping it up to 1TB doubles that again.
The maximum SSD size available on the base M4 is 2TB – and for that you’d pay a whopping $1,399 / £1,399. That’s $800 / £800 extra for another 1.75TB of SSD storage.
There’s no 4TB model available on the base M4, but if you step up to the M4 Pro – which has other benefits, such as a more powerful 12- or 14-core CPU and 16- or 20-core GPU – you can upgrade to a 4TB or 8TB SSD.
For that privilege, you would pay an astonishing $600 / £600 extra for the jump from 2TB to 4TB, and then a further $1,200 / £1,200 to take you to 8TB.
| Model | Storage | Price US | Price UK |
| --- | --- | --- | --- |
| Mac mini M4 16GB | 256GB | $599 | £599 |
| Mac mini M4 16GB | 512GB | $799 | £799 |
| Mac mini M4 16GB | 1TB | $999 | £999 |
| Mac mini M4 16GB | 2TB | $1,399 | £1,399 |
| Mac mini M4 Pro 24GB | 512GB | $1,399 | £1,399 |
| Mac mini M4 Pro 24GB | 1TB | $1,599 | £1,599 |
| Mac mini M4 Pro 24GB | 2TB | $1,999 | £1,999 |
| Mac mini M4 Pro 24GB | 4TB | $2,599 | £2,599 |
| Mac mini M4 Pro 24GB | 8TB | $3,799 | £3,799 |
You can easily calculate what Apple is charging per GB for its upgrades, so I did just that.
| Model | Extra storage (GB) | Extra cost ($/£) | Cost per GB ($/£) |
| --- | --- | --- | --- |
| Mac mini M4 16GB / 512GB | 256 | 200 | 0.78 |
| Mac mini M4 16GB / 1TB | 488 | 200 | 0.41 |
| Mac mini M4 16GB / 2TB | 1000 | 400 | 0.40 |
| Mac mini M4 Pro 24GB / 1TB | 488 | 200 | 0.41 |
| Mac mini M4 Pro 24GB / 2TB | 1000 | 400 | 0.40 |
| Mac mini M4 Pro 24GB / 4TB | 2000 | 600 | 0.30 |
| Mac mini M4 Pro 24GB / 8TB | 4000 | 1200 | 0.30 |
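The per-gigabyte figures in the table above are easy to reproduce yourself. Here’s a short Python sketch using Apple’s list prices from the first table (the 990 Pro comparison uses the $100/1TB price quoted later in this piece):

```python
# Cost per GB of each Apple SSD upgrade step, using the list prices above.
# Capacities in GB, with 1TB treated as 1000GB to match the table.
upgrades = {
    "M4 256GB -> 512GB": (512 - 256, 200),
    "M4 512GB -> 1TB":   (1000 - 512, 200),
    "M4 1TB -> 2TB":     (2000 - 1000, 400),
    "M4 Pro 2TB -> 4TB": (4000 - 2000, 600),
    "M4 Pro 4TB -> 8TB": (8000 - 4000, 1200),
}

for step, (extra_gb, extra_cost) in upgrades.items():
    print(f"{step}: {extra_cost / extra_gb:.2f} per GB")

# For comparison: a 1TB Samsung 990 Pro at $100 works out far cheaper.
print(f"990 Pro 1TB: {100 / 1000:.2f} per GB")
```

Running this prints 0.78 for the first step and 0.30–0.41 for the rest, against 0.10 for the third-party drive.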
So, leaving aside the jump from 256GB to 512GB – which is just ridiculously bad value at 0.78 US dollars or pounds per gigabyte extra – you’re generally paying 30-40 cents/pence per GB.
How does that compare to third-party storage? Not well.
Our best SSD buying guide lists nine options, all of which have been thoroughly tested and TechRadar-approved.
Number one in the list, the Samsung 990 Pro, currently costs $100/£90 for 1TB – which works out at around 10 cents / 9 pence per GB. Jump up to the 4TB model and the price per GB drops to 7.5 cents, or 6 pence.
Factor in that the 990 Pro is one of the most expensive options and you can see the difference here – it’s roughly a quarter of the price of going direct with Apple.
The flipside is that you will need to buy an external enclosure too, but these are not expensive.
And nor do you need to worry about it being a difficult installation process. The most complicated thing about all of this, if you’re not particularly techie, is simply the terminology around it all.
The solution: What you need to buy
(Image credit: Future) You have two options for upgrading your Mac mini’s storage: a portable SSD or an internal SSD plus an external enclosure.
The former is simpler, in that you just buy one off the shelf and plug it into a port on the mini, but portable drives are generally more expensive per GB and almost certainly slower.
Still, if you want to take your storage on the road with you, this might be your best bet; our guide to the best portable SSDs has plenty of options.
I took the other route, which involved buying an internal SSD and a separate enclosure, or case, to put it in; I’ll go into details on that below.
This has the advantage of being fast enough to rival the mini’s internal SSD – well, so long as you buy the right one.
Know your ports
(Image credit: Future) If you’re coming to the Mac mini M4 from a MacBook, the ports on offer will be a welcome surprise: you get two USB-C (USB 3.2) ports and a 3.5mm headphone jack on the front, plus three Thunderbolt 4 (USB-C) ports, HDMI and Ethernet on the rear.
Jargon buster
M.2: The SSD’s form factor; small, rectangular, like a stick of gum
NVMe: The SSD type; massively faster than the older SATA
PCIe 4.0: The interface bus standard the SSD will connect to. For the fastest speeds this would be 4.0, but the older 3.0 will also do just fine
Thunderbolt 4: The connectivity standard used by the Mac mini M4’s rear ports. It can charge devices, handle two 4K displays and transfer data via USB
USB 4: The USB protocol used by Thunderbolt 4, enabling speeds of up to 40Gb/s
USB 3.2: The previous generation of USB standard has a maximum speed of 10Gb/s. The mini’s two front USB-C ports have this spec
External storage can plug into any of those five USB-C ports, but you’ll get the fastest speeds from the Thunderbolt 4 options on the rear. These use USB 4, and have a maximum data transfer speed of 40Gb/s, compared to 10Gb/s for the front ports.
(The Mac mini M4 Pro, meanwhile, has Thunderbolt 5, which can handle up to 120Gb/s. That’s arguably overkill, but then so is the CPU…)
Theoretically, the absolute fastest speeds will come from an SSD that can take advantage of USB 4 – look for SSDs listed as PCIe 4.0 or ‘Gen4’, with above 7,000MB/s read and 6,000MB/s write. The Samsung 990 Pro mentioned above is one such SSD.
That said, you won’t get those kinds of speeds in real-world use, due to USB 4’s 40Gb/s limit. You could therefore buy a cheaper PCIe 3.0 drive such as the Samsung 970 EVO Plus. It might be a tiny bit slower than a 4.0 SSD, but you won’t notice it outside of benchmarks.
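A quick back-of-the-envelope conversion shows why the port, not the SSD, is usually the bottleneck. This sketch uses decimal units and ignores protocol overhead, so real-world figures will land somewhat lower:

```python
# Convert link speeds from gigabits per second to megabytes per second
# (decimal units, 8 bits per byte; protocol overhead is ignored).
def gbps_to_mb_s(gbps: float) -> float:
    return gbps * 1000 / 8

usb4_ceiling = gbps_to_mb_s(40)    # Thunderbolt 4 / USB 4 rear ports
usb32_ceiling = gbps_to_mb_s(10)   # USB 3.2 front ports

print(f"USB 4 ceiling:   {usb4_ceiling:.0f} MB/s")   # 5000 MB/s
print(f"USB 3.2 ceiling: {usb32_ceiling:.0f} MB/s")  # 1250 MB/s

# A Gen4 SSD rated ~7,000 MB/s already exceeds the USB 4 ceiling, so a
# ~3,500 MB/s Gen3 drive gives up less than the spec sheets suggest.
```

The 1,250MB/s front-port ceiling is also why the article steers you toward the rear Thunderbolt ports for external storage.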
In terms of form factor and type, meanwhile, there’s a dizzying array of jargon associated with SSDs – but there’s no need to be confused by it all.
Simply make sure you buy an M.2 NVMe SSD, ideally Gen4 / PCIe 4.0 if you can afford it, and all will be well.
The enclosure
Once you’ve chosen your SSD you’ll need something to put it into. It is technically possible to upgrade the Mac mini’s internal storage, but this would void your warranty, and given how easy it is to use an enclosure I’m not sure it’s worth it.
There are dozens of suitable enclosures for SSDs, but all simply provide a home for the storage to slot into, plus a cable to connect to a USB port.
Some have active cooling fans, some use passive cooling; given that the Mac mini is almost silent, I hated the idea of spoiling that quiet, so went with a passive option.
Once again, you’ll need to ensure the enclosure can handle the speed of your SSD and then transfer that speed to the Mac.
Therefore, searching for ‘M.2 enclosure’ will not suffice – you might end up with something that only works with USB 3.2.
Instead, you specifically need an M.2 NVMe USB 4 or Thunderbolt 4 enclosure. You can use USB 3.2 if you want, but you’ll be limited to about a third of the speed.
What I bought
(Image credit: Samsung) SSD: Samsung 990 Pro M.2 NVMe 4TB
In terms of the SSD, I ended up buying the Samsung 990 Pro M.2 NVMe in its 4TB guise. This cost me £257 – which works out at 6 pence per GB.
This is definitely overkill for the Mac mini M4, in that I’m only getting about half of its potential speed, with USB 4 being the bottleneck.
However, I figured that I may well upgrade the Mac in the next couple of years, and if I do the Samsung SSD can come along for the ride. Plus, we gave it 5 stars in our Samsung 990 Pro SSD review, so it would seem rude not to pick this.
As I said above, you could spend less on a PCIe 3.0 SSD and not lose too much in terms of performance. Either way, you will definitely save money over taking the Apple upgrade.
Enclosure: OWC Express 1M2
For the enclosure, I went with the OWC Express 1M2, which cost another £149.
This is one of the highest-rated USB 4 enclosures around; we awarded it 4.5 stars in our OWC Express 1M2 review, and on Amazon it has the same score from more than 200 user reviews.
It’s a beautifully made thing, with an entirely metal body covered with fins that make possible its passive cooling. It’s not small – about the length of the Mac mini itself, albeit much narrower – and weighs about 250g, but that didn’t worry me as it isn’t something I’ll be moving around very much.
(Image credit: Future) More importantly (to me), it looks great next to the Mac mini; many of the cheaper enclosures are black plastic affairs, and I would rather pay a little extra for the aesthetics.
It also has rubber feet on the bottom that keep it stable, plus a USB-C port in which to plug the all-important (and supplied) data cable. I can’t praise it enough.
Putting it all together
I’m no stranger to SSD or RAM upgrades, but even a complete novice will find the OWC Express 1M2 easy to set up – not least because there’s a super-helpful video tutorial on the OWC website.
You’ll need to remove a couple of screws, then slide off the bottom of the case to reveal the NVMe slot inside. Remove one more screw, insert the SSD, push down to make contact with the thermal pads, put the screws back in and you’re away.
The whole thing takes about five minutes, max; it’s really not a complicated process.
Next, you’ll need to hook it up to one of the Thunderbolt 4 ports on the back of the Mac mini, then format the drive for use.
Make sure you choose APFS, unless you also want to use it with Macs running an older version of macOS (in which case go for Mac OS Extended) or with Windows (ExFAT, generally).
(Image credit: Future) Performance gains
(Image credit: Future) Any SSD will be fast enough for most people, particularly if you’re used to an old-school hard drive. However, if you’re going down the external route rather than buying an Apple upgrade, you’ll want your solution to be at least comparable to the internal SSD.
It’s worth noting that the SSDs in Apple’s Mac mini M4s vary in speed depending on size; the 512GB SSD is about 30% faster than the 256GB model, according to discussions on Reddit at least, and the 1TB model is faster still.
I’m only using the 256GB model, of course, and get a speed of around 2,000 MB/s write and 2,800 MB/s read, based on BlackMagicDesign’s Disk Speed Test.
The Samsung 990 Pro plus OWC 1M2 combo, meanwhile, gives me 3,100 for both write and read – so, slightly faster than the internal SSD.
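If you don’t have Blackmagic’s Disk Speed Test to hand, a crude sequential benchmark is only a few lines of Python. This is a rough sketch, not a substitute for a proper tool – the file path is just an example, and OS caching can inflate the read figure:

```python
import os
import tempfile
import time

def throughput_mb_s(path: str, size_mb: int = 128) -> tuple[float, float]:
    """Crude sequential write/read benchmark, returning (write, read) in MB/s."""
    data = os.urandom(1024 * 1024)  # 1MB of incompressible data
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(size_mb):
            f.write(data)
        f.flush()
        os.fsync(f.fileno())  # make sure the bytes actually hit the disk
    write_speed = size_mb / (time.perf_counter() - start)
    start = time.perf_counter()
    with open(path, "rb") as f:
        while f.read(1024 * 1024):  # read may be served from the OS cache
            pass
    read_speed = size_mb / (time.perf_counter() - start)
    os.remove(path)
    return write_speed, read_speed

# Point the path at a file on the external drive you want to test.
w, r = throughput_mb_s(os.path.join(tempfile.gettempdir(), "bench.tmp"))
print(f"write: {w:.0f} MB/s, read: {r:.0f} MB/s")
```

Run it once against the internal drive and once against the enclosure to get a like-for-like comparison on your own machine.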
Nor does it get too hot. The 1M2 does a fantastic job of keeping it cool whilst in use, and while you can feel it heating up, it’s never uncomfortably warm.
(Image credit: Future) In real life, the difference in speed between the internal and external drives is irrelevant; either one can copy a 5GB file in a matter of seconds. But psychologically, it’s great – not only have I saved myself at least £400, but I’ve even improved the performance, too.
The result is that I can treat my external storage almost as if it’s internal. I have folders on it, I have applications running from it, I have lots and lots of music and photos stored on it – and I’d never know it wasn’t sitting inside the Mac mini itself if I didn’t look at it. It’s one of the best upgrades I’ve ever made.
You might also like
All the live updates as they happened
Refresh
To catch up on some of the news from the past week, check out our ITPro Podcast episode on the conference here.
And with that, we’ve finished the developer keynote! You can refer back to the rest of this blog for all the latest and stay tuned on the ITPro site for more coverage from Google Cloud Next 2025.
Within the Kanban Board, Densmore can ask Code Assist to add code for specific features. If another team member has changed code and broken something – in this case, Densmore uses Seroter as a negative example – Code Assist can flag the changes to make a fix.
When a developer notices a bug, they can tag Code Assist directly in their messaging app, or add a comment within their bug tracker.
Densmore shows us the Gemini Code Assist Kanban Board, which includes something Google Cloud calls a ‘backpack’ – which contains all context for code, security policies, formats, and even previous feedback.
Rounding us out, we’re welcoming Scott Densmore, senior director, Engineering, Code Assist at Google Cloud, to demo a sneak peek at Google Cloud’s software engineering agent.
To share the visualization with colleagues, Nelson can press a ‘create data app’ button to quickly generate a link to the interactive forecast.
The agent uses a new foundation model called TimesFM, which has been built specifically for forecasting, to produce a table with product IDs and dates, as well as a chart with sales over time.
Within the Colab notebook, Nelson can ask the Gemini data science agent to generate a forecast based on his data.
Here to explain is Jeff Nelson, developer advocate at Google Cloud. Nelson starts with Colab, where we’ll be shown a demo of Google Cloud’s new data science agent in action.
We’re moving on to learning about data agents, Google Cloud’s tools for easily analyzing data.
Gemini can see and make sense of information that isn’t apparent to the human eye, says Wong, showing a video of her basketball throw as an example. She adds that a team of developers recently produced an AI commentator for sport and that X Games is interested in using AI for judging.
DiBattista notes that Gemini is capable of analyzing multiple frames at once to evaluate motion, rather than just snapshots. He stresses that he built the tool in just one week, with no need to build a custom model or handle complex data sets.
To demonstrate the amateur pitch, we’re shown a clip of Seroter throwing a baseball outside Google HQ. The system grades him as a ‘C’, with breakdowns of his arm, balance, and stride & drive.
Via Gemini API, DiBattista created a system that can analyze video and produce text analysis of the pitch in the video – both for pros and amateurs.
(Image credit: Future)
The winner of the Cloud X MLB (TM) Hackathon was Jake DiBattista, who’s here now to tell us all about his project – measuring pitches using MLB high-speed video.
What does all this look like in practice? Wong and Seroter say MLB is using Gemini to measure its 25 million data points per game. Google Cloud ran a hackathon to see what innovative use cases people could come up with for Gemini in sports.
“We’re striving to meet developers where you are,” says Cabrera. “Your team can build great apps using Gemini as your IDE of choice, or you can use Vertex AI Model Garden to call your model of choice. No matter what you use, we’re excited to see what you come up with.”
Within Model Garden, developers can test out the model’s response to questions like “what capabilities can you offer for designing renovation subjects?” and see how it responds to evaluate which one best suits their purpose.
Cabrera says while Gemini is her favorite model, Model Garden on Vertex AI offers a range of models from Meta, Mistral, and Anthropic among others.
We’re really cooking now, as Cabrera moves over to Gemini Copilot to produce unit tests by entering a prompt in Spanish – which it quickly does.
Cabrera wants to make an agent to help with budgets, powered by Gemini 2.5. Moving over to Cursor, Cabrera adds input validation to the agent.
For this demo, Cabrera is using the Windsurf IDE, which is intended to support devs with ‘vibe coding’.
Debi Cabrera, senior developer advocate at Google Cloud is now onstage to show us how developers can use Gemini in their IDE of choice, and then bring their model of choice to Google Cloud for their apps.
Google Cloud is at pains to stress that it does not require devs to use Gemini – with Vertex AI Model Garden, there’s a wide range of models to choose from.
Seroter says that Google Cloud is helping developers with its new Agent2Agent, which not only connects agents together but helps developers discover new agents to connect with in the first place.
Within the tool, Gemini suggests a fix to the problem and Sukumaran can immediately deploy it without having to affect anyone’s access to the agent.
To fix this issue, Sukumaran shows us Cloud Assist Investigations, a new tool for diagnosing problems in infrastructure and massively cutting down on debugging time.
Within Agentspace, Sukumaran asks for information related to ordering, expecting a relevant sub-agent to provide the right response. But instead, we’re presented with an error message.
Once she’s deployed this agent system, she’ll be able to share it within Agentspace, where she can interact with the agent.
Sukumaran creates a multi-agent system, right here in the keynote. This means creating a ‘root agent’ with a number of sub-agents, which will work together to automate a task.
Abirami Sukumaran, developer advocate at Google Cloud, is here to show us how to build agents within Vertex AI using ADK with Gemini.
We’re now learning about Vertex AI Agent Engine, which has recently been made generally available and helps enterprises deploy agents with enterprise-grade security. We’ll also hear about Agentspace, Google Cloud’s new solution for building no-code agents, or for developers to share agents they’ve built with the rest of their company.
The moment of truth comes – and the agent produces a detailed PDF proposal that Hinkelmann can access right within the prompt window.
(Image credit: Future)
The next step is to select the AI model Hinkelmann wants for the agent. Because ADK is model agnostic, Hinkelmann says she could use Llama 4 or another model – but in this case will use Gemini 2.5.
Performing RAG requires accessing information from outside the agent, which is where model context protocol (MCP) comes in handy, Hinkelmann says.
Next, Hinkelmann adds an ‘analyze building codes’ tool, which allows the agent to use RAG to check a private dataset of local building codes.
Hinkelmann says agents need instructions, tools, and a model. So to start, she uses Gemini in Vertex AI to create a custom instruction: in this case, taking a customer request and creating a PDF proposal.
Here to demo this is Fran Hinkelmann, developer relations engineering manager at Google Cloud.
Wong and Seroter say Vertex AI’s new Agent Development Kit can create an agent that can verify building codes and go deeper into meeting Bailey’s requirements.
Next up, Seroter wants to know what an agent can do.
“An agent is a service that talks to an AI model to perform a goal-based operation using the tools and context it has,” Wong explains.
Wong asks Bailey to go into more detail on the benefits of long context windows.
“This example is some things like photos, images, and a few sketches,” Bailey says. “But with long context, you’re able to send full videos to use for your projects.”
Bailey asks the model to add two globe pendant lights into the image and within seconds, they’ve been added.
In another tab, we’re shown Bailey has used Gemini to generate a prompt for its image generation capabilities and then used this to produce a concept image for the kitchen. It can produce the image, which is photorealistic, in just a few seconds.
Straight away, the model’s ‘thinking’ box shows the model has considered the floor plan (based on a sketched floor plan Bailey provided) and local regulations and building codes.
To start, the pair ask Gemini 2.0 Flash to generate a very detailed plan for remodeling a 1970s-style kitchen. Bailey says the model has a 65,000-token output window, which is great for generating long plans.
The two are going to make an AI app to help remodel Bailey’s kitchen, taking into account all the details and laws around doing that.
Gemini is key here, of course. Here to show us how is Paige Bailey, AI developer experience engineer at Google DeepMind, and Logan Kilpatrick, senior product manager at Google DeepMind.
Wong says today’s keynote is all about how Google Cloud can help developers build software, from start to scaling, and a sneak peek at the future of development in Google Cloud.
Here to tell us more is Stephanie Wong, head of developer skills & community at Google Cloud and Richard Seroter, chief evangelist at Google Cloud.
Finally, Gemini underpins all these innovations with its large context window, multimodality, and advanced reasoning.
Next, Google Cloud is helping developers be as productive as possible via Gemini Code Assist and Gemini Cloud Assist.
Here to welcome us to the developer keynote is Brad Calder VP & GM at Google Cloud. He says Google Cloud is innovating in three key areas. First up, helping companies build agents, which can collaborate to achieve goals on behalf of users.
To count us down for the final 30 seconds, we’re being shown numbers generated by Veo 2, including some truly abstract clips such as a giant 1 blasting off to a planet shaped like a 0.
And we’re off! As with yesterday’s keynote, we’re starting with a sizzle reel – this time all about developers, skills, AI, and production.
We’re now sat in the arena and once again listening to the AI-sampled music of The Meeting Tree onstage, accompanied by abstract visuals generated with Google DeepMind’s video generation model Veo 2.
(Image credit: Future)
There are just 30 minutes to go until the developer keynote. Presented under the subtitle ‘You can just build things’, we’re expecting this session to be all about the ease of deploying AI with Google Cloud – expect to hear lots about Agentspace, automation in Workspace powered by Google Workspace Flows, and Google Cloud’s new infrastructure for training custom AI models.
With the press conference done, all eyes are now on the developer keynote – we’ll be seated and ready to bring you images and updates as they come.
Finally, he adds that Google Cloud has European partnerships with firms such as TIM and Thales, to operate in a supervisory role and provide trust and verification in Europe.
He adds that for customers who are worried about long-term survivability, Google Distributed Cloud runs fully detached with no connection to the internet.
Kurian says that technologically, Google Cloud can prevent this from impacting its customers, because the firm doesn’t have access to its customers’ environments and has no ability to reach their encryption keys.
Now another question on tariffs from Techzine – specifically on the potential risk that American companies could be ordered to stop delivering services to European customers.
In response, Kurian says Agentspace arose from an observation that organizations struggle with information searches, particularly across different apps. He adds that the service already has 100 connectors live and 300 connectors in development so people can adopt it without ripping out and replacing anything.
We’ve just had a question on how easy it will be for companies to adopt Agentspace when one’s enterprise has already invested heavily in other AI ecosystems such as Microsoft or Oracle, from Diginomica.
A question on tariffs, now – which have been a repeated talking point throughout the event. Kurian is asked whether Google Cloud is prepared for their impact and in response says the “tariff discussion is an extremely dynamic one,” and that Google has been through many cycles like this including the 2008 financial crisis and the pandemic.
Kurian also said Google is working hard to identify opportunities for renewable energy to power data centers and looking to using nuclear as a source of power for its sites.
“We have done many things over the years to improve the infrastructure – for example, we introduced water cooling many years ago for our processors,” he says.
Asked a question on how Google Cloud is meeting the increased energy demand from data centers for generative AI, Kurian says the cost of inference has decreased 20 times.
He adds there’s a competitive advantage to adopting AI and some of the changes in the past few months have changed the European attitude to the technology.
In response, Brady says that Google Cloud is helping EMEA customers with security and flexibility, which are very important in the region, particularly when it comes to not being locked into long-term contracts.
Now a question on pressure facing the EMEA region from our sister publication TechRadar Pro.
The first question is on the challenge of AI adoption in certain countries, to which Kurian says Google Cloud is working hard on its sovereign cloud capabilities. He also highlights the importance of it allowing companies to use its global technology infrastructure in meeting security requirements.
Kurian begins by highlighting how hard Google Cloud is working to expand across the globe and how it now operates in 42 regions.
Before the developer keynote later on, we’re getting to hear from Thomas Kurian, CEO at Google Cloud, Tara Brady, president EMEA at Google Cloud, and Eduardo Lopez, president Latin America at Google Cloud in a press conference.
It’s coming up on 8:00 in Las Vegas and we’re back to report on day two of Google Cloud Next 2025. With the developer keynote due to kick off this afternoon, there’s sure to be more detail on all the announcements we’ve heard about so far and more hands-on demos of some of Google Cloud’s newest tools.
If you’ve ever wondered what it’s like on the ground at an event such as Google Cloud Next 2025, this photo gives a good impression. You can see it’s incredibly busy here, with attendees in the thousands entering and exiting each keynote. Google Cloud has a huge range of partners and customers, many of whom will be looking to reaffirm or expand their business relationship to make the most of AI, so the event is thick with meetings, roundtables, and live demos in the expo hall.
“What an amazing time for all of us to experience and work with these technology advances,” Kurian concludes.
“We at Google Cloud are committed to helping each of you in effect by delivering the leading enterprise-ready, AI-optimized platform with the best infrastructure, leading models, tools, and agents. By offering an open multi-cloud platform and building for interoperability so we can speed up time to value from your AI tests, we are honored to be building this new way to cloud with you.”
And with that, the first keynote of the event comes to a close. We’ll keep bringing you all the updates as they happen live from Las Vegas.
Kurian says Google Cloud is working hard on making its innovations easy to adopt in four key ways:
- Better cross-cloud networking.
- Hands-on work with ISVs to improve Google Cloud integration.
- Working with service partners on agent rollouts.
- Offering more sovereign cloud compatibility via Google Cloud.
We’re rounding out now and Kurian is back onstage to bring the keynote to a close.
He acknowledges Google’s recent acquisition of Wiz as evidence of how seriously it takes cybersecurity.
In a demo, Payal Chakravarty shows us how Google Unified Security can detect vulnerabilities in code and extensions used within an enterprise’s environment.
The agentic, autonomous features of the new platform can automatically detect when an AI extension has put sensitive data at risk and flag it to a human in the company’s security team. In addition to providing response advice, it can proactively quarantine the suspicious extension.
Continuing at pace, we’re now welcoming Sandra Joyce, VP, Google Threat Intelligence, to hear about the security announcements Google Cloud is making today.
Chief among these announcements is the new Google Unified Security, the new converged security platform for better visibility and faster threat detection.
Read our detailed write-up on Google Unified Security here.
We’re moving onto Gemini Code Assist, Google Cloud’s AI pair programmer, which Calder says is already being used by a wide range of enterprises.
Google Cloud is today announcing Gemini Code Assist agents, which can help developers to quickly complete tasks such as the generation of software and documentation, as well as AI testing and code migration.
Via the new Gemini Code Assist Kanban board, developers can interact with agents to get insight into why they’re making the decisions they are and see which tasks they have yet to complete.
Calder says that Google Cloud is announcing new agents for every role in the data team.
Data engineering agents, embedded within BigQuery pipelines, can perform data preparation and automate metadata generation.
Meanwhile, data science agents can intelligently select models, flag data anomalies, and clean data to reduce the time teams have to spend manually validating all data.
Finally, Looker conversational analytics allows users to explore data using natural inputs. This will be made available via a new conversational analytics API, now in preview, so data teams can embed this easy question and answer layer into their existing applications.
Imagen 3 and Veo 2 models are coming to Adobe Express, we’re told, as the firm pushes forward on AI-generated content.
Moving onto data agents, we’re now welcoming Brad Calder, VP & GM, Google Cloud, onstage.
He tees up a video showing that Mattel is using Google Cloud’s AI to reduce the need for its teams to manually analyze customer sentiment.
“We can instantly identify key issues and trends improving growth, efficiency, and innovation,” says Ynon Kreiz, CEO at Mattel.
“For example, we improved the ride mechanism in the Barbie Dreamhouse elevator.”
We’re back to creative agents – it seems creative output is a major focus for Google Cloud at this year’s event. We’re being told about Wizard of Oz at Sphere again – find the details for that at the start of this live blog.
O’Malley is back onstage to discuss purpose-built agents.
For example, Mercedes Benz is using AI for conversational search and route mapping in a new line of its cars.
In a demo by Patrick Marlow, product manager for Applied AI at Google Cloud, we’re shown how the suite can be used to get instant answers and assistance at a garden store.
Marlow is able to hold petunias he has purchased up to a camera and receive real-time, voice output assistance from the agent. For example, he asks if he’s buying the right fertilizer for the plants and the agent is able to recommend an alternative fertilizer and add it to his cart.
In cases where human assistance is required – such as Marlow asking for a 50% discount on his purchase – the agent escalates to a manager in Salesforce.
O’Malley says Google Cloud’s Customer Engagement Suite is already helping organizations meet customer knowledge demand.
She gives the example of Verizon, which adopted the Customer Engagement Suite. The firm uses the offering to provide its 28,000 customer assistants with up-to-date data and move customers to resolution even quicker.
O’Malley announces new features for Customer Engagement Suite, including human-like voices, integration with CRM systems and popular communications platforms, and the ability to comprehend customer emotions.
Customers are using all kinds of agents to unlock new value in their enterprise environment – but what are these different kinds?
Kurian welcomes Lisa O’Malley, leader of Product Management, Cloud AI at Google, to explain more.
O’Malley says we’ll start with customer agents, showing us a video of how Reddit is using Gemini for Reddit Answers, a new conversational layer on the message board website.
Next, we’re told about how Vertex AI Search is helping healthcare and retail organizations to deliver more relevant results to their customers and boost their conversion rates.
“Agentspace is the only hyperscaler platform on the market that can connect third-party data and tools, and offers interoperability with third-party agent models,” says Weiss.
Here to show us more is Gabe Weiss, Developer Advocate Manager, Google Cloud.
Weiss shows us how he can simply identify potential issues with his business’ customers within Agentspace. Based on this, he can ask for an agent to identify client opportunities in the future. He can then iterate on this prompt by asking for an audio summary of its findings, to be delivered to him every morning – creating an in-depth, analytical agent with just a few sentences of instructions.
Finally, he can ask for the agent to write an email within Agentspace, which once approved is automatically sent via Outlook without him having to open the app himself.
It’s time to talk about agents – sound the klaxon. These advanced AI assistants work to automate tasks autonomously, as Kurian explains.
To hear more about the potential of agents, we’re shown a clip of Marc Benioff, CEO at Salesforce.
“Right now, we’re really at the start of the biggest shift any of us have ever seen in our careers,” Benioff says.
“That’s why we’re so excited about Agentforce and our expanded partnership with Google. I just love Gemini, I use it every single day whether it’s Gemini inside Agentforce, whether it’s all the integrations between Google and Salesforce.”
Starting today, Kurian announces, customers can scale agents across their environment, deploy ready-made agents, and connect agents together.
This will largely be driven by the Agent Development Kit, a new open source framework for widespread systems of agents interacting with one another.
Agent2Agent, a newly announced protocol, will allow disparate agents to communicate across enterprise ecosystems, regardless of which vendor built them or which framework they are built on.
“This protocol is supported by many leading partners who share a vision to allow agents to work across the agent ecosystem,” Kurian says.
Already, more than 50 partners including Box, Deloitte, Salesforce, and UiPath are working with Google Cloud on the protocol.
Within Google Agentspace, enterprises can have Google-made, third-party, and custom-built agents easily communicate with one another.
Vertex AI provides customers with all of Google’s internally-made models as well as open models such as Meta’s Llama 4.
“With Vertex AI, you can be sure your model has access to the right information at the right time,” he says.
“You can connect any data source or any vector database on any cloud, and announcing today you can build agents directly on your existing NetApp storage without requiring any duplication.”
Kurian adds that Google Cloud has the most comprehensive approach to grounding on the market.
Promising Kurian will crowd-surf at tomorrow’s concert, he welcomes the CEO back onstage.
Kurian moves quickly onto Vertex AI, with a look at how it helps customers.
“Tens of thousands of companies are building with Gemini,” he says, giving examples such as Nokia building a tool to speed up application code development, Wayfair updating product attributes five times faster, and Seattle Children’s Hospital making thousands of clinical guidelines searchable by pediatricians.
Once videos have been generated, the user can fine-tune them with new in-painting controls.
In his live demo, Bardoliwalla paints around an unwanted stagehand in a close-up clip of a guitar to seamlessly remove him from the final result.
Next, Bardoliwalla uses Lyria to generate music for the trailer. This can be combined in the platform to create quick clips for advertising and more.
Here to show us all how this works in practice is Nenshad Bardoliwalla, Director, Product Management, Vertex AI, Google Cloud.
We’re told his mission is to create a trailer for the party that will close out the event – complete with a gag about Kurian wanting to sing Chappell Roan but not getting permission.
Bardoliwalla opens Vertex Media Studio, in which he can ask for a drone shot of the Vegas skyline and choose specific settings such as frame rate and video length.
Onto some more of that creative content we had teed up with the DJ (you see, we said it might come up again).
Kurian highlights Imagen 3, the firm’s image generation model, as well as Veo 2, its video generation model. The latter is now capable of adding new elements into filmed video and producing videos that mimic specific lens types and camera movements.
Finally, we’re also told that Lyria is now available on Google Cloud. The model can turn text prompts into short music outputs – the first tool of its kind in the cloud, Kurian says.
Kurian is back onstage, reminiscing on the large progress Google Cloud made last year with Gemini’s multimodality and large, two million token context window.
Gemini is now included in all Google Workspace subscriptions and Kurian tees up a video to show us how businesses are making good use of the service already. In the video, customers say that Gemini is already cutting down their toil and opening new time for valuable work.
Google Cloud’s close relationship with Nvidia runs throughout its hardware announcements today. To hear more, we’re being shown a video of Jensen Huang.
Huang describes the Google Distributed Cloud as “utterly gigantic”.
“Google Distributed Cloud with Gemini and Nvidia are going to bring state-of-the-art AI to the world’s regulated industries and countries,” he says.
“Now, if you can’t come to the cloud, Google Cloud will bring AI to you.”
Vahdat runs through the core infrastructure announcements from today, including Ironwood, AI Hypercomputer, and data storage. As a reminder, you can read about these announcements in detail here.
It’s not all about running workloads in the cloud, Vahdat says. Google Cloud is also announcing Gemini on Google Distributed Cloud, which allows firms to run Gemini locally – including in air-gapped environments.
This opens the door to government organizations using AI in secret and top secret environments.
With that, Pichai is off and Kurian is back onstage.
He explains how Google Cloud is uniquely positioned to support customers, with a massive range of enterprise tools to build AI agents and an open multi-cloud platform for connecting AI to one’s existing databases.
“Google Cloud offers an enterprise-ready, AI platform built for interoperability,” he says.
“It enables you to adopt AI deeply while addressing the evolving concerns around sovereignty, security, privacy, and regulatory requirements.”
Finally, Google Cloud’s infrastructure is core to its advantages for customers. To help illustrate this point, Kurian welcomes Amin Vahdat, VP, ML, Systems and Cloud AI, Google Cloud, to the stage.
It’s always good to hear directly from a customer about how AI is helping their business.
We’ve just been shown a reel from McDonald’s, in which Chris Kempczinski, CEO at McDonald’s, explained how AI can be used to predict when machines will need maintenance in McDonald’s restaurants or provide workers with quick answers to their questions.
The announcements are coming fast here in the arena. Pichai rattles off stats about Gemini 2.5, the firm’s new thinking model which is currently the top-ranked chatbot in the world per the Chatbot Arena Leaderboard.
He also highlights Gemini 2.5 Flash, Google Cloud’s low-cost, low-latency model, which allows organizations to balance reasoning with budget for every output.
Pichai draws a direct line between Ironwood and Google’s quantum chip Willow, which it announced last year.
Both are used as examples of the boundaries Google is pushing within its hardware teams, as well as in divisions such as Google DeepMind to crack problems such as weather prediction.
Next, Pichai announces Google Cloud’s 7th-generation TPU, Ironwood, which brings sizeable performance and efficiency improvements over its predecessors.
A few key stats about Ironwood: it’s capable of 42.5 exaflops of performance per pod, 24 times that of the world’s fastest supercomputer, El Capitan.
Read more in our full coverage of Ironwood here.
First off, Pichai says that Google will make a $75 billion capital investment in 2025, directed toward servers and data centers.
To further support its AI-hungry customers, Pichai announces that Google Cloud will make its global network available to Cloud customers via Cloud WAN, a new managed solution for connecting enterprises across a wide area network.
“This builds on a legacy of opening up our technical infrastructure for others to use,” Pichai says.
To give the crowd a taste of what AI can do, Kurian welcomes Sundar Pichai, CEO at Google, to the stage.
Pichai opens by paying tribute to The Wizard of Oz at Sphere and then moves on to make some announcements.
Now the keynote proper begins, with Thomas Kurian, CEO at Google Cloud, taking to the stage to kick us off.
“Google’s AI momentum is exciting – we’re seeing more than four million developers using Gemini and a 20 times increase in Vertex AI usage,” says Kurian, noting that the firm processes more than 2 billion AI requests per month in Workspace, driven by businesses.
Today’s sizzle reel is peppered with AI-generated video, in a show of sophistication by Google Cloud.
And we’re off! To begin with, as is normal for keynotes, we’re being shown a sizzle reel of Google Cloud’s impact on the industry and hyping up the potential for AI in the enterprise.
Just one minute left until the keynote begins in earnest. Stay tuned as we bring it to you live.
The music we’re hearing will apparently be played throughout the entire conference – musical group The Meeting Tree have scored an entire soundtrack for the event, with the theme of AI.
Paired with Google Cloud’s work on The Wizard of Oz (details lower down in the live blog), it’s clear that Google Cloud is eager to show what it can offer to industries that have been more reluctant to adopt AI to date.
There’s a clear need to acknowledge fears that AI could damage the livelihoods of artists. A constant refrain at yesterday’s event at the Sphere was that ideally, AI should be used to empower creatives rather than replace them. In the event yesterday, Google Cloud suggested that new roles could appear in the creative sector as a result of AI breakthroughs – it will be interesting to see if this is expanded upon at all in the keynote.
We’re now learning a bit more about how that music has been made for the event, via a behind-the-scenes video.
Human musicians were first recorded and then their samples were fed into Music AI Sandbox, which could produce audio outputs that the producers can edit, alter, and use as the basis for new noises.
As you can see, there’s a huge amount of foot traffic this morning as we pile into the Michelob Ultra Arena at Mandalay Bay. As is usual for tech conferences, we’re being serenaded by a live DJ inside the arena itself – more unusual are the visuals for this morning’s music, which have been generated entirely with Google DeepMind’s video model Veo 2.
As a reminder, the theme for this morning’s keynote is ‘The new way to cloud’, with a focus on interoperability, unification, and more intelligent automation through Gemini AI.
Last night, we were given a glimpse into what to expect this week at the Sphere, with preview speeches from Google CEO Sundar Pichai and Google Cloud chief executive Thomas Kurian onstage. You can read all about the goings on from the evening further down the live blog.
We’ve already had a range of big announcements ahead of the opening keynote, including the launch of Google’s new ‘Ironwood’ AI accelerator chip and the launch of Google Unified Security, which aims to drive cloud security capabilities for enterprises and demystify cyber complexity in the cloud.
You can read all about these announcements below:
With that, Kurian officially kicked off Google Cloud Next, with confetti cannons heralding the start of the event.
“If tonight’s event sets the tone for what we plan to bring you for the next three days, I think it’s safe to say it’s going to be an incredible week,” he said.
Kurian will be back onstage bright and early tomorrow morning at the opening keynote ‘The new way to cloud’. We’ll be bringing you all the updates from that and throughout the conference, both here and across ITPro so stay right here for all the very latest.
In the meantime, why not read my pre-conference analysis of what Google Cloud can do to set itself apart from competitors at this event and the key story it needs to tell.
Next, it was time to hear from Thomas Kurian, CEO at Google Cloud, and James Dolan, CEO at Sphere Entertainment, on the challenges of bringing The Wizard of Oz to the Sphere.
“I’ve been running companies for 40 years and this is one of the first times I ever felt that I wasn’t a customer – I was a partner,” said Dolan, praising the hands-on collaboration of the Google Cloud, Google DeepMind, and Magnopus teams.
Kurian noted that a total of twenty different models were needed to bring the Wizard of Oz at Sphere to life, with engineers leveraging Google’s extensive TPU architecture and inventing new techniques to expand and recreate the original film frames. This was an enormous technical challenge, not least because the scale and resolution of the screen makes it hard to hide any mistakes in the final image.
“Most importantly, the camera and this amazing theater here at the Sphere is something that doesn’t exist anywhere else in the world,” he said. “So it’s almost like you were told to do AI and your first project was your PhD thesis.”
After Pichai’s speech, we were treated to an extended video showing the behind the scenes of the project. It included detail on how difficult it is to extend existing video footage to fit the Sphere’s unique aspect ratio and resolution, as well as the complexity of generating entirely new footage of characters when they would otherwise have been offscreen.
Engineers had to work iteratively and study the original plans for the film to recreate the characters without making them generic.
The final project includes special effects such as wind blown onto the audience and haptic rumbling under the seats – of which we were given a very interactive example.
After entering the Sphere’s cavernous arena, we were treated to an opening speech by Sundar Pichai, CEO at Google. He paid tribute to the efforts of all the engineers and creatives who worked on the project, which required intense research and overcoming numerous technological hurdles. Ultimately, it was created using Google DeepMind’s video generation model Veo 2.
“We have seen significant improvements: super low latency, incredible video quality, multimodal output, so many things we couldn’t have done with AI even 12 months ago,” Pichai said.
“Beyond the technical capability, it took a whole lot of imagination, creativity, and collaboration. Our goal: giving Dorothy, Toto, and all of these iconic characters new life on a 16k screen in super resolution.”
Good evening from Las Vegas, where choice attendees from the event have just been treated to a sneak peek of a brand new attraction opening at the Sphere in August – The Wizard of Oz at Sphere.
Made in partnership with Warner Bros. Discovery, Google Cloud, and Magnopus, the finished product will run as a multi-sensory, 16k recreation of the original 1939 movie for the Sphere’s 160,000-square-foot screen using Google DeepMind’s video generation models.
Best smart garage door controllers of 2025
What was the first smart home product? One could argue it was the electric garage door opener. The first such openers with radio-based remote controls came to market way back in 1931, predating the first TV remote by 20 years. Comfort and convenience were the motivation behind all three technological advances. In the case of the garage door, people were looking for a way to get out of their cars and into their homes while avoiding the weather. Given that history, it’s a wonder that it took so long to bring IoT technology to the biggest door in the house.
Well, the good news is that it’s here now. And the better news is that most existing garage door openers can be integrated with the rest of your smart home, greatly reducing the cost of that convenience. What’s more, these products are rapidly improving in both simplicity and capability. Buy one and you’ll not only be able to open and shut the door from anywhere—letting in guests, relatives, or delivery people—you’ll also know whether the door is open or closed in real time.
Why you should trust us
TechHive’s writers and editors have been reviewing smart home products for decades, and they draw on their deep and wide experience to evaluate every new product that comes to market. We install the products we review in our own homes to gain real-world experience as we evaluate how well they can be integrated into existing systems as well as how they perform on their own.
The best smart garage door opener controllers
Best smart garage door controller — Chamberlain myQ Smart Garage Hub (model MYQ-G0401)
Pros
- Price remains unbeatable
- Attractive styling fits in with the typical garage décor
- Plenty of third-party compatibility
Cons
- Occasional trouble with disconnects
- Still no support for a third garage door (you must buy a second controller)
Price When Reviewed:
$40
It’s still a no-brainer: Everything we said about the Chamberlain model MYQ-G0301 myQ Smart Garage Door Hub is now true of the Chamberlain model MYQ-G0401: It’s the easiest smart garage door controller to set up, the most functional controller on the market, and the least expensive—by a wide margin. That said, there’s little reason to upgrade from the previous model. The myQ app is simple to configure and use, and the system supports a small but growing number of smart home ecosystems, including HomeKit. While it isn’t compatible with every opener—check online before you buy—it’s definitively the one to get.
Read our full Chamberlain myQ Smart Garage Hub (model MYQ-G0401) review.
Best smart garage door controller, runner-up — Meross Smart Wi-Fi Garage Door Opener (model MSG100)
Pros
- Very inexpensive
- Solid performance during our testing
- Lots of extra features to ensure you don’t leave the door open
Cons
- Wired door sensor adds complexity
- Virtually no handholding during installation
Price When Reviewed:
$40
Meross smart home products have left us with mixed emotions. They’re all inexpensive, but value is defined by more than a price tag. The Meross Smart Wi-Fi Garage Door Opener is on the better end of that scale. If Chamberlain’s product doesn’t fit your needs, this one is worth your consideration. (Note: This device is not HomeKit compatible, but Meross offers a separate model that is. It wasn’t available at press time, however, for us to evaluate.)
Read our full Meross Smart Wi-Fi Garage Door Opener (model MSG100) review.
Best security camera for garages — LiftMaster myQ Smart Garage Camera
Pros
- Easy to set up and simple to use
- Magnetic base makes it tailor-made for garage mounting
- Good overall video quality
- Integrates well with other myQ gear
Cons
- No recording features unless you pay for a subscription
- Electrical outlet management can be tricky in a garage setting
Price When Reviewed:
$79.99 (as of June 20, 2024); $149.99 when first reviewed
We criticized the high price of this effective special-purpose camera when we first reviewed it several years ago, but times have changed and its MSRP–and especially its street price–have dropped substantially. You don’t need to have a myQ smart garage door controller installed (and the myQ camera doesn’t interact with the myQ controller at all), but if you do, your camera feed will appear directly above your garage door controls within the app. It’s a handy way to get one-stop access to everything that’s going on in the least inviting room of your house. Its magnetic base makes it easy to mount on the bottom of any garage door opener with a metal enclosure. The camera also works with the Key by Amazon system and app, which in this case would empower Amazon delivery drivers to open your garage and place your Amazon packages securely inside.
Read our full LiftMaster myQ Smart Garage Camera review.
How to pick the right smart garage door controller
While garage door openers come in a vast range of brands, styles, and capabilities, the good news is that you’ll likely be able to find a smart controller that works with your system without much trouble.
As I mentioned above, the Chamberlain myQ is my top pick for a variety of reasons, but because it exclusively relies on wireless technology, it isn’t compatible with every system on the market. To start, visit myQ’s compatibility tool and check whether your existing opener is supported. If it is, and you don’t care that it’s not compatible with Alexa or Samsung SmartThings, your work is done: Get the myQ. If it isn’t, you can either get an all-new opener as Chamberlain suggests (although that won’t resolve the Alexa and SmartThings issues), or delve into the world of wired smart garage door controllers.
The Nexx NXG-200 must be attached to your garage door opener via wires, and space can be tight depending on your ceiling height.
Actually, upgrading your old, incompatible door opener is not a terrible idea, and new models are more secure and less expensive than you might think. Considering that a wired garage door controller can run you about $100, it’s worth thinking hard about whether you want to pour more money into an outdated system that might be close to failure, or just upgrade it from the start. (Many new openers have smart technology built in, obviating the need for an add-on controller.)
But if you do have an opener that’s incompatible with our top pick, and you want to keep it around, you’ll need a wired controller like the Nexx Garage NXG-200 (be sure to read our April 5 story about a security vulnerability associated with Nexx garage door controllers) or the Garadget Wired controllers. These must be connected to the opener via a pair of wires, so you’ll need to be comfortable with some minor electrical work in order to install them. Like myQ, Nexx offers an online compatibility tool, but here you’re likely to find that Nexx is either compatible straight out of the box, or compatible only with an additional adapter. In other words, wired controllers are generally compatible with everything, or, at least, I haven’t found any openers yet that aren’t compatible with them.
The Garadget fires a laser at the door once a second to determine whether it is open or closed. As with the product from Nexx, the Garadget must be hardwired to your opener.
The catch involves the adapter. Generally speaking, if you have an older garage door opener, Nexx and Garadget will work with it straight out of the box. If you have a newer opener, you’ll need their adapter as well. This is because newer openers often have a more complex encryption system built in, and a standard push-button remote—which is what wired smart controllers emulate—won’t work with them. The solution is to place a button that is compatible with this encryption in between the controller and the opener: The controller tells the button to activate, which in turn tells the opener to open or close. It’s a little wonky, but in my testing, these setups work just as well as the wireless alternative.
The problem is that it’s just a lot more expensive to do it this way. Purchasing a Nexx and an adapter will run you $105 at press time, and a Garadget plus adapter costs $98. Compare that to the less than $40 you’ll spend on the myQ and there’s really no choice.
Again, if myQ isn’t compatible, either Nexx or Garadget will make for an acceptable alternative, provided you’re willing to spend a little extra to get the job done. We’ll review new products in this space as they come to market and will update our top pick as warranted.