Tag: renewable energy innovations

  • Chilkey ND75 LP Review: Impressive performance for $100




    There aren’t a ton of low-profile mechanical keyboards on the market — after all, the best mechanical keyboards are about trying to achieve an amazing typing experience, and low-profile keyboards tend to be about compromising said experience for something slim, lightweight, and travel-friendly. But not everyone wants to travel with a paper-thin Apple Magic Keyboard, so it’s always nice to see a well-built low-profile board that delivers a fantastic typing experience — and it’s even nicer to see one with a sub-$100 price tag.

    Chilkey’s ND75 LP is the brand’s popular ND75 keyboard in low-profile form, and it comes with all the bells and whistles: a full-aluminum body, double-shot PBT keycaps, a hot-swappable PCB, and tri-mode wireless connectivity. It even has a little LCD screen that shows you the time, battery life, and various settings like system and Caps Lock (and can, of course, be configured to display a picture or gif of your choosing — because that’s important). The ND75 LP is a little heavy to be a travel-friendly low-profile keyboard, but it’s nice to have the option of traveling with something that prioritizes typing feel and sound over portability.


  • AI has grown beyond human knowledge, says Google’s DeepMind unit




    The world of artificial intelligence (AI) has recently been preoccupied with advancing generative AI beyond simple tests that AI models easily pass. The famed Turing Test has been “beaten” in some sense, and controversy rages over whether the newest models are being built to game the benchmark tests that measure performance.

    The problem, say scholars at Google’s DeepMind unit, is not the tests themselves but the limited way AI models are developed. The data used to train AI is too restricted and static, and will never propel AI to new and better abilities. 

    In a paper posted by DeepMind last week, part of a forthcoming book by MIT Press, researchers propose that AI must be allowed to have “experiences” of a sort, interacting with the world to formulate goals based on signals from the environment.


    “Incredible new capabilities will arise once the full potential of experiential learning is harnessed,” write DeepMind scholars David Silver and Richard Sutton in the paper, Welcome to the Era of Experience.

    The two scholars are legends in the field. Silver most famously led the research that resulted in AlphaZero, DeepMind’s AI model that beat humans in games of Chess and Go. Sutton is one of two Turing Award-winning developers of an AI approach called reinforcement learning that Silver and his team used to create AlphaZero. 

    The approach the two scholars advocate builds upon reinforcement learning and the lessons of AlphaZero. It’s called “streams” and is meant to remedy the shortcomings of today’s large language models (LLMs), which are developed solely to answer individual human questions.

    [Image: uses of reinforcement learning (Google DeepMind)]

    Silver and Sutton suggest that shortly after AlphaZero and its predecessor, AlphaGo, burst on the scene, generative AI tools, such as ChatGPT, took the stage and “discarded” reinforcement learning. That move had benefits and drawbacks. 


    Gen AI was an important advance because AlphaZero’s use of reinforcement learning was restricted to limited applications. The technology couldn’t go beyond “full information” games, such as Chess, where all the rules are known. 

    Gen AI models, on the other hand, can handle spontaneous input from humans never before encountered, without explicit rules about how things are supposed to turn out. 

    However, discarding reinforcement learning meant “something was lost in this transition: an agent’s ability to self-discover its own knowledge,” they write.

    Instead, they observe that LLMs “[rely] on human prejudgment,” or what the human wants at the prompt stage. That approach is too limited. They suggest that human judgment “imposes an impenetrable ceiling on the agent’s performance: the agent cannot discover better strategies underappreciated by the human rater.”

    Not only is human judgment an impediment, but the short, clipped nature of prompt interactions never allows the AI model to advance beyond question and answer. 

    “In the era of human data, language-based AI has largely focused on short interaction episodes: e.g., a user asks a question and (perhaps after a few thinking steps or tool-use actions) the agent responds,” the researchers write.

    “The agent aims exclusively for outcomes within the current episode, such as directly answering a user’s question.” 

    There’s no memory, there’s no continuity between snippets of interaction in prompting. “Typically, little or no information carries over from one episode to the next, precluding any adaptation over time,” write Silver and Sutton. 


    However, in their proposed Era of Experience, “Agents will inhabit streams of experience, rather than short snippets of interaction.”

    Silver and Sutton draw an analogy between streams and humans learning over a lifetime of accumulated experience, and how they act based on long-range goals, not just the immediate task.

    “Powerful agents should have their own stream of experience that progresses, like humans, over a long time-scale,” they write.

    Silver and Sutton argue that “today’s technology” is enough to start building streams. In fact, the initial steps along the way can be seen in developments such as web-browsing AI agents, including OpenAI’s Deep Research. 

    “Recently, a new wave of prototype agents have started to interact with computers in an even more general manner, by using the same interface that humans use to operate a computer,” they write.

    The browser agent marks “a transition from exclusively human-privileged communication, to much more autonomous interactions where the agent is able to act independently in the world.”


    As AI agents move beyond just web browsing, they need a way to interact and learn from the world, Silver and Sutton suggest. 

    They propose that the AI agents in streams will learn via the same reinforcement learning principle as AlphaZero. The machine is given a model of the world in which it interacts, akin to a chessboard, and a set of rules. 

    As the AI agent explores and takes actions, it receives feedback as “rewards”. These rewards train the AI model on what is more or less valuable among possible actions in a given circumstance.
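
    In code, that reward-driven feedback loop is the classic reinforcement-learning update. Here is a minimal tabular Q-learning sketch of the idea; the states, actions, and constants are hypothetical placeholders, and it illustrates the general principle rather than DeepMind’s actual method (AlphaZero combines RL with deep networks and tree search):

    ```python
    import random
    from collections import defaultdict

    # Minimal tabular Q-learning sketch (illustrative only, not DeepMind's code).
    ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1  # learning rate, discount, exploration
    ACTIONS = ["left", "right"]            # hypothetical action set
    q_table = defaultdict(float)           # (state, action) -> estimated value

    def choose_action(state):
        # Explore occasionally; otherwise exploit the best-known action.
        if random.random() < EPSILON:
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: q_table[(state, a)])

    def update(state, action, reward, next_state):
        # Nudge the value estimate toward reward plus discounted future value.
        best_next = max(q_table[(next_state, a)] for a in ACTIONS)
        q_table[(state, action)] += ALPHA * (reward + GAMMA * best_next - q_table[(state, action)])
    ```

    Each call to update() shifts the agent’s estimate of an action’s value toward what it actually experienced, the same “more or less valuable” bookkeeping described above.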

    The world is full of various “signals” providing those rewards, if the agent is allowed to look for them, Silver and Sutton suggest.

    “Where do rewards come from, if not from human data? Once agents become connected to the world through rich action and observation spaces, there will be no shortage of grounded signals to provide a basis for reward. In fact, the world abounds with quantities such as cost, error rates, hunger, productivity, health metrics, climate metrics, profit, sales, exam results, success, visits, yields, stocks, likes, income, pleasure/pain, economic indicators, accuracy, power, distance, speed, efficiency, or energy consumption. In addition, there are innumerable additional signals arising from the occurrence of specific events, or from features derived from raw sequences of observations and actions.”

    To start the AI agent from a foundation, AI developers might use a “world model” simulation. The world model lets an AI model make predictions, test those predictions in the real world, and then use the reward signals to make the model more realistic. 

    “As the agent continues to interact with the world throughout its stream of experience, its dynamics model is continually updated to correct any errors in its predictions,” they write.
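
    A toy version of that predict-and-correct loop makes the idea concrete. The sketch below uses a simple linear model and a stand-in environment function; both are invented for illustration and are not what the authors propose:

    ```python
    import numpy as np

    # Toy world model: predict the next observation, then correct the model
    # using the prediction error. Everything here is a hypothetical stand-in.
    rng = np.random.default_rng(0)
    W = rng.normal(size=(4, 4)) * 0.1  # model parameters (hypothetical)
    LEARNING_RATE = 0.01

    def predict(obs):
        return W @ obs  # the model's guess at the next observation

    def world_step(obs):
        # Stand-in for the real world's unknown dynamics.
        return np.tanh(obs) + rng.normal(scale=0.01, size=obs.shape)

    obs = rng.normal(size=4)
    for _ in range(1000):
        guess = predict(obs)
        actual = world_step(obs)
        error = actual - guess                     # prediction error
        W += LEARNING_RATE * np.outer(error, obs)  # nudge model toward reality
        obs = actual
    ```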


    Silver and Sutton still expect humans to have a role in defining goals, for which the signals and rewards serve to steer the agent. For example, a user might specify a broad goal such as ‘improve my fitness’, and the reward function might return a function of the user’s heart rate, sleep duration, and steps taken. Or the user might specify a goal of ‘help me learn Spanish’, and the reward function could return the user’s Spanish exam results.
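
    As a concrete illustration, such a goal-conditioned reward could be a simple weighted combination of grounded signals. The metric names, targets, and weights below are invented for the sake of the example; the paper only sketches the idea at this level:

    ```python
    # Hypothetical reward function for the broad goal "improve my fitness".
    # Metrics, targets, and weights are invented for illustration.
    def fitness_reward(resting_heart_rate: float, sleep_hours: float, steps: int) -> float:
        hr_score = max(0.0, (80.0 - resting_heart_rate) / 20.0)  # lower RHR is better
        sleep_score = min(sleep_hours / 8.0, 1.0)                # target ~8 hours
        step_score = min(steps / 10_000, 1.0)                    # target ~10k steps
        return 0.4 * hr_score + 0.3 * sleep_score + 0.3 * step_score

    # A day with resting heart rate 65, 7.5h of sleep and 9,000 steps:
    print(fitness_reward(65, 7.5, 9_000))  # ~0.85
    ```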

    The human feedback becomes “the top-level goal” that all else serves.

    The researchers write that AI agents with those long-range capabilities would be better as AI assistants. They could track a person’s sleep and diet over months or years, providing health advice not limited to recent trends. Such agents could also be educational assistants tracking students over a long timeframe.

    “A science agent could pursue ambitious goals, such as discovering a new material or reducing carbon dioxide,” they offer. “Such an agent could analyse real-world observations over an extended period, developing and running simulations, and suggesting real-world experiments or interventions.”


    The researchers suggest that “thinking” or “reasoning” AI models, such as Gemini, DeepSeek’s R1, and OpenAI’s o1, may be surpassed by experience agents. The problem with reasoning agents is that they “imitate” human language when they produce verbose output about steps to an answer, and human thought can be limited by its embedded assumptions.

    “For example, if an agent had been trained to reason using human thoughts and expert answers from 5,000 years ago, it may have reasoned about a physical problem in terms of animism,” they offer. “1,000 years ago, it may have reasoned in theistic terms; 300 years ago, it may have reasoned in terms of Newtonian mechanics; and 50 years ago, in terms of quantum mechanics.”

    The researchers write that such agents “will unlock unprecedented capabilities,” leading to “a future profoundly different from anything we have seen before.” 

    However, they suggest there are also many, many risks. These risks are not just focused on AI agents making human labor obsolete, although they note that job loss is a risk. Agents that “can autonomously interact with the world over extended periods of time to achieve long-term goals,” they write, raise the prospect of humans having fewer opportunities to “intervene and mediate the agent’s actions.” 

    On the positive side, they suggest, an agent that can adapt, as opposed to today’s fixed AI models, “could recognise when its behaviour is triggering human concern, dissatisfaction, or distress, and adaptively modify its behaviour to avoid these negative consequences.”


    Leaving aside the details, Silver and Sutton are confident the streams experience will generate so much more information about the world that it will dwarf all the Wikipedia and Reddit data used to train today’s AI. Stream-based agents may even move past human intelligence, they suggest, alluding to the arrival of artificial general intelligence, or super-intelligence.

    “Experiential data will eclipse the scale and quality of human-generated data,” the researchers write. “This paradigm shift, accompanied by algorithmic advancements in RL [reinforcement learning], will unlock in many domains new capabilities that surpass those possessed by any human.”

    Silver also explored the subject in a DeepMind podcast this month.




  • Panasonic S1R II review: An excellent hybrid camera that’s cheaper than rivals



    With the A1, Sony was the first to introduce a high-resolution hybrid camera that was equally adept at stills and video — but boy was it expensive. Nikon and Canon followed that template with the Z8 and R5 II models that offered similar capabilities for less money, but those were still well north of $4,000.

    Enter the S1R II. It’s Panasonic’s first camera that can not only shoot up to 8K video at the company’s usual high standards, but also capture 44-megapixel (MP) photos in rapid bursts. And unlike its rivals, the new model is available at a more reasonable $3,300 — half the price of Sony’s A1 II. At the same time, it’s a massive upgrade over the original S1R.

    The main catch is the lack of a high-speed stacked sensor found in the other models, which can cause some skewing in both images and video. As I discovered, though, that tradeoff is well worth it for the lower price and picture quality that matches its competition. All of that makes the S1R II Panasonic’s best camera yet and a very tempting option in the high-resolution mirrorless category.

    The S1R II is similar to other recent Panasonic models like the GH7 in terms of the design and control layout. It’s much lighter than the original S1R at 1.75 pounds compared to 2.24 pounds, so it’s less tiresome to carry around all day. As for handling, the massive grip has a ridge where your fingertips sit, making it nearly impossible to drop. The rubberized exterior is easy on the hands, though not quite as nice as the R5 II’s softer material.

    I’ve always liked Panasonic’s controls and in that regard the S1R II may be the company’s best model yet. Along with a joystick and dials on the top front, top back and rear, it has lockable mode and burst shooting dials on top. You also get a dedicated button for photos, video and slow and quick (S&Q) modes, each with separate settings. There’s a dedicated autofocus switch, video record buttons both on top and front, a tally light and multiple programmable buttons.

    The menu system is equally good, with logical color-coded menus and submenus. You can also rapidly find your most-used functions in the quick menu. All of that allowed me to shoot photos and video without fumbling for settings. You can also fully program buttons, dials and the quick menu to your own preferences.

    The Panasonic S1R II's versatile tilting and folding display
    Steve Dent for Engadget

    The rear display is great for content creators and photographers alike. It tilts up and down to allow for easy overhead or shoot-from-the-hip photography and also swivels out to the side so vloggers can conveniently film themselves. It’s very sharp and bright enough to use on sunny days. The electronic viewfinder is also excellent with 5.76 million dots of resolution and 100 percent magnification, matching Canon’s R5 II and beating the Nikon Z8.

    Battery life isn’t a strong point, though, with 350 shots on a charge or just 280 when using the electronic viewfinder — far below the 640 shots allowed by the R5 II. It also only allows just over an hour of start-and-stop video shooting. However, Panasonic’s optional DMW-BG2 battery grip doubles endurance and also allows for battery hot-swapping.

    The S1R II supports both SDXC UHS II and much faster CFexpress Type B cards, while also supporting SSD capture via the USB-C port like the S5 IIX and GH7. The latter two storage methods enable shooting in high-bandwidth RAW and ProRes to maximize quality.

    Panasonic also included a full-sized HDMI port along with microphone and headphone jacks. For the best possible sound quality, the optional XLR2 accessory lets you capture four channels at up to 32-bit float quality to reduce the possibility of clipped audio. And finally, the S1R II is Panasonic’s first mirrorless model with a protective carbon fiber curtain that comes down to protect the sensor, just like recent Canon and Sony models.

    The Panasonic S1R II offers burst shooting speeds up to 40 fps in electronic shutter mode.
    Steve Dent for Engadget

    Although the original S1R could only manage an anemic 6 fps burst speed, its successor can hit 40 RAW images per second in silent electronic mode, beating all its rivals — though shooting at that speed limits quality to 12-bit RAW. To get 14-bit quality, you need to use the mechanical shutter for burst shooting, which tops out at 9 fps.

    However, the Panasonic S1R II doesn’t have a fast stacked sensor like its rivals. The result is rolling shutter that can be a problem in some circumstances, like shooting race cars, propellers or golf swings. That said, it does outperform many other non-stacked high-resolution cameras like Sony’s A7R V and Panasonic’s own S5 IIX in that area.

    Pre-burst capture is now available and starts when you half-press the shutter. That lets you save up to 1.5 seconds of photos you might have otherwise missed once you fully press the shutter button.

    With an overhauled phase-detect autofocus system and a new, faster processor, the S1R II features Panasonic’s fastest and smartest AF system yet. It can now lock onto a subject’s face and eyes quicker and follow their movements more smoothly, while also detecting and automatically switching between humans, animals, cars, motorcycles, bikes, trains and airplanes. I found it to be fast and generally reliable, but it’s still not quite up to Sony’s and Canon’s standards for speed and accuracy.

    Panasonic boosted in-body stabilization to 8 stops. That’s nearly on par with rivals, though Canon leads the way with 8.5 stops on the R5 II. Still, it lets you shoot handheld at shutter speeds as low as a quarter second in case you want to blur waterfalls or moving cars.

    Photo quality is outstanding with detail as good as rivals, though understandably short of Sony’s 61-megapixel A7R V. Colors are as accurate as I’ve seen on any recent camera, matching or even beating Canon’s excellent R5 II. My pro photographer friends took a number of shots with the S1R II and found it slightly superior to their Sony A1, noting that they rarely needed to white balance in post.

    Thanks to the dual-ISO backside-illuminated sensor, low-light capability is excellent for a high-resolution camera, with noise well controlled up to ISO 12,800. Beyond that, grain becomes more problematic and shadows can take on a green cast. JPEG noise reduction does a good job retaining detail while suppressing noise, but gets overly aggressive above ISO 6,400.

    If 44MP isn’t enough, the S1R II offers a high-resolution mode that captures eight images with a slightly offset sensor position and composites them into a single 177-megapixel file (either RAW or JPEG). It can supposedly be used without a tripod, though I found I had to remain very still to get decent images when doing so.

    The S1R II is Panasonic’s best mirrorless camera yet for video, albeit with some caveats I’ll discuss soon. You can capture up to 8K 30p 10-bit video at a reasonably high 300 Mbps, close to what Sony’s far more expensive A1 can do. Better still, it supports oversampled 5.8K ProRes RAW video internally with no crop for maximum dynamic range, or 4K video at up to 120 fps. Finally, the S1R II is capable of “open gate” 3:2 capture of the full sensor at up to 6.4K (and 8K down the road via a firmware update), making it easy to shoot all types of formats at once, including vertical video for social media.

    The Panasonic S1R II is an excellent vlogging camera thanks to the innovative stabilization system.
    Steve Dent for Engadget

    Some of these resolutions, particularly the 5.9K 60 fps and 4K 120 fps modes, come with a slight crop of about 1.1x and 1.04x, respectively. 4K 120 fps also uses pixel binning, which introduces a loss of resolution and other artifacts like rainbow-colored moire.

    That takes us to the main downside: rolling shutter. The S1R II is actually a bit better than the S5 II in that regard, with a total readout speed of about 1/40th of a second, or about 25 milliseconds at any of the full sensor readout resolutions (8K or 5.8K). That can result in wobble or skew if you whip the camera around or film fast-moving objects. However, it’s acceptable for regular handheld shooting.

    One complication is Panasonic’s dynamic range expansion (DRE) that boosts video dynamic range by a stop, mostly in an image’s highlights. Enabling that feature makes rolling shutter worse.

    Should you need to reduce rolling shutter, you can simply disable DRE without a big hit in quality. And shooting 4K at 60p minimizes rolling shutter so that it’s nearly on par with stacked sensor cameras, while still offering high-quality footage with just a slight crop.

    As for video quality, it’s razor sharp and color rendition is accurate and pleasing. Dynamic range is on the high end of cameras I’ve tested at close to 14 stops when shooting with Panasonic’s V-log, allowing excellent shadow and highlight recovery, especially in DRE mode. It’s still very good without DRE though, particularly if you’re not shooting in bright and sunny conditions.

    Frame grab from Panasonic S1R II 8K video
    Steve Dent for Engadget

    Video AF is also strong, keeping even quick-moving subjects in focus. Face, eye, animal and vehicle detection work well, though again, the system isn’t quite as reliable as what I saw on Sony and Canon’s latest models.

    The S1R II offers more stabilization options than its rivals, though. Optical stabilization provides good results for handheld video, while electronic stabilization (EIS) smooths things further. Cranking that up to the most aggressive high EIS setting provides gimbal-like smoothness but introduces a significant 1.5x crop.

    Along with those, Panasonic introduced something called “cropless” EIS. That setting takes advantage of unused areas of the sensor to correct corner distortion typical with wide angle lenses while also fixing skew. I found it worked very well to reduce rolling shutter even for quick pans and walking, which may help alleviate such concerns for some creators.

    So yes, rolling shutter wobble is worse on this camera than rivals like the R5 II. However, there are ways to work around it. If minimal skewing is a critical feature then don’t buy the S1R II, but it shouldn’t be an issue for most users, particularly at this price.

    The Panasonic S1R II is one of the nicest handling cameras out there.
    Steve Dent for Engadget

    The S1R II is Panasonic’s best hybrid mirrorless camera to date, offering a great balance of photography and video powers. It’s also the cheapest new camera in the high-resolution hybrid full-frame category, undercutting rivals like Canon’s R5 II and the Nikon Z8.

    The main downside is rolling shutter that primarily affects video. As I mentioned, though, it won’t pose a problem for many content creators and there are workarounds. Aside from that, it delivers outstanding photo and video quality while offering innovative features like cropless electronic stabilization.

    If you need even more resolution, Sony’s 61MP A7R V offers slightly better image quality. And if rolling shutter is really an issue, I’d recommend Canon’s R5 II (though that model does cost $1,000 more) or the Nikon Z8. Should you want to spend considerably less, the Canon R6 II or even Panasonic’s S5 II or S5 IIX are solid picks. For other hybrid shooters, though, Panasonic’s S1R II is a great choice.

    This article originally appeared on Engadget at https://www.engadget.com/cameras/panasonic-s1r-ii-review-an-excellent-hybrid-camera-thats-cheaper-than-rivals-163013065.html?src=rss


  • Best Internet Providers in Pueblo, Colorado



    What is the best internet provider in Pueblo?

    Xfinity is the top internet provider in Pueblo, Colorado, according to our CNET broadband experts. The cable provider took the top spot thanks to its extensive local coverage and affordable pricing. Xfinity offers plans starting at just $20 per month for 150Mbps. You can more than double that speed for an additional $10, making it an excellent value.

    CenturyLink is also widely available in Pueblo, but its DSL speeds range from 10 to 140Mbps, which falls short compared to Xfinity. On the other hand, Quantum Fiber — part of the Lumen Technologies family — delivers faster speeds of up to 8,000Mbps over fiber internet and offers symmetrical upload and download speeds, which is ideal for video calls and gaming. However, Quantum Fiber’s availability in Pueblo is limited.

    Secom provides fiber internet in Pueblo as well, but most residents will find the company’s fixed wireless service more accessible. Additional fixed wireless providers in the area include T-Mobile Home Internet, Rise Broadband, and Kellin Communications, with T-Mobile leading in terms of availability, speeds and overall value.

    Best internet in Pueblo, Colorado

    Pueblo, Colorado internet providers compared

    Provider | Internet technology | Monthly price range | Speed range | Monthly equipment costs | Data cap | Contract | CNET review score
    CenturyLink | DSL | $55 | 20-100Mbps | $15 (optional) | None | None | 6.7
    Quantum Fiber | Fiber | $50-$165 | 500-8,000Mbps | None | None | None | 6.7
    Rise Broadband | Fixed wireless | $45-$50 | 25-100Mbps | None | 250GB or unlimited | None | 6.2
    Secom | Fiber, fixed wireless | $60-$90 fiber, $60-$110 fixed wireless | 100-1,000Mbps fiber, 15-100Mbps fixed wireless | $5 | None | Varies | N/A
    T-Mobile Home Internet | Fixed wireless | $50-$70 ($35-$55 with eligible mobile plans) | 87-415Mbps | None | None | None | 7.4
    Verizon 5G Home Internet | Fixed wireless | $50-$70 ($35-$45 for eligible Verizon Wireless customers) | 50-1,000Mbps | None | None | None | 7.2
    Xfinity | Cable | $20-$85 | 150-1,300Mbps | $15 (included in most plans) | 1.2TB | None or 1 year | 7


    Source: CNET analysis of provider data.

    What’s the cheapest internet plan in Pueblo?

    Plan | Starting price | Max download speed | Monthly equipment fee
    Xfinity Connect | $20 | 150Mbps | $15 (optional)
    Xfinity Connect More | $30 | 400Mbps | $15 (optional)
    Rise Broadband Unlimited | $45 | 25Mbps | $10
    Quantum Fiber | $50 | 500Mbps | None
    T-Mobile Home Internet | $50 ($35 with eligible mobile plans) | 318Mbps | None
    Verizon 5G Home Internet | $50 ($35 with eligible mobile plans) | 300Mbps | None
    Xfinity Fast | $55 | 600Mbps | $15 (optional)
    CenturyLink Internet | $55 | 20-140Mbps | $15 (optional)


    Source: CNET analysis of provider data.


    How to find internet deals and promotions in Pueblo

    The best internet deals and top promotions in Pueblo depend on what discounts are available during that time. Most deals are short-lived, but we look frequently for the latest offers. 


    Pueblo internet providers, such as T-Mobile Home Internet and Xfinity, may offer lower introductory pricing or promotions for a limited time. Many, however, including Quantum Fiber and CenturyLink, run the same standard pricing year-round. 

    For a more extensive list of promos, check out our guide on the best internet deals.

    Fastest internet plans in Pueblo

    Plan | Starting price | Max download speed | Max upload speed | Data cap | Connection type
    Quantum Fiber | $165 | 8,000Mbps | 8,000Mbps | None | Fiber
    Quantum Fiber | $100 | 3,000Mbps | 3,000Mbps | None | Fiber
    Quantum Fiber | $75 | 940Mbps | 940Mbps | None | Fiber
    Xfinity Gigabit Extra | $85 | 1,300Mbps | 35Mbps | 1.2TB | Cable
    Secom Fiber 1000 | $90 | 1,000Mbps | 1,000Mbps | None | Fiber
    Xfinity Gigabit | $65 | 1,100Mbps | 20Mbps | 1.2TB | Cable
    Verizon 5G Home Plus Internet | $70 ($45 with eligible mobile plans) | 85-1,000Mbps | 50-75Mbps | None | Fixed wireless


    Source: CNET analysis of provider data.

    What’s a good internet speed?

    Most internet connection plans can now handle basic productivity and communication tasks. If you’re looking for an internet plan that can accommodate video conferencing, streaming video or gaming, you’ll have a better experience with a more robust connection. Here’s an overview of the recommended minimum download speeds for various applications, according to the Federal Communications Commission. Note that these are only guidelines — and that internet speed, service and performance vary by connection type, provider and address.

    For more information, refer to our guide on how much internet speed you really need.

    • 0 to 5Mbps allows you to tackle the basics — browsing the internet, sending and receiving email, streaming low-quality video.
    • 5 to 40Mbps gives you higher-quality video streaming and video conferencing.
    • 40 to 100Mbps should give one user sufficient bandwidth to satisfy the demands of modern telecommuting, video streaming and online gaming. 
    • 100 to 500Mbps allows one to two users to simultaneously engage in high-bandwidth activities like video conferencing, streaming and gaming. 
    • 500 to 1,000Mbps allows three or more users to engage in high-bandwidth activities at the same time.
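
    Those guidelines are easy to encode if you want a quick rule of thumb. The helper below is a hypothetical simplification of the tiers above, not CNET’s or the FCC’s methodology:

    ```python
    # Hypothetical helper mapping simultaneous heavy users (video calls,
    # streaming, gaming) to the FCC-style download-speed tiers listed above.
    def recommended_tier(heavy_users: int) -> str:
        if heavy_users <= 0:
            return "0-40Mbps: browsing, email and basic video streaming"
        if heavy_users == 1:
            return "40-100Mbps: telecommuting, streaming and online gaming"
        if heavy_users == 2:
            return "100-500Mbps: one to two simultaneous heavy users"
        return "500-1,000Mbps: three or more simultaneous heavy users"

    print(recommended_tier(2))  # -> "100-500Mbps: one to two simultaneous heavy users"
    ```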

    How CNET chose the best internet providers in Pueblo

    Internet service providers are numerous and regional. Unlike the latest smartphone, laptop, router or kitchen tool, it’s impractical to personally test every ISP in a given city. So what’s our approach? We start by researching pricing, availability and speed information, drawing on our own historical ISP data, the provider sites and mapping information from FCC.gov.

    But it doesn’t end there. We go to the FCC’s website to check our data and ensure we consider every ISP that provides service in an area. We also input local addresses on provider websites to find specific options for residents. We look at sources, including the American Customer Satisfaction Index and J.D. Power, to evaluate how happy customers are with an ISP’s service. ISP plans and prices are subject to frequent changes; all information provided is accurate as of the time of publication.

    Once we have this localized information, we ask three main questions:

    1. Does the provider offer access to reasonably fast internet speeds?
    2. Do customers get decent value for what they’re paying?
    3. Are customers happy with their service?

    While the answer to those questions is often layered and complex, the providers who come closest to “yes” on all three are the ones we recommend. When it comes to selecting the cheapest internet service, we look for the plans with the lowest monthly fee, though we also factor in things like price increases, equipment fees and contracts. Choosing the fastest internet service is relatively straightforward. We look at advertised upload and download speeds, and also take into account real-world speed data from sources like Ookla and FCC reports. (Ookla is owned by the same parent company as CNET, Ziff Davis.)

    To explore our process in more depth, visit our page on how we test ISPs.

    FAQs on internet providers in Pueblo, Colorado

    What is the best internet service provider in Pueblo?

    Xfinity is the best internet service provider in Pueblo due to its wide availability of high-speed plans and competitive pricing. Xfinity is available to nearly every Pueblo address and offers the cheapest internet plan and some of the fastest speeds in the area.

    Is fiber internet available in Pueblo?

    According to the most recent FCC data, fiber internet service in Pueblo is available to approximately 30% of households, or roughly 16,200 homes. Serviceability is greatest around CSU Pueblo and in the southwest part of the city. Quantum Fiber is the largest fiber internet provider in Pueblo, though Secom also offers local fiber internet service.

    What is the cheapest internet provider in Pueblo?

    Xfinity offers the cheapest internet plan in Pueblo, with service starting at $20 per month for max download speeds of 150Mbps. For $10 more per month (and still cheaper than service from any other major ISP in Pueblo), Xfinity’s Connect More plan comes with speeds up to 300Mbps. A one-year contract may be required for the lowest pricing, and renting Wi-Fi equipment from Xfinity could add $15 to your monthly bill.

    Which internet provider in Pueblo offers the fastest plan?

    Quantum Fiber offers the fastest download speed in Pueblo, up to 8,000Mbps, starting at $165 per month. Xfinity comes in second behind Quantum Fiber, though its max upload speeds are significantly slower (35Mbps) due to the use of a cable network. Secom and several other local fiber internet providers in Pueblo don’t offer max download speeds as fast as Xfinity but are capable of delivering much faster upload speeds, often equal to the plan’s max download speeds.




  • Bionic Bay Review: A speedrunner’s delight




    Let’s get this out of the way: Bionic Bay is going to be compared to Limbo and Inside. A lot. It’s inevitable. Psychoflow Studios, in collaboration with Mureena Oy, has delivered what feels like a sci-fi reimagining of Playdead’s moody 2010 classic. The visual storytelling, the shadowy menace, the precisely brutal puzzles — it’s all here, reassembled with a slick, biomechanical sheen.

    But don’t mistake Bionic Bay for a copycat. Beneath the familiar silhouette lies a wildly inventive and occasionally maddening precision platformer that plays like a love letter to physics. This isn’t just puzzle-solving; it’s gravity-bending, object-swapping, mid-air improvisation that can make you feel like a time-warping parkour demigod when it all clicks.

    Clocking in at around 8–10 hours (depending on how reckless or masochistic you are), it’s tightly paced — though not always evenly. I played on PlayStation 5, and somewhere in the middle of its surreal, flesh-and-metal dreamscape, I found myself wondering: How the hell are they going to top this?

    Welcome to the Otherworld

    Character floats midair in a chaotic mechanical environment glowing with orange light.


    Credit: Psychoflow Studios / Mureena Oy / Kepler Interactive

    Bionic Bay technically has a story, but don’t expect much of a narrative to latch onto. Most of it unfolds through cryptic text logs that pop up as you stumble across the corpses of long-dead scientists, scattered like breadcrumbs across this eerie, decaying world.

    From what my very smooth, very confused brain could piece together, you’re the unfortunate scientist who has survived an experiment gone sideways — catapulted into the guts of an ancient, hyper-advanced alien civilization. That’s…pretty much it. And honestly, that’s fine. The “plot” is more ambient than essential — it’s just vibes, bro. Really, it’s just an excuse to hurl yourself over chasms wider than your rent bill.

    Thankfully, you’re not doing it alone, or entirely as a human. Early on, the game zaps you with a genetic upgrade called “elasticity,” essentially turning your character from discount Gordon Freeman into a wall-bouncing, momentum-bending physics god.

    As you progress, Bionic Bay hands you a trio of reality-breaking tools that would make any physics professor sweat. First up: a transporter that lets you swap places with nearby objects. Then there’s the Chronolag, a pair of sunglasses that slows time in a tight radius around you. Finally, the gravitational backpack, a piece of high-tech wizardry that lets you rotate the direction of gravity with a flick of the right stick.

    Naturally, these gadgets come with caveats. The swap tool only works with objects currently on screen (no teleporting cheese here). The Chronolag is limited to a tense 30 seconds and cuts off the second you take damage or go full ragdoll. The gravity backpack allows for two midair uses — after that, you’re out of tricks and headed straight for a hard landing.

    But despite the limitations, or even because of them, each tool is essential to cracking Bionic Bay’s brutally tight puzzle platforming. And I mean tight. These puzzles don’t just flirt with precision; they demand pixel-perfect timing and surgical object placement. Especially in the later levels, success hinges on mastering momentum, nailing swaps mid-fall, and contorting through gaps designed to mock your sense of space and rhythm.

    Even with all the high-tech tools at your disposal, mastering your own movement is essential to solving Bionic Bay’s intricate puzzles. One of the most versatile mechanics is the dash, triggered with the Circle button. It sends your character hurtling forward in a curled, high-speed motion — part movement boost, part crouch — perfect for slipping through tight gaps or gaining momentum.

    The dash can also be chained with jumps for extended traversal. Combining it with the X button allows for long, arcing leaps that feel like controlled bursts of flight. In practice, it’s a rhythmic sequence: dash, jump, dash again. The Circle button also functions as a dive midair, letting you fine-tune your trajectory or squeeze through narrow environmental windows with just the right amount of force.

    A solution for everyone

    Underwater scene with a character being hoisted by a mechanical figure.


    Credit: Psychoflow Studios / Mureena Oy / Kepler Interactive

    The environments in Bionic Bay aren’t just backdrops — they’re fully interactive playgrounds where the rules are loose, and experimentation is everything. Most puzzles don’t lock you into a single solution; instead, they hand you a toolbox and let your grasp of the game’s intricate physics system guide the way. Getting from point A to point B is less about following a path and more about inventing one, usually while avoiding hazards like vaporizing lasers, insta-freeze traps, and an absurd number of explosive land mines.

    Take one scenario: I needed to reach a high cliff from ground level. One option was to roll a barrel into place, launch myself off it, swap positions mid-air, race over to climb the object, jump off it, and grab the ledge. Another route? Use the land mines — delicately timed detonation included — to catapult me skyward using the previously mentioned object as a shield. The game doesn’t just allow for creativity; it thrives on it, practically begging players to break it in the most stylish ways possible. It’s built for the kind of player who sees every mechanic as a potential exploit, and Bionic Bay rewards that mentality at every turn.

    Bionic Bay drips with atmosphere — equal parts decaying alien architecture and rusted industrial labyrinth. In one moment, you’re dwarfed by writhing, root-like structures lit by an amber glow that feels almost biblical in its intensity. In the next, you’re navigating a colossal tangle of mechanical guts like massive gears, broken scaffolding, and planet-sized orbs suspended in shafts of scorching light. It’s biomechanical horror meets cosmic wonder, with every frame soaked in grime, heat, and a strange, almost sacred silence. It’s haunting, oppressive, and stunningly beautiful all at once.

    Bionic Bay walks a fine line visually. Despite the protagonist being mostly a black silhouette, the environments are detailed enough that you never lose track of him, even in the most chaotic moments. And — maybe this dates me — but the contrast between the character and the background instantly brought Vector to mind, that sleek parkour side-scroller from the iOS glory days of 2012. It’s as if Psychoflow took that minimalist, kinetic style and mashed it together with moody pixel art, otherworldly concept design, and the eerie tone of Limbo.

    The result is something familiar yet fresh, a visual identity that feels both nostalgic and completely alien.

    Is Bionic Bay worth it?

    Red-lit hexagonal chamber with a glowing central orb and silhouetted figure observing it.


    Credit: Psychoflow Studios / Mureena Oy / Kepler Interactive

    Performance-wise, there’s not much to complain about. Bionic Bay runs smoothly on PS5, with just a single framerate dip cropping up late in the game. I’m curious to see how the online mode holds up, but since I was playing on a pre-release build, the multiplayer was a ghost town even after I unlocked it by finishing the main campaign.

    As for sound design, I was fully locked in. The soundtrack rarely takes center stage, but when it does, it hits — pulsing synths that creep in and swell at just the right moments, adding a heavy, unnerving layer to the game’s far-future horror vibe. It looks great, it sounds great, and while the single-player campaign does drag a bit in the middle, it’s a gorgeous slog. A stylish, ambient descent into mechanical madness that knows how to hold your attention, even when it’s testing your patience.

    Bionic Bay is absolutely worth your time, especially if you’re the kind of player who thrives on challenge, experimentation, and atmospheric immersion. It doesn’t reinvent the puzzle platformer but pushes the genre in a clever direction with its physics-driven mechanics and open-ended puzzle design. It’s a game that respects your intelligence and rewards your curiosity while looking like a fever dream built from scrap metal and alien roots.

    It’s not perfect — the pacing stumbles in the middle, and the story barely registers — but the overall experience is too striking to ignore. For fans of Limbo, Inside, or even old-school Vector, Bionic Bay is a beautifully harsh evolution of the genre. Just be prepared to die. A lot.

    For more Mashable game reviews, check out our OpenCritic page.


  • 5 games I used to think were 10/10 masterpieces but was wrong about



    I’ve been playing video games since the age of two, with the yellow Beetle from Midtown Madness 2 being my first companion in the digital world. Now, 24 years later, I’ve racked up countless gaming experiences — some good, some bad, and some unforgettable. As a teen growing up during a time when gaming rapidly evolved, my benchmarks for a “perfect” game kept shifting. Sure, some of those games have aged like fine wine. But others? Not so much.

    There was a time when a hack-and-slash like Daemon Vector would’ve cracked my top ten, but today it’s barely relevant. Just like that, there were times I went gaga over certain games, calling them masterpieces and handing them a mental 10/10, G.O.A.T. badge without hesitation. But with age and experience, I’ve come to accept that some of those so-called “perfect” games… weren’t really that perfect.


    5

    Cyberpunk 2077

    A brilliant foundation, but the house is missing rooms

    Cyberpunk 2077 scratched a very specific itch for me — one I hadn’t felt since the golden days of Deus Ex. The prologue alone had me raving to my non-gamer friends. It was that cool. The gameplay is slick, the traversal is fun, and the premise is flat-out bonkers in the best way. But after finishing Elden Ring — arguably a flawless open-world experience — it became impossible to ignore the cracks in Cyberpunk’s design.

    The side quests are insanely fleshed out, but the main story rings emotionally hollow and leaves very little impact. A great story is supposed to have an impact above all, and that’s exactly what I believe Cyberpunk 2077’s central narrative lacks. Worse yet, the “life path” you choose, which defines V’s entire backstory, barely changes anything in the story outside a handful of dialogue options during quests.

    Why couldn’t I have remained a Corpo, playing double agent from within Arasaka? Why did Johnny’s takeover boil down to a binary choice at the very end instead of a steady emotional decline? For someone who stole Arasaka’s most prized tech, the lack of serious consequences throughout the campaign was baffling. The excellent expansion, Phantom Liberty, proves that Cyberpunk 2077 can tell a gripping, focused story, which only makes the base campaign feel more hollow in comparison.

    Cyberpunk 2077 is still an 8/10 for me, and I fully intend to start the game over in the near future. However, it’s just not the 10/10 banger I once believed it to be.


    4

    Batman: Arkham Knight

    The definitive Batman experience buried beneath a Batmobile obsession

    I loved Batman: Arkham Knight as a teenager. The gritty visuals, the brutal combat, and the rain-drenched city — it was all so Gotham. The story was emotionally impactful, the ending beautiful, and it all came together to make Arkham Knight a solid 10/10 for me. In retrospect, however, I can’t shake off just how over-reliant the game is on the Batmobile, so much so that those segments left a bad taste in my mouth upon a revisit.

    I spent a major chunk of the game maneuvering the Batmobile, and throughout those moments, I was a mech on wheels, not the world’s greatest detective or the terrifying shadow who stalked evil. When the Batmobile is practically shoehorned into puzzles, combat, boss fights, and stealth segments, it becomes less of a cool tool and more of an overbearing requirement.

    Worse, the true ending is locked behind Riddler trophies that made online guides almost required reading. It’s like buying a box set and being told the finale is in a separate box you don’t have. Today, Arkham Knight is still a solid, highly recommended game for me, but definitely not the flawless superhero sim I used to champion. That mantle has been taken by 2018’s Marvel’s Spider-Man.


    3

    Assassin’s Creed IV: Black Flag

    Gorgeous but structurally dated

    Assassin’s Creed IV: Black Flag was the first game ever that made me go “holy cow, this is next-gen.” I played it on my brand-new GTX 760 back in 2013, and it was breathtaking. Naval combat finally clicked for me, despite having paid no mind to it in AC III. In Edward, I once again had a handsome, roguish, and charming protagonist after Ezio, and he became my third-favorite protagonist in the entire Assassin’s Creed series, behind Altaïr and Ezio.

    But on a recent revisit, I couldn’t ignore just how much the game leans on repetitive tailing and eavesdropping missions. The world hinted at the open-world RPGs Ubisoft would eventually lean into, but back then it still felt expansive yet digestible. I still want that rumored remake — I’d play it day one — but in hindsight, the repetition and lack of real mission variety bring it down from masterpiece territory. My nostalgic glasses may be strong, but they don’t make me blind.


    2

    Forza Horizon 4

    A love letter that forgets to include the reader

    After having played Driveclub on my base PS4, and then mourning the shutdown of its studio, Forza Horizon 4 was the game that reignited my love of racing. My friend and I spent weeks on it, skipping weeks’ worth of lectures to get that H badge. It was everything I wanted — visually stunning, lightning-fast, and packed with content.

    But recently, while introducing my partner to gaming, I noticed how punishing the game can be for newcomers, not to the Horizon series, but to racing in general. The narrow roads in Edinburgh? Brutal. Watching her bounce off walls more than asphalt was heartbreaking. I myself had taken a while to master the game, but the fun factor gets buried when your first impression is so discouraging. Worse still, the beautiful map feels small, likely sacrificed in favor of showcasing seasonal shifts. And not being able to manually change seasons? That was a buzzkill. We started in winter, and it was so cold and unforgiving that I had to literally change my PC’s system date just so she could experience spring evenings in Edinburgh.

    I still love Forza Horizon 4, but it’s not quite the masterpiece I once made it out to be.

    Forza Horizon 4 is now delisted from all online storefronts.


    1

    The Last of Us Part II

    An emotionally complex narrative that stumbles in its delivery

    At one point, I believed The Last of Us Part II was the boldest and most powerful narrative ever delivered in a video game. And in many ways, I still admire its raw ambition. It subverted expectations, shattered comfort zones, and forced me to confront the uncomfortable. But on replay — and with the benefit of hindsight — the cracks in its pacing and structure began to show. The early game’s jarring time jumps and the tonal imbalance between the prologue and Act 1 feel unrefined, almost unsure of themselves. And then, just as the story regains momentum, it slams the brakes and resets halfway through.

    Yes, the structure serves a purpose — to humanize, challenge bias, make you lose your sense of self, and question the act of revenge. But a day-by-day switching narrative could’ve preserved that emotional duality without draining the impact. The problem isn’t the story it tells — it’s how it tells it. The shifts in gameplay and tone can feel like a grind, with emotional peaks dulled by repetition and uneven pacing. And in a game so dependent on narrative to drive home its weight, that’s a real problem.

    A lot of moments while playing The Last of Us Part II reminded me of the problems I had with seasons seven and eight of Game of Thrones, where everybody and their dog were practically teleporting all across the country, while the first game was all about taking almost a whole year to go across the country. Make no mistake, The Last of Us Part II is still one of the boldest AAA games ever made. But perfect? I used to think so. Now, I think it’s a beautifully flawed experience that aims for greatness and lands just short.


    Growing up means looking back

    It’s strange, really. We often think of the games we loved as timeless, untouchable classics — as if our memories of them somehow froze their perfection in place. But just as we grow, so do our expectations. And sometimes, with a bit of distance and a new perspective, we see the cracks in what once felt like masterpieces.

    That’s not to say these games are bad — far from it. I still cherish each of them for what they gave me in the moment. The rush, the wonder, the hours lost to obsession. But a 10/10 game? That’s a rare thing. And the older I get, the more I realize it’s okay to admit that some of my former “perfect” games weren’t really perfect after all. They were just perfect for me at the time.


  • M4 iPad Pro, USB-C Magic Mouse, iPhone 15 Pro, more – 9to5Mac



    Today’s Apple gear deals are headlined by a couple notable open-box listings with full Apple warranties – the most affordable M4 iPad Pro is now $180 off and we have some rare discounts on the USB-C Magic Mouse (including both the black and white models). From there a new low has emerged on the 13-inch M3 iPad Air in brand-new condition alongside unlocked iPhone 15 Pro units at up to $650 off the original listings. All of that and more awaits below. 

    Apple’s most affordable M4 iPad Pro hits one of its best prices at $180 off from $820 (Open-box w/ 1yr. Apple warranty)

    Deals on Apple gear have started to get a little bit tight over the last week or so in the wake of U.S. tariffs, but Best Buy’s open-box program remains a wonderful source of discounts on everything from the deals we spotted yesterday on Apple Pencil Pro to one of the lowest cash prices to date on the most affordable M4 MacBook Air. Today, however, we are looking at the M4 iPad Pro, and more specifically the least pricey model in the lineup. We have yet to see this one go more than $150 off, and those were limited-time holiday on-page coupon offers at Amazon, but you can now land one at $180 off in “excellent” open-box condition with a full warranty. Details below.

    Best Buy is now offering the 11-inch 256GB Space Black M4 iPad Pro down at $819.99 shipped. This is the “excellent” condition open-box listing that also ships with all of the usual accessories and a 1-year warranty – an actual “Apple One (1) Year Limited Warranty.”

    Regularly $999, and currently starting at $919 via Amazon in brand-new condition, this is $179 off the list price, the lowest we have tracked in 2025 from a dealer of Best Buy’s repute, and the lowest price we can find with a 1-year Apple warranty. This model, the most affordable M4 iPad Pro model, almost never drops more than $150, if that.

    Again, you can score the “good” and “fair” condition open-box units for less, but it’s hard to recommend something that’s in just “good” condition at prices like this – we all really want our shiny new iPad Pro to be as shiny as possible, if you know what I mean.

    All of that said, it is worth browsing through the rest of the M4 iPad Pro configurations at Best Buy right here – there are open-box deals on just about all of them that are well below what you will find in new condition right now on most models.

    Here’s a look at the best new discounts via Amazon across the lineup:

    M4 iPad Pro 11-inch

    M4 iPad Pro 13-inch

    Upgrade to Apple’s USB-C Magic Mouse with these rare open-box deals: White $61 or Black $72 (1-yr. Apple warranty)

    If you have been holding off for a deal on the new USB-C Apple Magic Mouse, today might be your chance. Historically speaking, deals are relatively rare on Apple’s official Magic Mouse – there has only been one good chance to score a price drop on the new black variant, and the white USB-C model has yet to drop below $78. However, Best Buy now has some “excellent” condition open-box listings with full 1-year Apple warranties in tow at the best prices we have tracked to date on both the black and white models from a reputable dealer.

    Just as a reminder, the white model carries a $79 list price and the black fetches a premium at $99 from Apple, both of which are fetching as much at Amazon right now. But, as mentioned above, pricing on the Geek Squad-verified open-box units in “excellent” condition at Best Buy is much less than that:

    Alongside the “Apple One (1) Year Limited Warranty” they ship with, as well as being covered by Best Buy’s Return & Exchange Promise, here are the details you need to know about these open-box listings:

    • Works and looks like new. Restored to factory settings.
    • Includes all original parts, packaging and accessories (or suitable replacement).

    The newer USB-C edition of the Apple Magic Mouse is largely identical to the Lightning versions, albeit with a USB-C port on the underside so you can finally be rid of those Lightning cables. I don’t know about you, but my Magic Mouse is the only piece of kit I still use that requires one and, while I really don’t need to upgrade, I really can’t wait to finally shed my reliance on the old Apple connector standard.

    Unlocked iPhone 15 Pro now up to $650 off orig. prices from $744 (Amazon Renewed Premium, 1-yr. warranty)

    Apple has already been flying in planeloads of iPhones to get ahead of potential tariff conundrums, but Amazon’s Renewed Premium listings on the existing iPhone 15 Pro and Pro Max units can deliver some serious savings, coming in at hundreds below the original unlocked prices from Apple. They also ship with a full 1-year warranty and deliver units in better condition than the average refurb you might bump into on Amazon. We just spotted a new low on the heavily upgraded 1TB iPhone 15 Pro in Natural Titanium down at $846.45 shipped – that’s more than $650 under both the original price and a comparable new-condition iPhone 16 Pro – but there are deals on several configurations worth scoping out today down below.

    While the pricing on iPhones (and about a million other things) is still up in the air at this point, Amazon’s Renewed Premium units remain a notable source of savings. We are talking about prices as much as $650 under the original listings and Apple Store prices on unlocked iPhone 16 models. These certainly aren’t iPhone 16 models, but they are the only other Apple handsets that support Apple Intelligence features and still deliver a compelling iPhone experience – they were, after all, arguably the world’s greatest phone as of September last year, before the iPhone 16 launched.

    Satechi has now launched a sale event on its official site featuring a range of its charging gear – everything drops 30% at checkout using code CHARGE30. However, one of the standout deals is arguably its Qi2 Trio Wireless Charging Pad, which drops to $91 with the code above. That’s a solid price, but you’ll want to ignore it entirely and head straight over to the brand’s official Amazon storefront instead, where you’ll find it marked down to $87.12 shipped right now, with Prime shipping benefits. This one carries a regular $130 list price via Satechi but has more recently been sitting closer to $100 at Amazon, where it is now undercutting the direct sale price.

    The Qi2 Trio Wireless Charging Pad is easily one of the best models of this form factor I have ever used. The metal-rimmed base with the vegan leather wrap up top is simply gorgeous, if you ask me. The fully-articulating main Qi2 15W MagSafe pad delivers ideal viewing angles, and you can even fold it down flat if you ever need to stick it in your carry kit. It also, of course, features a magnetic Apple Watch charger and a third Qi pad for AirPods or a second handset. It is a really good one and a clear contender for the top five on the internet.

    Hit up our launch coverage for a closer look.

    There are some notable deals worth browsing through in the direct sale on the Satechi site though, including desktop charger units and some of its higher-end USB-C cable solutions, but we also wanted to direct your attention to its wonderful 15W Qi2 Wireless Car Charger – we loved this one after going hands-on for review, and the CHARGE30 code drops it down from the usual $60 to $42, the lowest price we can find and on par with the lowest we have tracked at Amazon.

    Browse through the rest of the Satechi gear eligible for the code above on this landing page.

    Today’s accessories and charging deals:

    Apple’s most affordable new 16GB M4 MacBook Air is now up to $110 off (Open-box w/ 1-yr. Apple warranty)

    We are still tracking some straight up $50 price drops on the new M4 MacBook Air – the best cash discounts we have tracked to date in new condition for folks without gear to trade in. That said, we love our Best Buy open-box listings with the full 1-year Apple warranty attached, and they now deliver the lowest prices to date on the most affordable model at up to $110 off. Details below.

    Now, you will find the entry-level 13-inch model with 16GB of RAM and 256GB of storage starting from $950 in brand-new condition over at Amazon. However, all but the silver model are selling for much less than that as part of Best Buy’s “excellent condition” open-box listings with a full 1-year Apple warranty attached.

    We can certainly understand why some folks would rather have a brand-new unit, but if you’re looking to score the best deal possible from a reputable dealer this early in the year, these open-box listings are worth a look:

    Now, you will find even lower prices on the “good” and “fair” condition units, but we tend to recommend the “excellent” models – if you’re going to buy an M4 MacBook Air, you likely want one in more than just good condition.

    You will also find open-box deals on other configurations in the M4 Air lineup waiting right here – look for the small “Open-Box” link below the “Add to Cart” button.

    Here’s how the brand-new deal pricing works out at Amazon right now for comparison:

    • 13-inch M4 MacBook Air 16GB/256GB $949 (Reg. $999)
    • 13-inch M4 MacBook Air 16GB/512GB from $1,184 (Reg. $1,199)
    • 13-inch M4 MacBook Air 24GB/512GB $1,359 (Reg. $1,399)
    • 15-inch M4 MacBook Air 16GB/256GB from $1,139 (Reg. $1,199)
    • 15-inch M4 MacBook Air 16GB/512GB $1,342 (Reg. $1,399)
    • 15-inch M4 MacBook Air 24GB/512GB $1,549 (Reg. $1,599)

    FTC: We use income earning auto affiliate links. More.


  • How to add a super-fast SSD to your Mac mini M4 without paying Apple’s ridiculous storage prices

    How to add a super-fast SSD to your Mac mini M4 without paying Apple’s ridiculous storage prices


    The Apple Mac mini M4 is arguably the biggest bargain in computing. This (almost) pocket-sized mini Mac is fast, powerful, near-silent and costs around half the price of the cheapest equivalent MacBook Air. It’s almost too good to be true.

    I bought one last month, my first new Mac since the MacBook Air M1 in 2020, and it’s given me that same sense of ‘how did they do that?’ wonder.


  • All the live updates as they happened

    All the live updates as they happened



    To catch up on some of the news from the past week, check out our ITPro Podcast episode on the conference here.

    And with that, we’ve finished the developer keynote! You can refer back to the rest of this blog for all the latest and stay tuned on the ITPro site for more coverage from Google Cloud Next 2025.

    Within the Kanban Board, Densmore can ask Code Assist to add code for specific features. If another team member has changed code and broken something – in this case, Densmore uses Seroter as a negative example – Code Assist can flag the changes to make a fix.

    When a developer notices a bug, they can tag Code Assist directly in their messaging app, or add a comment within their bug tracker.

    Densmore shows us the Gemini Code Assist Kanban Board, which includes something Google Cloud calls a ‘backpack’ – which contains all context for code, security policies, formats, and even previous feedback.

    Rounding us out, we’re welcoming Scott Densmore, senior director, Engineering, Code Assist at Google Cloud, to demo a sneak peek at Google Cloud’s software engineering agent.

    To share the visualization with colleagues, Nelson can press a ‘create data app’ button to quickly generate a link to the interactive forecast.

    The agent uses a new foundation model called TimesFM, which has been built specifically for forecasting, to produce a table with product IDs and dates, as well as a chart with sales over time.
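
    Neither the notebook internals nor the model’s interface are shown onstage, but the workflow Nelson describes follows a familiar shape: hand a history of dated, per-product sales rows to a forecasting model and get predictions back for each product ID. Here is a minimal, self-contained sketch of that shape in Python – the naive forecaster is a stand-in for the actual TimesFM call, and the column names are invented for illustration:

    ```python
    import pandas as pd

    # Toy sales history in the shape the demo implies: one row per product per day.
    sales = pd.DataFrame({
        "product_id": ["A", "A", "A", "B", "B", "B"],
        "date": pd.to_datetime(["2025-04-01", "2025-04-02", "2025-04-03"] * 2),
        "units": [10, 12, 11, 5, 7, 6],
    })

    def naive_forecast(series: pd.Series, horizon: int) -> pd.Series:
        """Stand-in for the TimesFM call: repeats the last observed value.
        The real agent would hand the history to TimesFM instead."""
        future = pd.date_range(series.index.max() + pd.Timedelta(days=1), periods=horizon)
        return pd.Series(series.iloc[-1], index=future)

    # One forecast per product ID, mirroring the table-plus-chart output in the demo.
    forecasts = {
        pid: naive_forecast(grp.set_index("date")["units"].sort_index(), horizon=7)
        for pid, grp in sales.groupby("product_id")
    }
    print(forecasts["A"])
    ```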

    Within the Colab notebook, Nelson can ask the Gemini data science agent to generate a forecast based on his data.

    Here to explain is Jeff Nelson, developer advocate at Google Cloud. Nelson starts with Colab, where we’ll be shown a demo of Google Cloud’s new data science agent in action.

    We’re moving on to learning about data agents, Google Cloud’s tools for easily analyzing data.

    Gemini can see and make sense of information that isn’t apparent to the human eye, says Wong, showing a video of her basketball throw as an example. She adds that a team of developers recently produced an AI commentator for sport and that X Games is interested in using AI for judging.

    DiBattista notes that Gemini is capable of analyzing multiple frames at once to evaluate motion, rather than just snapshots. He stresses that he built the tool in just one week, with no need to build a custom model or handle complex data sets.

    To demonstrate the amateur pitch, we’re shown a clip of Seroter throwing a baseball outside Google HQ. The system grades him as a ‘C’, with breakdowns of his arm, balance, and stride & drive.

    Via Gemini API, DiBattista created a system that can analyze video and produce text analysis of the pitch in the video – both for pros and amateurs.
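
    DiBattista’s app itself isn’t public, but the Gemini API does accept video directly, so the core of such a system can be sketched in a few lines. This is a rough outline using the google-generativeai Python SDK – the model name, file name, and grading prompt are illustrative, not taken from the demo:

    ```python
    import time
    import google.generativeai as genai

    genai.configure(api_key="YOUR_API_KEY")

    # Upload the pitch footage; the Files API processes video asynchronously.
    video = genai.upload_file("pitch.mp4")
    while video.state.name == "PROCESSING":
        time.sleep(2)
        video = genai.get_file(video.name)

    # Ask the model to evaluate motion across frames, as in the demo.
    model = genai.GenerativeModel("gemini-1.5-pro")
    response = model.generate_content([
        video,
        "Grade this baseball pitch from A to F and break down the thrower's "
        "arm action, balance, and stride & drive.",
    ])
    print(response.text)
    ```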

    Jake DiBattista, Google Cloud X MLB (TM) Hackathon Winner, onstage at Google Cloud Next to show a custom app he built as part of the Hackathon to analyze baseball pitches.

    (Image credit: Future)

    The winner of the Cloud X MLB (TM) Hackathon was Jake DiBattista, who’s here now to tell us all about his project – measuring pitches using MLB high-speed video.

    What does all this look like in practice? Wong and Seroter say MLB is using Gemini to measure its 25 million data points per game. Google Cloud ran a hackathon to see what innovative use cases people could come up with for Gemini in sports.

    “We’re striving to meet developers where you are,” says Cabrera. “Your team can build great apps using Gemini in your IDE of choice, or you can use Vertex AI Model Garden to call your model of choice. No matter what you use, we’re excited to see what you come up with.”

    Within Model Garden, developers can pose questions like “what capabilities can you offer for designing renovation projects?” to different models and compare the responses to evaluate which one best suits their purpose.

    Cabrera says while Gemini is her favorite model, Model Garden on Vertex AI offers a range of models from Meta, Mistral, and Anthropic among others.

    We’re really cooking now, as Cabrera moves over to Gemini Copilot to produce unit tests by entering a prompt in Spanish – which it quickly does.

    Cabrera wants to make an agent to help with budgets, powered by Gemini 2.5. Moving over to Cursor, Cabrera adds input validation to the agent.

    For this demo, Cabrera is using the Windsurf IDE, which is intended to support devs with ‘vibe coding’.

    Debi Cabrera, senior developer advocate at Google Cloud, is now onstage to show us how developers can use Gemini in their IDE of choice, and then bring their model of choice to Google Cloud for their apps.

    Google Cloud is at pains to stress that it does not require devs to use Gemini – with Vertex AI Model Garden, there’s a wide range of models to choose from.

    Seroter says that Google Cloud is helping developers with its new Agent2Agent, which not only connects agents together but helps developers discover new agents to connect with in the first place.

    Within the tool, Gemini suggests a fix to the problem and Sukumaran can immediately deploy it without having to affect anyone’s access to the agent.

    To fix this issue, Sukumaran shows us Cloud Assist Investigations, a new tool for diagnosing problems in infrastructure and massively cutting down on debugging time.

    Within Agentspace, Sukumaran asks for information related to ordering, expecting a relevant sub-agent to provide the right response. But instead, we’re presented with an error message.

    Once she’s deployed this agent system, she’ll be able to share it within Agentspace, where she can interact with the agent.

    Sukumaran creates a multi-agent system, right here in the keynote. This means creating a ‘root agent’ with a number of sub-agents, which will work together to automate a task.
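
    The Agent Development Kit is open source, and the root/sub-agent pattern Sukumaran builds looks roughly like the following in ADK’s Python interface – the agent names, model choice, and instructions below are invented for illustration rather than taken from her demo:

    ```python
    from google.adk.agents import Agent

    # Each sub-agent owns one slice of the overall task.
    ordering_agent = Agent(
        name="ordering_agent",
        model="gemini-2.0-flash",
        instruction="Answer questions about orders and order status.",
    )
    inventory_agent = Agent(
        name="inventory_agent",
        model="gemini-2.0-flash",
        instruction="Answer questions about stock levels.",
    )

    # The root agent delegates each incoming request to the right sub-agent.
    root_agent = Agent(
        name="root_agent",
        model="gemini-2.0-flash",
        instruction="Route user requests to the appropriate sub-agent.",
        sub_agents=[ordering_agent, inventory_agent],
    )
    ```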

    Abirami Sukumaran, developer advocate at Google Cloud, is here to show us how to build agents within Vertex AI using ADK with Gemini.

    We’re now learning about Vertex AI Agent Engine, which has recently been made generally available and helps enterprises deploy agents with enterprise-grade security. We’ll also hear about Agentspace, Google Cloud’s new solution for building no-code agents, or for developers to share agents they’ve built with the rest of their company.

    The moment of truth comes – and the agent produces a detailed PDF proposal that Hinkelmann can access right within the prompt window.

    Fran Hinkelmann, developer relations engineering manager, onstage at the developer keynote at Google Cloud Next 2025.

    (Image credit: Future)

    The next step is to select the AI model Hinkelmann wants for the agent. Because ADK is model agnostic, Hinkelmann says she could use Llama 4 or another model – but in this case will use Gemini 2.5.

    Performing RAG requires accessing information from outside the agent, which is where the Model Context Protocol (MCP) comes in handy, Hinkelmann says.
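
    MCP is an open protocol rather than a Google product, so the demo’s exact wiring isn’t shown. With the reference MCP Python SDK, pointing an agent at an external tool server looks roughly like this – the server command and tool name are hypothetical:

    ```python
    import asyncio
    from mcp import ClientSession, StdioServerParameters
    from mcp.client.stdio import stdio_client

    async def main():
        # Launch a local MCP server fronting the private building-codes dataset.
        params = StdioServerParameters(command="python", args=["codes_server.py"])
        async with stdio_client(params) as (read, write):
            async with ClientSession(read, write) as session:
                await session.initialize()
                # Discover what the server exposes, then call a retrieval tool.
                tools = await session.list_tools()
                print([tool.name for tool in tools.tools])
                result = await session.call_tool(
                    "search_building_codes", {"query": "residential kitchen wiring"}
                )
                print(result)

    asyncio.run(main())
    ```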

    Next, Hinkelmann adds an ‘analyze building codes’ tool, which allows the agent to use RAG to check a private dataset of local building codes.

    Hinkelmann says agents need instructions, tools, and a model. So to start, she uses Gemini in Vertex AI to create a custom instruction: in this case, taking a customer request and creating a PDF proposal.

    Here to demo this is Fran Hinkelmann, developer relations engineering manager at Google Cloud.

    Wong and Seroter say Vertex AI’s new Agent Development Kit can create an agent that can verify building codes and go deeper into meeting Bailey’s requirements.

    Next up, Seroter wants to know what an agent can do.

    “An agent is a service that talks to an AI model to perform a goal-based operation using the tools and context it has,” Wong explains.

    Wong asks Bailey to go into more detail on the benefits of long context windows.

    “This example uses some things like photos, images, and a few sketches,” Bailey says. “But with long context, you’re able to send full videos to use for your projects.”

    Bailey asks the model to add two globe pendant lights into the image and within seconds, they’ve been added.

    In another tab, we’re shown Bailey has used Gemini to generate a prompt for its image generation capabilities and then used this to produce a concept image for the kitchen. It can produce the image, which is photorealistic, in just a few seconds.

    Straight away, the model’s ‘thinking’ box shows the model has considered the floor plan (based on a sketched floor plan Bailey provided) and local regulations and building codes.

    To start, the pair ask Gemini 2.0 Flash to generate a very detailed plan for remodeling a 1970s-style kitchen. Bailey says the model has a 65,000-token output window, which is great for generating long plans.

    The two are going to make an AI app to help remodel Bailey’s kitchen, taking into account all the details and laws around doing that.

    Gemini is key here, of course. Here to show us how are Paige Bailey, AI developer experience engineer at Google DeepMind, and Logan Kilpatrick, senior product manager at Google DeepMind.

    Wong says today’s keynote is all about how Google Cloud can help developers build software, from start to scaling, and a sneak peek at the future of development in Google Cloud.

    Here to tell us more is Stephanie Wong, head of developer skills & community at Google Cloud and Richard Seroter, chief evangelist at Google Cloud.

    Finally, Gemini underpins all these innovations with its large context window, multimodality, and advanced reasoning.

    Next, Google Cloud is helping developers be as productive as possible via Gemini Code Assist and Gemini Cloud Assist.

    Here to welcome us to the developer keynote is Brad Calder, VP & GM at Google Cloud. He says Google Cloud is innovating in three key areas. First up, helping companies build agents, which can collaborate to achieve goals on behalf of users.

    To count us down for the final 30 seconds, we’re being shown numbers generated by Veo 2, including some truly abstract clips such as a giant 1 blasting off to a planet shaped like a 0.

    And we’re off! As with yesterday’s keynote, we’re starting with a sizzle reel – this time all about developers, skills, AI, and production.

    We’re now sat in the arena and once again listening to the AI-sampled music of The Meeting Tree onstage, accompanied by abstract visuals generated with Google DeepMind’s video generation model Veo 2.

    DJ group The Meeting Tree, playing a live set onstage at the Google Cloud Next 2025 developer keynote. Behind them, a large screen shows AI video visuals generated with Google DeepMind's model Veo 2.

    (Image credit: Future)

    There are just 30 minutes to go until the developer keynote. Presented under the subtitle ‘You can just build things’, we’re expecting this session to be all about the ease of deploying AI with Google Cloud – expect to hear lots about Agentspace, automation in Workspace powered by Google Workspace Flows, and Google Cloud’s new infrastructure for training custom AI models.

    With the press conference done, all eyes are now on the developer keynote – we’ll be seated and ready to bring you images and updates as they come.

    Finally, he adds that Google Cloud has European partnerships with firms such as TIM and Thales, to operate in a supervisory role and provide trust and verification in Europe.

    He adds that for customers who are worried about long-term survivability, Google Distributed Cloud runs fully detached with no connection to the internet.

    Kurian says that technologically, Google Cloud can prevent this from impacting its customers, because the firm doesn’t have access to its customers’ environments and has no ability to reach their encryption keys.

    Now another question on tariffs from Techzine – specifically on the potential risk that American companies could be ordered to stop delivering services to European customers.

    In response, Kurian says Agentspace arose from an observation that organizations struggle with information searches, particularly across different apps. He adds that the service already has 100 connectors live and 300 connectors in development so people can adopt it without ripping out and replacing anything.

    We’ve just had a question on how easy it will be for companies to adopt Agentspace when one’s enterprise has already invested heavily in other AI ecosystems such as Microsoft or Oracle, from Diginomica.

    A question on tariffs, now – which have been a repeated talking point throughout the event. Kurian is asked whether Google Cloud is prepared for their impact and in response says the “tariff discussion is an extremely dynamic one,” and that Google has been through many cycles like this including the 2008 financial crisis and the pandemic.

    Kurian also said Google is working hard to identify opportunities for renewable energy to power data centers and is looking to use nuclear as a power source for its sites.

    “We have done many things over the years to improve the infrastructure – for example, we introduced water cooling many years ago for our processors,” he says.

    Asked how Google Cloud is meeting the increased energy demand that generative AI places on data centers, Kurian says the cost of inference has decreased 20-fold.

    He adds there’s a competitive advantage to adopting AI and some of the changes in the past few months have changed the European attitude to the technology.

    In response, Brady says that Google Cloud is helping EMEA customers with security and flexibility, which are very important in the region, particularly when it comes to not being locked into long-term contracts.

    Now a question on pressure facing the EMEA region from our sister publication TechRadar Pro.

    The first question is on the challenge of AI adoption in certain countries, to which Kurian says Google Cloud is working hard on its sovereign cloud capabilities. He also highlights the importance of it allowing companies to use its global technology infrastructure in meeting security requirements.

    Kurian begins by highlighting how hard Google Cloud is working to expand across the globe and how it now operates in 42 regions.

    Before the developer keynote later on, we’re getting to hear from Thomas Kurian, CEO at Google Cloud, Tara Brady, president EMEA at Google Cloud, and Eduardo Lopez, president Latin America at Google Cloud in a press conference.

    Thomas Kurian, CEO at Google Cloud, Tara Brady, president EMEA at Google Cloud, and Eduardo Lopez, president Latin America at Google Cloud, at a press conference during Google Cloud Next 2025. Behind the three, the Google Cloud logo is repeatedly displayed on a photo backdrop.

    (Image credit: Future)

    It’s coming up on 8:00 in Las Vegas and we’re back to report on day two of Google Cloud Next 2025. With the developer keynote due to kick off this afternoon, there’s sure to be more detail on all the announcements we’ve heard about so far and more hands-on demos of some of Google Cloud’s newest tools.

    If you’ve ever wondered what it’s like on the ground at an event such as Google Cloud Next 2025, this photo gives a good impression. You can see it’s incredibly busy here, with attendees in the thousands entering and exiting each keynote. Google Cloud has a huge range of partners and customers, many of whom will be looking to reaffirm or expand their business relationship to make the most of AI, so the event is thick with meetings, roundtables, and live demos in the expo hall.

    An eye-level photo of the crowd at Google Cloud Next, with hundreds of people wearing lanyards walking down a corridor at the Mandalay Bay Convention Center and Google Cloud livery up on windows and hanging from the ceiling.

    (Image credit: Future)

    “What an amazing time for all of us to experience and work with these technology advances,” Kurian concludes.

    “We at Google Cloud are committed to helping each of you in effect by delivering the leading enterprise-ready, AI-optimized platform with the best infrastructure, leading models, tools, and agents. By offering an open multi-cloud platform and building for interoperability so we can speed up time to value from your AI tests, we are honored to be building this new way to cloud with you.”

    And with that, the first keynote of the event comes to a close. We’ll keep bringing you all the updates as they happen live from Las Vegas.

    Kurian says Google Cloud is working hard on making its innovations easy to adopt in four key ways:

    • Better cross-cloud networking.
    • Hands-on work with ISVs to improve Google Cloud integration.
    • Working with service partners on agent rollouts.
    • Offering more sovereign cloud compatibility via Google Cloud.

    We’re rounding out now and Kurian is back onstage to bring the keynote to a close.

    He acknowledges Google’s recent acquisition of Wiz as evidence of how seriously it takes cybersecurity.

    In a demo, Payal Chakravarty shows us how Google Unified Security can detect vulnerabilities in code and extensions used within an enterprise’s environment.

    The agentic, autonomous features of the new platform can automatically detect when an AI extension has put sensitive data at risk and flag it to a human in the company’s security team. In addition to providing response advice, it can proactively quarantine the suspicious extension.

    Continuing at pace, we’re now welcoming Sandra Joyce, VP, Google Threat Intelligence, to hear about the security announcements Google Cloud is making today.

    Chief among these announcements is the new Google Unified Security, the new converged security platform for better visibility and faster threat detection.

    Read our detailed write-up on Google Unified Security here.

    A photo of the keynote stage at Google Cloud Next 2025, with onstage screens showing a large diagram of Google Unified Security (GUS) and the various security offerings it converges.

    (Image credit: Future)

    We’re moving onto Gemini Code Assist, Google Cloud’s AI pair programmer, which Calder says is already being used by a wide range of enterprises.

    Google Cloud is today announcing Gemini Code Assist agents, which can help developers to quickly complete tasks such as the generation of software and documentation, as well as AI testing and code migration.

    Via the new Gemini Code Assist Kanban board, developers can interact with agents to get insight into why they’re making the decisions they are and see which tasks they have yet to complete.

    Calder says that Google Cloud is announcing new agents for every role in the data team.

    Data engineering agents, embedded within BigQuery pipelines, can perform data preparation and automate metadata generation.

    Meanwhile, data science agents can intelligently select models, flag data anomalies, and clean data to reduce the time teams have to spend manually validating all data.

    Finally, Looker conversational analytics allows users to explore data using natural language inputs. This will be made available via a new conversational analytics API, now in preview, so data teams can embed this easy question-and-answer layer into their existing applications.

    Imagen 3 and Veo 2 models are coming to Adobe Express, we’re told, as the firm pushes forward on AI-generated content.

    Moving onto data agents, we’re now welcoming Brad Calder, VP & GM, Google Cloud, onstage.

    He tees up a video showing that Mattel is using Google Cloud’s AI to reduce the need for its teams to manually analyze customer sentiment.

    “We can instantly identify key issues and trends improving growth, efficiency, and innovation,” says Ynon Kreiz, CEO at Mattel.

    “For example, we improved the ride mechanism in the Barbie Dreamhouse elevator.”

    We’re back to creative agents – it seems creative output is a major focus for Google Cloud at this year’s event. We’re being told about Wizard of Oz at Sphere again – find the details for that at the start of this live blog.

    O’Malley is back onstage to discuss purpose-built agents.

    For example, Mercedes Benz is using AI for conversational search and route mapping in a new line of its cars.

    In a demo by Patrick Marlow, product manager for Applied AI at Google Cloud, we’re shown how the suite can be used to get instant answers and assistance at a garden store.

    Marlow is able to hold petunias he has purchased up to a camera and receive real-time, voice-output assistance from the agent. For example, he asks if he’s buying the right fertilizer for the plants, and the agent recommends an alternative fertilizer and adds it to his cart.

    In cases where human assistance is required – such as Marlow asking for a 50% discount on his purchase – the agent escalates to a manager in Salesforce.

    A photo of a live demonstration of Google Cloud's Customer Engagement Suite at Google Cloud Next 2025, in which Patrick Marlow, product manager for Applied AI at Google Cloud, holds petunias he has purchased up to a camera and receives real-time, voice-output assistance from an AI agent.

    (Image credit: Future)

    O’Malley says Google Cloud’s Customer Engagement Suite is already helping organizations meet customer knowledge demand.

    She gives the example of Verizon, which adopted the Customer Engagement Suite. The firm uses the offering to provide its 28,000 customer assistants with up-to-date data and move customers to resolution even quicker.

    O’Malley announces new features for Customer Engagement Suite, including human-like voices, integration with CRM systems and popular communications platforms, and the ability to comprehend customer emotions.

    Customers are using all kinds of agents to unlock new value in their enterprise environment – but what are these different kinds?

    Kurian welcomes Lisa O’Malley, leader of Product Management, Cloud AI at Google, to explain more.

    O’Malley says we’ll start with customer agents, showing us a video of how Reddit is using Gemini for Reddit Answers, a new conversational layer on the message board website.

    Next, we’re told about how Vertex AI Search is helping healthcare and retail organizations to deliver more relevant results to their customers and boost their conversion rates.

    “Agentspace is the only hyperscaler platform on the market that can connect third-party data and tools, and offers interoperability with third-party agent models,” says Weiss.

    Here to show us more is Gabe Weiss, Developer Advocate Manager, Google Cloud.

    Weiss shows us how he can simply identify potential issues with his business’ customers within Agentspace. Based on this, he can ask for an agent to identify client opportunities in the future. He can then iterate on this prompt by asking for an audio summary of its findings, delivered to him every morning – creating an in-depth, analytical agent with just a few sentences of natural language.

    Finally, he can ask for the agent to write an email within Agentspace, which once approved is automatically sent via Outlook without him having to open the app himself.

    It’s time to talk about agents – sound the klaxon. These advanced AI assistants work to automate tasks autonomously, as Kurian explains.

    To hear more about the potential of agents, we’re shown a clip of Marc Benioff, CEO at Salesforce.

    Marc Benioff, CEO at Salesforce, shown via an onstage video at the opening keynote of Google Cloud Next 2025.

    (Image credit: Future)

    “Right now, we’re really at the start of the biggest shift any of us have ever seen in our careers,” Benioff says.

    “That’s why we’re so excited about Agentforce and our expanded partnership with Google. I just love Gemini, I use it every single day whether it’s Gemini inside Agentforce, whether it’s all the integrations between Google and Salesforce.”

    Starting today, Kurian announces, customers can scale agents across their environment, deploy ready-made agents, and connect agents together.

    This will largely be driven by the Agent Development Kit, a new open source framework for widespread systems of agents interacting with one another.

    Agent2Agent, a newly announced protocol, will allow disparate agents to communicate across enterprise ecosystems regardless of which vendor built them and which framework they are built on.
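
    The draft spec published alongside the announcement describes agents advertising their capabilities in an “agent card” and exchanging tasks as JSON-RPC over HTTP. Loosely, handing a task to a remote agent looks like the sketch below – the endpoint, method, and field names are paraphrased from the early draft and may change as the protocol evolves:

    ```python
    import uuid
    import requests

    AGENT_URL = "https://agent.example.com/a2a"  # discovered via the agent's card

    # JSON-RPC 2.0 envelope carrying a task for the remote agent.
    payload = {
        "jsonrpc": "2.0",
        "id": str(uuid.uuid4()),
        "method": "tasks/send",
        "params": {
            "id": str(uuid.uuid4()),  # task ID
            "message": {
                "role": "user",
                "parts": [{"type": "text", "text": "Check stock for SKU 12345"}],
            },
        },
    }
    response = requests.post(AGENT_URL, json=payload, timeout=30)
    print(response.json())
    ```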

    “This protocol is supported by many leading partners who share a vision to allow agents to work across the agent ecosystem,” Kurian says.

    Already, more than 50 partners including Box, Deloitte, Salesforce, and UiPath are working with Google Cloud on the protocol.

    Within Google Agentspace, enterprises can have Google-made agents, as well as third-party agents and custom-built agents easily communicate with one another.

    Vertex AI provides customers with all of Google’s internally-made models as well as open models such as Meta’s Llama 4.

    “With Vertex AI, you can be sure your model has access to the right information at the right time,” he says.

    “You can connect any data source or any vector database on any cloud, and announcing today you can build agents directly on your existing NetApp storage without requiring any duplication.”

    Kurian adds that Google Cloud has the most comprehensive approach to grounding on the market.

    After promising that Kurian will crowd-surf at tomorrow’s concert, Bardoliwalla welcomes the CEO back onstage.

    Kurian moves quickly onto Vertex AI, with a look at how it helps customers.

    “Tens of thousands of companies are building with Gemini,” he says, giving examples such as Nokia building a tool to speed up application code development, Wayfair updating product attributes five times faster, and Seattle Children’s Hospital making thousands of clinical guidelines searchable by pediatricians.

    The Vertex AI logo shown onstage at Google Cloud Next 2025 via large screens, with Thomas Kurian, CEO at Google Cloud, stood onstage beneath the screens.

    (Image credit: Future)

    Once videos have been generated, the user can fine-tune them with new in-painting controls.

    In his live demo, Bardoliwalla paints around an unwanted stagehand in a close-up clip of a guitar to seamlessly remove him from the final result.

    Next, Bardoliwalla uses Lyria to generate music for the trailer. This can be combined in the platform to create quick clips for advertising and more.

    Here to show us all how this works in practice is Nenshad Bardoliwalla, Director, Product Management, Vertex AI, Google Cloud.

    We’re told his mission is to create a trailer for the party that will close out the event – complete with a gag about Kurian wanting to sing Chappell Roan but not getting permission.

    Bardoliwalla opens Vertex Media Studio, in which he can ask for a drone shot of the Vegas skyline and choose specific settings such as frame rate and video length.

    A live demo of Google Cloud's Veo 2 model within Vertex Media Studio onstage at the opening keynote of Google Cloud Next 2025.

    (Image credit: Future)

    Onto some more of that creative content we had teed up with the DJ (you see, we said it might come up again).

    Kurian highlights Imagen 3, the firm’s image generation model, as well as Veo 2, its video generation model. The latter is now capable of adding new elements into filmed video and producing videos that mimic specific lens types and camera movements.

    Finally, we’re also told that Lyria is now available on Google Cloud. The model can turn text prompts into short music outputs – the first tool of its kind in the cloud, Kurian says.

    Kurian is back onstage, looking back on the major progress Google Cloud made last year with Gemini’s multimodality and large, two-million-token context window.

    Gemini is now included in all Google Workspace subscriptions and Kurian tees up a video to show us how businesses are making good use of the service already. In the video, customers say that Gemini is already cutting down their toil and opening new time for valuable work.

    Google Cloud’s close relationship with Nvidia runs throughout its hardware announcements today. To hear more, we’re being shown a video of Jensen Huang.

    Jensen Huang, CEO at Nvidia, shown via an onstage video at Google Cloud Next 2025.

    (Image credit: Future)

    Huang describes the Google Distributed Cloud as “utterly gigantic”.

    “Google Distributed Cloud with Gemini and Nvidia are going to bring state-of-the-art AI to the world’s regulated industries and countries,” he says.

    “Now, if you can’t come to the cloud, Google Cloud will bring AI to you.”

    Vahdat runs through the core infrastructure announcements from today, including Ironwood, AI Hypercomputer, and data storage announcements. As a reminder, you can read about these announcements in detail here.

    It’s not all about running workloads in the cloud, Vahdat says. Google Cloud is also announcing Gemini on Google Distributed Cloud, which allows firms to run Gemini locally – including in air-gapped environments.

    This opens the door to government organizations using AI in secret and top secret environments.

    With that, Pichai is off and Kurian is back onstage.

    He explains how Google Cloud is uniquely positioned to support customers, with a massive range of enterprise tools to build AI agents and an open multi-cloud platform for connecting AI to one’s existing databases.

    “Google Cloud offers an enterprise-ready, AI platform built for interoperability,” he says.

    “It enables you to adopt AI deeply while addressing the evolving concerns around sovereignty, security, privacy, and regulatory requirements.”

    Finally, Google Cloud’s infrastructure is core to its advantages for customers. To help illustrate this point, Kurian welcomes Amin Vahdat, VP, ML, Systems and Cloud AI at Google Cloud, to the stage.

    It’s always good to hear directly from a customer about how AI is helping their business.

    We’ve just been shown a reel from McDonald’s, in which Chris Kempczinski, CEO at McDonald’s, explained how AI can be used to predict when machines will need maintenance in McDonald’s restaurants or provide workers with quick answers to their questions.

    The announcements are coming fast here in the arena. Pichai rattles off stats about Gemini 2.5, the firm’s new thinking model which is currently the top-ranked chatbot in the world per the Chatbot Arena Leaderboard.

    He also notes Gemini 2.5 Flash, Google Cloud’s low-cost, low-latency model, which allows organizations to balance reasoning with budget for every output.

    Pichai draws a direct line between Ironwood and Google’s quantum chip Willow, which it announced last year.

    Both are used as examples of the boundaries Google is pushing within its hardware teams, as well as in divisions such as Google DeepMind to crack problems such as weather prediction.

    Next, Pichai announces Google Cloud’s 7th generation TPU, Ironwood, which brings sizeable performance and efficiency improvements over its predecessors.

    A few key stats about Ironwood: it’s capable of 42.5 exaflops of performance, 24 times the per-pod performance of the world’s fastest supercomputer El Capitan.

    Read more in our full coverage of Ironwood here.

    First off, Pichai says that Google will make a $75 billion capital investment in 2025, directed toward servers and data centers.

    To further support its AI-hungry customers, Pichai announces that Google Cloud will make its global network available to Cloud customers via Cloud WAN, a new managed solution for connecting enterprises across a wide area network.

    “This builds on a legacy of opening up our technical infrastructure for others to use,” Pichai says.

    To give the crowd a taste of what AI can do, Kurian welcomes Sundar Pichai, CEO at Google, to the stage.

    Pichai opens by paying tribute to The Wizard of Oz at Sphere and then moves on to make some announcements.

    Sundar Pichai, CEO at Google, onstage at Google Cloud Next 2025 during the opening keynote.

    (Image credit: Future)

    Now the keynote proper begins, with Thomas Kurian, CEO at Google Cloud, taking to the stage to kick us off.

    “Google’s AI momentum is exciting – we’re seeing more than four million developers using Gemini, a 20 times increase in Vertex AI,” says Kurian, noting that the firm processes more than 2 billion AI requests per month in Workspace, driven by businesses.

    Today’s sizzle reel is peppered with AI-generated video, in a show of sophistication by Google Cloud.

    A photo of the opening keynote screen at Google Cloud Next 2025, live in Las Vegas.

    (Image credit: Future)

    And we’re off! To begin with, as is normal for keynotes, we’re being shown a sizzle reel of Google Cloud’s impact on the industry and hyping up the potential for AI in the enterprise.

    Just one minute left until the keynote begins in earnest. Stay tuned as we bring it to you live.

    The music we’re hearing will apparently be played throughout the entire conference – musical group The Meeting Tree have scored an entire soundtrack for the event, with the theme of AI.

    Paired with Google Cloud’s work on The Wizard of Oz (details lower down in the live blog), it’s clear that Google Cloud is eager to show what it can offer to industries that have been more reluctant to adopt AI to date.

    There’s a clear need to acknowledge fears that AI could damage the livelihoods of artists. A constant refrain at yesterday’s event at the Sphere was that ideally, AI should be used to empower creatives rather than replace them. In the event yesterday, Google Cloud suggested that new roles could appear in the creative sector as a result of AI breakthroughs – it will be interesting to see if this is expanded upon at all in the keynote.

    We’re now learning a bit more about how that music has been made for the event, via a behind-the-scenes video.

    Human musicians were first recorded and then their samples were fed into Music AI Sandbox, which could produce audio outputs that the producers can edit, alter, and use as the basis for new noises.

    As you can see, there’s a huge amount of foot traffic this morning as we pile into the Michelob Ultra Arena at Mandalay Bay. As is usual for tech conferences, we’re being serenaded by a live DJ inside the arena itself – more unusual are the visuals for this morning’s music, which have been generated entirely with Google DeepMind’s video model Veo 2.

    A hallway full of attendees at Google Cloud Next 2025.

    (Image credit: Future)

    As a reminder, the theme for this morning’s keynote is ‘The new way to cloud’, with a focus on interoperability, unification, and more intelligent automation through Gemini AI.

    The words 'The new way to cloud' on a large wall mural at Google Next 2025, at Mandalay Bay in Las Vegas.

    (Image credit: Future)

    Last night, we were given a glimpse into what to expect this week at the Sphere, with preview speeches from Google CEO Sundar Pichai and Google Cloud chief executive Thomas Kurian onstage. You can read all about the goings on from the evening further down the live blog.

    We’ve already had a range of big announcements ahead of the opening keynote, including the launch of Google’s new ‘Ironwood’ AI accelerator chip and the launch of Google Unified Security, which aims to drive cloud security capabilities for enterprises and demystify cyber complexity in the cloud.

    You can read all about these announcements below:

    With that, Kurian officially started Google Cloud Next, with confetti cannons heralding the official start of the event.

    “If tonight’s event sets the tone for what we plan to bring you for the next three days, I think it’s safe to say it’s going to be an incredible week,” he said.

    A photo of the Google Cloud logo on a screen at the Sphere in Las Vegas, next to another video feed showing Thomas Kurian, CEO at Google Cloud overlaid on top of a macro shot of a server rack in a Google data center.

    (Image credit: Future)

    Kurian will be back onstage bright and early tomorrow morning at the opening keynote ‘The new way to cloud’. We’ll be bringing you all the updates from that and throughout the conference, both here and across ITPro so stay right here for all the very latest.

    In the meantime, why not read my pre-conference analysis of what Google Cloud can do to set itself apart from competitors at this event and the key story it needs to tell.

    Next, it was time to hear from Thomas Kurian, CEO at Google Cloud, and James Dolan, CEO at Sphere Entertainment, on the challenges of bringing The Wizard of Oz to the Sphere.

    “I’ve been running companies for 40 years and this is one of the first times I ever felt that I wasn’t a customer – I was a partner,” said Dolan, praising the hands-on collaboration of the Google Cloud, Google DeepMind, and Magnopus teams.

    Thomas Kurian, CEO at Google Cloud, and James Dolan, CEO at Sphere Entertainment, onstage at Google Cloud Next within the Sphere.

    (Image credit: Future)

    Kurian noted that a total of twenty different models were needed to bring the Wizard of Oz at Sphere to life, with engineers leveraging Google’s extensive TPU architecture and inventing new techniques to expand and recreate the original film frames. This was an enormous technical challenge, not least because the scale and resolution of the screen makes it hard to hide any mistakes in the final image.

    “Most importantly, the camera and this amazing theater here at the Sphere is something that doesn’t exist anywhere else in the world,” he said. “So it’s almost like you were told to do AI and your first project was your PhD thesis.”

    After Pichai’s speech, we were treated to an extended video showing the behind the scenes of the project. It included detail on how difficult it is to extend existing video footage to fit the Sphere’s unique aspect ratio and resolution, as well as the complexity of generating entirely new footage of characters when they would otherwise have been offscreen.

    Engineers had to work iteratively and study the original plans for the film to recreate the characters without making them generic.

    A shot of Dorothy being recreated in AI onscreen at the Sphere in Las Vegas, at Google Cloud Next.

    (Image credit: Future)

    The final project includes special effects such as wind blown over the audience and haptic rumbling under the seats – of which we were given a very interactive example.

    After entering the Sphere’s cavernous arena, we were treated to an opening speech by Sundar Pichai, CEO at Google. He paid tribute to the efforts of all the engineers and creatives who worked on the project, which required intense research and overcoming numerous technological hurdles. Ultimately, it was created using Google DeepMind’s video generation model Veo 2.

    Sundar Pichai, CEO at Google, speaking onscreen at the Sphere in Las Vegas at Google Cloud Next 2025.

    (Image credit: Future)

    “We have seen significant improvements: super low latency, incredible video quality, multimodal output, so many things we couldn’t have done with AI even 12 months ago,” Pichai said.

    “Beyond the technical capability, it took a whole lot of imagination, creativity, and collaboration. Our goal: giving Dorothy, Toto, and all of these iconic characters new life on a 16k screen in super resolution.”

    Good evening from Las Vegas, where select attendees from the event have just been treated to a sneak peek of a brand new attraction opening at the Sphere in August – The Wizard of Oz at Sphere.

    Made in partnership with Warner Bros. Discovery, Google Cloud, and Magnopus, the finished product will run as a multi-sensory, 16k recreation of the original 1939 movie for the Sphere’s 160,000-square-foot screen using Google DeepMind’s video generation models.

    A photo of the Wizard of Oz at Sphere, showing Dorothy et al walking down the yellow brick road toward Emerald City, with the Google Cloud logo above them.

    (Image credit: Future)


  • Best smart garage door controllers of 2025

    Best smart garage door controllers of 2025