I’m quite the tech enthusiast, especially when it comes to Google (I really, really want to work there someday). I know it’s a bit late, but there was a lot to unpack, so here’s my take on Google I/O 2025.
I/O 2025 was all about AI. The event literally started with an AI-generated video showcasing the capabilities of their new model, Veo 3. It was a bit rough, but pretty good for AI.
XR
Personally, my favourite thing was Project Aura, mainly because it felt like the evolution of Project Astra into something you can wear. But also, Google’s back with some new smart glasses. I’m not surprised it took them this long, considering what a disaster the original Google Glass was, but this attempt introduces some very interesting ideas. Essentially, Gemini would always be with you, and always able to help. That plays to one of Google’s huge advantages - it already knows everything about you. They’ve also been quite busy creating a new version of Android for these XR devices… and you guessed it, it’s called Android XR!
However, they are dependent on developers making their apps available on Android XR, ideally before the first devices ship. Whether that actually happens, only time will tell, but it will be interesting to watch.
AI
This was the main focus of the event. Google announced a lot of new AI features and models, so let’s quickly list off the main ones:
- Google Search AI Mode
- Google Beam
- Gemini Live
- Google Veo 3
- Flow (only usable with the AI Pro and AI Ultra subscriptions - more on that later)
- Gemini Deep Think
- Gemini in Chrome
- Gemini Smart-replies in Gmail
‘AI Ultra’? What’s that?
Well, it’s Google’s new subscription plan, at the eye-watering price of $250 a month… I wonder who’s gonna buy that - not me, for sure.
At this point, there are very few things Google hasn’t put AI into - but how many of these are actually useful?
Well, I think they all have some use, but personally, my favourite ones have to be Gemini Live and Gemini in Chrome.
Gemini Live feels like the natural next step for virtual assistants. It’s not just voice commands and basic tasks anymore - this is real-time conversation, contextual memory, and fluid back-and-forth interaction. It can even respond to non-verbal cues if you’re using it on a phone or an Android XR device. It’s like having a personal assistant in your pocket who actually knows what you’re talking about, even if you barely explain it - and the sheer amount of data Google has on you makes it even more powerful.
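For a taste of what this looks like from the developer side, here’s a minimal sketch using the Live API in the google-genai Python SDK, which exposes this kind of streaming, bidirectional conversation. The model name and config here are my assumptions based on the public docs, and the exact method names may shift between SDK versions - treat it as a sketch, not gospel:

```python
# pip install google-genai
# Minimal text-only Live API sketch; the real Gemini Live experience
# also streams audio and video, which this example skips.
import asyncio
from google import genai

client = genai.Client(api_key="YOUR_API_KEY")  # assumption: key from AI Studio

MODEL = "gemini-2.0-flash-live-001"  # assumption: a Live-capable model name
CONFIG = {"response_modalities": ["TEXT"]}

async def main() -> None:
    # Open a persistent, bidirectional session - this is what makes the
    # back-and-forth feel fluid, unlike one-off request/response calls.
    async with client.aio.live.connect(model=MODEL, config=CONFIG) as session:
        await session.send_client_content(
            turns={"role": "user", "parts": [{"text": "What am I looking at?"}]},
            turn_complete=True,
        )
        # Responses stream back in chunks as the model generates them.
        async for chunk in session.receive():
            if chunk.text:
                print(chunk.text, end="")

asyncio.run(main())
```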
Gemini in Chrome is also a game changer. It helps you summarise pages, rewrite text, translate things on the fly, and even debug your code. It’s like having Clippy, but smart and not annoying. Whoever you are, most of your life tends to revolve around the browser you use, so this seems like a pretty powerful feature to me.
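The Chrome integration is built in, so there’s nothing to wire up yourself - but to illustrate the kind of call behind a “summarise this page” feature, here’s a rough sketch against the public Gemini API. The model name and the crude HTML handling are my assumptions, not how Chrome actually does it:

```python
# pip install google-genai
import urllib.request
from google import genai

def summarise_page(url: str, api_key: str) -> str:
    # Fetch the raw page; a real integration would extract readable text
    # properly instead of feeding truncated HTML straight to the model.
    html = urllib.request.urlopen(url).read().decode("utf-8", errors="ignore")

    client = genai.Client(api_key=api_key)
    response = client.models.generate_content(
        model="gemini-2.0-flash",  # assumption: any text-capable Gemini model
        contents="Summarise the key points of this web page:\n\n" + html[:20_000],
    )
    return response.text

print(summarise_page("https://example.com", "YOUR_API_KEY"))
```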
The Big Picture
So, what’s Google really trying to do here? I think they’re trying to make Gemini the center of your digital life - not just something you talk to, but something that’s everywhere: in your browser, in your inbox, on your phone, and soon, on your face. And with all the AI-first features and devices they’re introducing, they’re betting big on this idea.
There’s definitely room for skepticism. The $250/month plan is obviously aimed at enterprises and prosumers, but it still makes you wonder: how much of this AI future will be behind a paywall? And how much of it will actually make life better, versus just adding more complexity?
Still, this all looks pretty interesting. This I/O made it clear that we’re stepping into a new phase of computing - one where assistants don’t just answer questions, they think with you. So, let’s see where this all goes.