The headlining event of Google I/O 2025, the live keynote, is officially in the rearview. However, if you've followed I/O before, you may know there's much more happening behind the scenes than what you can find live-streamed on YouTube. There are demos, hands-on experiences, Q&A sessions, and more happening at Shoreline Amphitheatre near Google's Mountain View headquarters.
We've recapped the Google I/O 2025 keynote and given you hands-on scoops on Android XR glasses, Android Auto, and Project Moohan. For those interested in the nitty-gritty demos and experiences happening at I/O, here are five of my favorite things I saw at the annual developer conference today.
Controlling robots with your voice using Gemini
Google briefly mentioned during its main keynote that its long-term goal for Gemini is to make it a "universal AI assistant," and robotics must be part of that. The company says its Gemini Robotics division "teaches robots to understand, follow instructions and adjust on the fly." I got to try Gemini Robotics myself, using voice commands to direct two robotic arms and move objects hands-free.
The demo uses a Gemini model, a camera, and two robotic arms to move things around. The multimodal capabilities, like a live camera feed and microphone input, make it easy to control the robots with simple instructions. In one instance, I asked the robot to move the yellow brick, and the arm did exactly that.
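For developers curious what that loop might look like, here is a minimal sketch using the google-genai Python SDK, assuming a setup where a camera frame and a transcribed voice command go in and a structured function call comes out for a robot controller to execute. The move_piece tool, the file name, and the model choice are hypothetical stand-ins; Google hasn't published the actual Gemini Robotics stack.

```python
from google import genai
from google.genai import types

client = genai.Client()  # assumes GEMINI_API_KEY is set in the environment

# Hypothetical tool a robot controller might expose; not a real Google API.
move_piece = types.FunctionDeclaration(
    name="move_piece",
    description="Move a named object on the table to a target location.",
    parameters=types.Schema(
        type=types.Type.OBJECT,
        properties={
            "object_name": types.Schema(type=types.Type.STRING),
            "target": types.Schema(type=types.Type.STRING),
        },
        required=["object_name", "target"],
    ),
)

# One frame from the tabletop camera plus the spoken instruction.
with open("table_camera.jpg", "rb") as f:
    frame = f.read()

response = client.models.generate_content(
    model="gemini-2.0-flash",
    contents=[
        types.Part.from_bytes(data=frame, mime_type="image/jpeg"),
        "Move the yellow brick next to the red one.",
    ],
    config=types.GenerateContentConfig(
        tools=[types.Tool(function_declarations=[move_piece])],
    ),
)

# The model responds with a structured call for the robot stack to execute.
for part in response.candidates[0].content.parts:
    if part.function_call:
        print(part.function_call.name, dict(part.function_call.args))
```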
It felt responsive, although there were some limitations. At one point, I tried to tell Gemini to move the yellow piece back where it was before, and quickly learned that this version of the model doesn't have a memory. But considering Gemini Robotics is still an experiment, that's not exactly surprising.
I wish Google had focused a bit more on these applications during the keynote. Gemini Robotics is exactly the kind of AI we should want. There's no need for AI to replace human creativity, like art or music, but there's an abundance of potential for Gemini Robotics to eliminate the mundane work in our lives.
Trying on clothes using Shop with AI Mode
As someone who refuses to try on clothes in dressing rooms, and hates returning clothes from online stores that don't fit as expected just as much, I was skeptical but excited about Google's announcement of Shop with AI Mode. It uses a custom image generation model that understands "how different materials fold and stretch according to different bodies."
In other words, it should give you an accurate representation of how clothes will look on you, rather than just superimposing an outfit with augmented reality (AR). I'm a glasses-wearer who occasionally tries on glasses virtually using AR, hopeful that it will give me an idea of how they'll look on my face, only to be disappointed by the result.
I'm glad to report that Shop with AI Mode's virtual try-on experience is nothing like that. It takes a full-length photo of you and quickly uses generative AI to add an outfit in a way that looks shockingly realistic. In the gallery below, you can see each part of the process: the finished result, the marketing photo for the outfit, and the original picture of me used for the edit.
Is it going to be perfect? Probably not. With that in mind, this virtual try-on tool is easily the best I've ever used. I'd feel much more confident buying something online after trying this tool, especially if it's an outfit I wouldn't normally wear.
Creating an Android Bot of myself using Google AI
A lot of the demos at Google I/O are really fun, simple activities with plenty of technical stuff going on in the background. There's no better example than Androidify, a tool that turns a photo of yourself into an Android Bot. To get the result you see below, a complex Android app flow used AI and image processing. It's a glimpse of how an app developer might use Google AI in their own apps to offer new features and tools.
Androidify starts with an image of a person, ideally a full-length photo. Then, it analyzes the image and generates a text description of it using the Firebase AI Logic SDK. From there, that description is sent to a custom Imagen model optimized specifically for creating Android Bots. Finally, the image is generated.
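Androidify itself is an Android app built on the Firebase AI Logic SDK, but the same two-step pipeline can be sketched server-side with the google-genai Python SDK. This is a rough approximation under stated assumptions: the Android-Bot-tuned Imagen model isn't public, so the stock Imagen endpoint and my prompt wording stand in for it.

```python
from google import genai
from google.genai import types

client = genai.Client()  # assumes GEMINI_API_KEY is set in the environment

# Step 1: analyze the photo and produce a text description of the person.
with open("me.jpg", "rb") as f:
    photo = f.read()

description = client.models.generate_content(
    model="gemini-2.0-flash",
    contents=[
        types.Part.from_bytes(data=photo, mime_type="image/jpeg"),
        "Describe this person's outfit, hair, and accessories in one short paragraph.",
    ],
).text

# Step 2: send that description to an Imagen model. Androidify uses a
# custom Android-Bot-tuned Imagen model that isn't public, so the stock
# endpoint stands in here.
result = client.models.generate_images(
    model="imagen-3.0-generate-002",
    prompt=f"A friendly green Android Bot mascot styled to match: {description}",
    config=types.GenerateImagesConfig(number_of_images=1),
)

with open("android_bot.png", "wb") as f:
    f.write(result.generated_images[0].image.image_bytes)
```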
That's a lot of AI processing to get from a real-life photo to a custom Android Bot. It's a neat preview of how developers can use tools like Imagen to offer new features, and the good news is that Androidify is open-source. You can learn more about everything that goes into it here.
Making music with Lyria 2
Music isn't my favorite medium for AI, but alas, the Lyria 2 demo station at Google I/O was pretty neat. For those unfamiliar, Lyria RealTime "leverages generative AI to produce a continuous stream of music controlled by user actions." The idea is that developers can incorporate Lyria into their apps through an API to add soundtracks.
At the demo station, I tried a lifelike representation of the Lyria API in action. There were three music control knobs, only they were as big as chairs. You could sit down and spin a dial to adjust how much influence each genre had on the sound being created. As you changed the genres and their prominence, the audio changed in real time.
The cool part about Lyria RealTime is that, as the name suggests, there's no delay. Users can change the music generation instantly, giving people who aren't musicians more control over sound than ever before.
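Those giant knobs map neatly onto how the Lyria RealTime API is structured: a set of weighted text prompts you can re-send while audio streams back. Below is a rough sketch using the experimental live-music interface in the google-genai Python SDK; the genre mix and config values are my own, and the v1alpha surface may change.

```python
import asyncio
from google import genai
from google.genai import types

# Lyria RealTime is experimental and currently lives behind the v1alpha API.
client = genai.Client(http_options={"api_version": "v1alpha"})

# One weight per "knob": spinning a dial at the demo station is essentially
# re-sending these weighted prompts with new values.
GENRES = {"funk": 1.0, "ambient": 0.4, "drum and bass": 0.2}

async def main():
    async with client.aio.live.music.connect(
        model="models/lyria-realtime-exp"
    ) as session:
        await session.set_weighted_prompts(
            prompts=[
                types.WeightedPrompt(text=genre, weight=weight)
                for genre, weight in GENRES.items()
            ]
        )
        await session.set_music_generation_config(
            config=types.LiveMusicGenerationConfig(bpm=120, temperature=1.0)
        )
        await session.play()
        # Stream raw audio chunks; a real app would feed these to an
        # audio output device instead of printing their sizes.
        async for message in session.receive():
            chunk = message.server_content.audio_chunks[0].data
            print(f"received {len(chunk)} bytes of audio")

asyncio.run(main())
```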
Generating custom videos with Flow and Veo
Finally, I used Flow, an AI filmmaking tool, to create custom video clips using Veo video-generation models. Compared to basic video generators, Flow is designed to let creators keep themes and styles consistent and seamless across clips. After making a clip, you can save the video's characteristics as "ingredients" and use those as prompting material to keep generating.
I gave Veo 2 (I couldn't try Veo 3 because it takes longer to generate) a challenging prompt: "generate a video of a Mets player hitting a home run in comic style." In some ways, it missed the mark; one of my videos had a player with two heads, and none of them actually showed a home run being hit. But setting Veo's struggles aside, it was clear that Flow is a useful tool.
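Flow itself is a consumer tool, but the Veo models behind it are also exposed through the Gemini API as long-running operations. Here is a minimal sketch of a comparable Veo 2 request with the google-genai Python SDK; the prompt and generation parameters are illustrative, not what Flow sends under the hood.

```python
import time
from google import genai
from google.genai import types

client = genai.Client()  # assumes GEMINI_API_KEY is set in the environment

# Video generation runs as a long-running operation rather than a
# single request/response call.
operation = client.models.generate_videos(
    model="veo-2.0-generate-001",
    prompt="A baseball player hitting a home run, comic-book style",
    config=types.GenerateVideosConfig(number_of_videos=1, duration_seconds=8),
)

# Poll until the clip is ready (typically a minute or two for Veo 2).
while not operation.done:
    time.sleep(10)
    operation = client.operations.get(operation)

# Download and save the finished clip.
video = operation.response.generated_videos[0]
client.files.download(file=video.video)
video.video.save("home_run.mp4")
```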
The ability to edit, splice, and add to AI-generated videos is nothing short of a breakthrough for Google. The very nature of AI generation is that every creation is unique, and that's a bad thing if you're a storyteller using multiple clips to create a cohesive work. With Flow, Google seems to have solved that problem.
If you found the AI talk during the main keynote boring, I don't blame you. The word Gemini was spoken 95 times, and AI was uttered barely fewer, at 92. The cool thing about AI isn't what it can do, but how it can change the way you complete tasks and interact with your devices. So far, the demo experiences at Google I/O 2025 have done a solid job of showing attendees the how.