Everything posted by CodeCanyon
-
Google says it will commit up to $150 million to the consumer eyewear company Warby Parker to jointly develop AI-powered smart glasses based on Android XR, the companies said on Tuesday during Google I/O 2025. Google has already committed $75 million to Warby Parker’s product development and commercialization costs, the companies said in a press release. Google says it will invest an additional $75 million, taking an equity stake in Warby Parker, should the eyewear manufacturer meet certain milestones. At Google I/O 2025, Google also announced partnerships with several companies to develop smart glasses with Gemini AI and AR capabilities, including Samsung and Gentle Monster. Google seems to be taking a page out of Meta’s smart glasses strategy. Meta has found success partnering with and investing in the Ray-Ban maker EssilorLuxottica to develop its smart glasses. Part of the reason behind Ray-Ban Meta’s success is that the smart glasses have an attractive, familiar design and are sold in Ray-Ban stores. It seems likely that Google will have a similar relationship with Warby Parker, taking advantage of the eyewear company’s popular frame designs and perhaps its retail stores. In the press release, Warby Parker and Google said they intend to launch a series of products over time. Their first line of eyewear will launch “after 2025” and will incorporate multimodal AI in both prescription and non-prescription glasses.
-
Amanda Scales, the former xAI HR exec who helped lead billionaire Elon Musk’s Department of Government Efficiency initiative while working at the U.S. Office of Personnel Management, recently returned to xAI, according to The New York Times. Scales used to work on talent acquisition at xAI. Since April, she’s worked on the talent side of the company once again, per her LinkedIn profile. Previously, Scales was in human resources and talent at San Francisco-based VC firm Human Capital, as well as at Uber. Scales’ return to xAI comes as Musk has indicated that he plans to pull back from DOGE. The tech billionaire has made himself increasingly scarce in Washington, according to the Times. Last month, Musk told Wall Street analysts that he planned to spend less time on politics and more on his companies like Tesla. Sales at Tesla have suffered since Musk devoted more time and resources to DOGE. Last month, a group of employees published an open letter calling for his removal as CEO.
-
The first test of Tesla’s long-promised robotaxi service in Austin, Texas, next month will initially be limited to specific areas the company deems “the safest,” CEO Elon Musk told CNBC in an interview Tuesday. Tesla’s cars are “not going to take intersections unless we are highly confident [they’re] going to do well with that intersection, or it’ll just take a route around that intersection,” Musk said. “We’re going to be extremely paranoid about the deployment, as we should be. It would be foolish not to be.” Using a geofence represents a major strategy shift for Musk, who spent years claiming his company would be able to create a general-purpose self-driving solution that could be dropped into any location and work without human supervision. (“Geofence” is autonomous vehicle industry jargon meaning a vehicle is restricted to operating within a certain area.) Musk has claimed Tesla will attempt to launch similar trials of its robotaxi service in California and possibly other states later this year. Musk telegraphed the idea of using a geofence on Tesla’s first-quarter earnings call in April, though he did not explicitly say that was the approach the company would use. The CEO said at the time that it was “increasingly likely that there will be a localized parameter set” for its early robotaxi operations. As part of Tesla’s “paranoid” approach, Musk said Tuesday the company will have employees remotely monitor the initial fleet of around 10 Model Y SUVs equipped with the “unsupervised” version of its Full Self-Driving software. Musk also claimed those vehicles will drive without any safety operator inside. “I think it’s prudent for us to start with a small number, confirm that things are going well, and then scale it up proportionate to how well we see it’s doing,” he said. It is common practice for autonomous vehicle companies like Waymo to have an operations center staffed with people who monitor their robotaxis and provide remote guidance if needed. Waymo, which posted a blog on the topic in 2024, doesn’t take control of the vehicles, though. Instead, human employees primarily communicate through questions and answers with the self-driving system to give it proper context and help it problem-solve.
-
If you’ve been waiting for the right moment to showcase your newest innovation to a massive AI community, this is it! You have until this Friday, May 23 at 11:59 p.m. PT, to secure one of the few remaining exhibit tables at TechCrunch Sessions: AI and position your brand at the center of the conversation shaping the future of the AI industry. TC Sessions: AI is bringing together the brightest builders, boldest thinkers, and sharpest investors to UC Berkeley’s Zellerbach Hall on June 5. At a venue nestled right in the heart of the AI community, your company will get the chance to rub elbows with a crowd of investors and innovators looking for the next standout partner, which could be you! Benefits of exhibiting: Even if you’ve built something powerful, the market is noisy, and visibility is everything. Exhibiting gives you: Direct access to 1,000+ engaged AI decision-makers. Powerful positioning as a startup or company on the cutting edge. Face time with VCs, enterprise buyers, and press looking for what’s next. And beyond access to the Sessions: AI community, you get the following benefits: 6′ x 3′ exhibit space with a branded tabletop sign. 5 event passes for your team or guests. Company logo placement on the event website. Wi-Fi, chairs, and lead capture tools to keep networking friction-free. And much, much more. Deadline: Friday, May 23 at 11:59 p.m. PT. Don’t miss your golden opportunity to make a brand impact. This is your moment to go from being seen to being remembered. Exhibit at TC Sessions: AI and earn the attention your innovation deserves. Reserve your table now before the opportunity closes.
-
If you’re a founder looking to grow your startup, chances are you’re wrestling with more than just product or capital. Talent, scale, and smart execution are the real battlegrounds. That’s exactly what TechCrunch All Stage 2025 is built to address on July 15 at Boston’s SoWa Power Station. Rob Biederman, managing partner at Asymmetric Capital Partners and one of the sharpest minds in talent, tech, and scaling strategy, will share his insights in a roundtable session. This is THE place where you can ask him directly what it takes to scale. See many more top startup leaders taking the stage to share honest insights, hands-on strategies, and lessons learned in the trenches. Early Bird pricing is still available for TC All Stage, with Founder passes discounted by $210, Investor passes discounted by $200, and student passes available for just $99. Check out the best option for you and your team right here to learn how to secure VC funding, recruit the right early hires, manage founder finances, navigate the messy middle of growth at all stages of scaling, and more. What Biederman brings to TechCrunch All Stage: Simply put, he’s built solutions where most startups get stuck and is set on sharing those fixes with those in need. Before launching Asymmetric, Biederman co-founded Catalant Technologies, where he spent eight years as co-CEO, turning the company into the market leader for on-demand, high-skill talent. Today, Catalant powers how major companies deploy workforces, connecting them with more than 70,000 consultants and 1,000 boutique firms. He now serves as chairman of Catalant, is the co-author of “Reimagining Work,” and teaches scaling technology ventures at Harvard Business School, where he’s an executive fellow. In short, Biederman doesn’t just talk about scale — he teaches it, builds it, and funds it. With a background that includes private equity at Goldman Sachs and Bain Capital, and a Harvard MBA earned with Baker Scholar honors, Biederman brings both operational experience and investment discipline to every conversation. At TC All Stage, Biederman will break down what most founders overlook when it comes to scaling: how to evolve your thinking about talent, execution, and long-term growth. Join the event where founders go to grow: TC All Stage isn’t just another startup conference — it’s a strategy session for people building real companies. You’ll walk away with tools, frameworks, and stories from top operators who’ve scaled beyond the early-stage maze. And Biederman’s insights on hiring, leadership, and operational leverage could easily reshape how you think about growth. Join us in Boston on July 15. TC All Stage tickets at these low rates are going fast, and there is limited seating available in the sessions, so it’s time to get your ticket now and be in the room where seeds can scale and startups go IPO.
-
Looking to make a splash at TechCrunch All Stage 2025? Our Side Events initiative is a fantastic opportunity to engage with Boston’s tech community in a dynamic and memorable manner. Plus, we’ll assist in promoting your event at no cost to you! Submit your event here by June 10. Throw your own Side Event at TechCrunch All Stage 2025. What exactly are Side Events? Side Events give you a chance to grow your brand and network with 1,500 conference attendees and the local Boston tech community by hosting your own event as part of “All Stage Week” from July 13 to July 19. Whether it’s a mixer, a career showcase, or a thought-provoking panel, the choice is yours. There’s no application fee, so why not apply now? Applications are currently open until June 10 at 11:59 p.m. PT. We’ll be reviewing and approving applications on a rolling basis, so submit your event for consideration today! The sooner you’re approved, the sooner we can kickstart promotion for your event. Approved Side Events will benefit from complimentary promotion across TechCrunch.com and the All Stage 2025 website and will also be highlighted to All Stage 2025 attendees through diverse channels like emails, posts, and the agenda. There’s no cost to apply and no participation fee. However, hosts are responsible for all aspects of their event, including expenses, promotions, and operations. For detailed guidance, planning tips, and the nitty-gritty details, refer to our Side Events Guide. Embrace the opportunity to host at TechCrunch All Stage 2025 and prepare to elevate your brand, expand your network, and forge meaningful connections within the tech community. Apply to host a Side Event and let’s kick off the festivities! Don’t miss out on the Side Events, alongside the phenomenal All Stage programming. Why wait? Secure your pass now and enjoy savings of up to $210. Is your company interested in sponsoring or exhibiting at TechCrunch All Stage 2025? Reach out to our sponsorship sales team by completing this form.
-
Intel CEO Lip-Bu Tan continues to consider ways to help the semiconductor giant refocus on its core business. Intel is reportedly considering selling its networking and edge unit, according to Reuters, as Tan looks to shed business divisions that aren’t considered critical. This unit makes chips for telecom equipment and was responsible for $5.8 billion of revenue in 2024. Intel has started engaging with potential buyers but has not launched a formal sales process, Reuters reported. If Intel does end up pursuing a sale, it wouldn’t be surprising. Tan has made it clear that he wants the company to refocus on its core business units — PCs and data center chips. In March, at the company’s Intel Vision conference, Tan told Intel’s customers that it would spin off its non-core assets. TechCrunch has reached out to Intel for more information.
-
Google announced new features for Wear OS 6 at Google I/O 2025 today, giving the operating system a design makeover with Material 3 Expressive, the design language the company unveiled earlier this month. Google released the developer preview of Wear OS 6, which is based on Android 16, for testing on Tuesday. With Wear OS 6, Google is trying to make apps look more cohesive, so app tiles will adopt the default system font. Plus, the dynamic theming library will adjust your app or tile’s color to match the color scheme of the watch face on Pixel Watches. The core promise of the new design reference platform is to let developers build better customization into apps, along with seamless transitions. The company is releasing a design guideline for developers along with Figma design files. In addition, Google is releasing the Wear Compose Material 3 and brand-new Wear ProtoLayout libraries for developers, with extended color schemes, typography, and shapes for more dynamic-looking apps. To match the circular look of a Wear OS 6 watch, Google is introducing a three-slot tile layout with a title slot, a main content slot, and a bottom slot, which makes for a consistent look across different-sized screens. The company is also adding newly designed components, including buttons, progress indicators, and scroll indicators, to better suit circular watch displays. Developers can display collapsing components for scrolling in different ways. For instance, components can scale down as they near the top or bottom of a watch’s screen. Wear OS 6 adds ways to change the look of a watch face through animated state transitions from ambient to interactive, as well as photo watch faces. Plus, there is now a new API to build watch face marketplaces. Google is adding a CredentialManager API to Wear OS, starting with Pixel Watches on Wear OS 5.1, which will enable passkey authentication through the watch. Wear OS 5.1 on Pixel also gets richer media controls that allow users to fast-forward or rewind podcasts. What’s more, compatible apps will also get a new menu for playlists and controls, including shuffle, “like,” and repeat.
-
Google announced at Google I/O 2025 that it is adding multiple AI-powered features for online shoppers, including a new visual panel in Google Search’s AI Mode, personalized price-tracking notifications with agentic checkout, and virtual try-ons. The new shopping experience is rolling out in AI Mode, where shoppers can view product visuals and other AI-powered guidance that leverages product data. For instance, if you search for a travel bag, the results will show you a panel of listings and images matched to your tastes that you can easily scroll through. And if you want to narrow things down using a more specific query, such as “bags suitable for a trip to Portland, Oregon, in May,” AI Mode will run multiple simultaneous queries — which Google describes as a query “fan-out” — to figure out the best option for both long journeys and rainy weather. It will then update the panel of options to showcase those that are waterproof and those with easy access to pockets. As you continue to refine your query or add other specific filters for shopping, the visual panel on the right-hand side will update to show new options. These features will arrive in the U.S. in the months ahead, says Google. The company is also adding a new price-tracking feature with agentic checkout in the months ahead. Soon, consumers will be able to tap “track price” on any Google product listing. You’ll then be able to select a product, filtering for things like color and size, and specify the amount you want to spend on that item. Google will track the price and send you a notification when it matches what you were looking to spend. After confirming the details, you can opt to buy the item by tapping a new “buy for me” option. After you click the buy button, Google adds the item to the checkout cart on the merchant’s website and uses your Google Pay details to secure the purchase. This feature will initially launch with product listings in the U.S., Google says. Another new feature will allow shoppers to try on clothes virtually. While Google has offered similar virtual try-on tech before, the feature only involved showing items on a diverse range of models’ bodies. Now, the company will allow you to try on clothes on yourself, too. To do so, you’ll upload a full-length photo, taken in good lighting, in which you’re wearing fitted clothing. Google says it’s using a new diffusion model for fashion to understand the human body and how different materials fold and stretch on different people. Google’s virtual try-on feature is available to users in the U.S. through Google’s Search Labs starting today. Users who opt in will see a virtual try-on button next to product listings for shirts, pants, skirts, and dresses. You’ll also be able to share your look with friends or tap to shop similar styles. Google’s range of new AI shopping technology will challenge the work of other startups to various degrees.
Notably, companies like Thrive-backed Doji and Stellation Capital-backed Vybe are working on technology to make virtual try-on easy and fun for users. Meanwhile, startups like Daydream, Cherry, and Deft have used AI to solve for product discovery. Plus, general-purpose AI-powered chatbots like ChatGPT and Perplexity have also added shopping features in recent months.
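Mechanically, the query “fan-out” Google describes is just a broad query decomposed into narrower sub-queries that run concurrently before their results are merged. Here is a toy Python sketch of that idea; every name in it is hypothetical, and it illustrates the concept rather than Google’s actual implementation.

```python
import asyncio

async def run_subquery(subquery: str) -> list[str]:
    """Stand-in for one retrieval call against a product index."""
    await asyncio.sleep(0.1)  # simulate network/index latency
    return [f"result for '{subquery}'"]

async def fan_out(query: str, facets: list[str]) -> list[str]:
    # Derive one narrower query per facet and issue them all simultaneously.
    subqueries = [f"{query} {facet}" for facet in facets]
    batches = await asyncio.gather(*(run_subquery(q) for q in subqueries))
    # Merge the batches, de-duplicating while preserving order.
    seen: set[str] = set()
    merged: list[str] = []
    for batch in batches:
        for item in batch:
            if item not in seen:
                seen.add(item)
                merged.append(item)
    return merged

if __name__ == "__main__":
    hits = asyncio.run(fan_out("travel bag", ["waterproof", "easy-access pockets"]))
    print(hits)
```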
-
Google announced at Google I/O 2025 on Tuesday that it’s launching a Gemini integration in Chrome. The tech giant says the new integration will give users access to a new AI browsing assistant that will help them quickly understand the context of a page and get tasks done. Gemini in Chrome will be accessible through typing or talking with Gemini Live. You can start chatting with the AI assistant by clicking the Gemini icon in the top right corner of your Chrome window. At launch, the integration will allow users to ask Gemini to clarify complex information on a page they’re visiting. It will also be able to summarize information. For example, you could open up a page that features a banana bread recipe and ask Gemini to make the recipe gluten-free. Or you could use Gemini to help you pick out the perfect plant for your bedroom depending on the lighting conditions. Another use case could be asking Gemini to create a pop quiz based on the topics covered in the webpage you’re visiting. Starting Wednesday, Gemini in Chrome will begin rolling out on desktop to Google AI Pro and Google AI Ultra subscribers in the U.S. who use English as their Chrome language on Windows and macOS. It will also be available to Chrome Beta, Dev, and Canary users. The idea behind the feature is to give users easy access to Gemini right in Chrome, as Google is likely looking for ways to get people to use Gemini instead of OpenAI’s ChatGPT for these kinds of questions and summaries. In the future, Gemini in Chrome will be able to work across multiple tabs at once. This means you could get Gemini to do things like compare two different sleeping bags that you have open in separate tabs. Gemini in Chrome will also be able to navigate websites on your behalf. For example, you could ask Gemini to scroll to a specific portion of a recipe with a single command. Google says users can imagine Gemini in Chrome helping to automate more of their least favorite online tasks in the future, noting that it believes users will be able to turn 30-minute tasks into three-click journeys.
-
At Google I/O 2025, the tech giant unveiled new capabilities coming to NotebookLM, its AI-based note-taking and research assistant. Most notably, the company is launching Video Overviews. The company says users will soon be able to turn dense multimedia, such as raw notes, PDFs, and images, into digestible visual presentations. Since its launch, NotebookLM has been about helping users understand and interact with complex documents. With this new capability, NotebookLM will be taking a more visual approach to helping users understand different topics and ideas. NotebookLM has already taken an audio approach to helping users understand materials with Audio Overviews, a feature that gives users the ability to generate a podcast with AI virtual hosts based on documents they have shared with NotebookLM, such as course readings or legal briefs. Now, Google is rolling out more flexibility to Audio Overviews, as it’s letting users select the ideal length for their audio summaries. For example, you can choose to have an Audio Overview at the default length, or longer or shorter. The new features announced today come a day after Google officially released NotebookLM apps for Android and iOS. Up until now, NotebookLM has only been accessible via desktop. Google has now made the service available on the go. The apps feature background playback and offline support for Audio Overviews, along with support for dark mode. The apps also allow people to create new notebooks and view the ones they’ve already created. Plus, when you’re viewing a website, PDF, or YouTube video on your device, you can tap the share icon and select NotebookLM to add it as a new source. Users can also view sources that they have already uploaded in each of the notebooks.
-
Google is taking on Meta’s Ray-Ban Meta glasses by announcing new partnerships at Google I/O 2025 with Gentle Monster and Warby Parker to create smart glasses based on Android XR. Google launched the Android XR platform with Qualcomm and Samsung last year. While it didn’t talk about specific devices then, the latest announcement indicates the company wants to build multiple iterations of glasses and headsets with different partners. On Tuesday, the company also said it would expand its partnership with Samsung to XR glasses. It added that the two companies are developing a software and hardware reference platform for developers to build on. Samsung, Qualcomm, and Google announced a mixed reality project in 2023. Samsung subsequently unveiled a headset, Project Moohan, in late 2024. It’s set to debut later this year, Google says. At I/O, Google also showed off concept glasses based on the Android XR platform and augmented by its Gemini AI. The glasses have a camera, a microphone, and speakers, just like the Ray-Ban Meta glasses. But they also have an optional in-lens display for viewing notifications. Google demoed several use cases, such as messaging, turn-by-turn navigation, scheduling appointments, live language translation, and taking photos. The search giant said it is seeding units to select testers to gather feedback.
-
Less than one week left to save big on TechCrunch Disrupt 2025 passes! Disrupt 2025 prices increase on May 25 at 11:59 p.m. PT. Grab your pass now and: Save up to $900 on your ticket. Bring a friend, colleague, co-founder, or tech enthusiast for 90% off. The clock is ticking — lock in your massive savings here. Why attend TechCrunch Disrupt 2025? From October 27–29, join the ultimate gathering of startups, VCs, product leaders, and tech enthusiasts at Moscone West in San Francisco. It’s Disrupt’s 20th anniversary, and we’re doubling down on what matters: six main stages packed with tech and VC pioneers who’ll share next-gen insights; 250+ sessions with industry icons; 200+ expert-led discussions; Startup Battlefield 200, where selected pre-Series A startups pitch competitively live; epic networking with the 10,000+ decision-makers and investors attending; and 200+ innovations to explore in the Expo Hall. Speaker sneak peek: For 20 years, TechCrunch Disrupt has spotlighted the innovation that drives startups forward. In 2025, we’re going bigger. Join us to hear firsthand from the founders, executives, and investors leading the next wave of tech. Explore our initial speaker lineup — and check back on the speaker page as new pioneers are announced weekly. Adam Bain, 01 Advisors; Astro Teller, X, The Moonshot Factory; David Cramer, Sentry; David George, Andreessen Horowitz; Gale Wilkinson, VITALIZE Venture Capital; Nikola Todorovic, Wonder Dynamics, an Autodesk Company; Nirav Tolia, Nextdoor; Ryan Petersen, Flexport; Sangeen Zeb, GV (Google Ventures); Zeya Yang, IVP. Ryan Petersen of Flexport will take the Builders Stage at TechCrunch Disrupt 2025, taking place October 27–29 at Moscone West in San Francisco. Stay ahead of the curve and save big before time’s up. Don’t miss your chance to save big for Disrupt — prices jump after May 25 at 11:59 p.m. PT! Lock in up to $900 in savings, bring a colleague for 90% off, and experience one of the year’s top tech events. Register now to secure your spot and your savings.
-
Google on Tuesday announced a new AI-powered video tool geared toward filmmaking called Flow at its Google I/O 2025 developer conference. The company said it’s using a trio of its AI models — Veo for video generation, Imagen for image generation, and Gemini for text and prompting — to power the new tool. Flow follows in the footsteps of other tools by letting users import characters or scenes, or create those artifacts within the tool. Google itself launched a video generation tool called VideoFX last year under Google Labs; Flow, however, may be able to reach a wider user base. The new tool also offers features like camera controls, to change the angle of a camera or a view within the scene; a scene builder to edit or extend shots and direct the flow of the scene; as well as tools for asset management. Plus, the company is launching “Flow TV,” a curated stream of clips and content where others can see the exact prompts behind these videos to understand other users’ creative flows. Startups like Moonvalley, D-ID, Cheehoo, and Hedra have also been trying to create video solutions in similar realms, giving people access to certain filmmaking tools for creating AI-generated features. While Google’s models are present in a lot of these tools, Flow demonstrates that Google also wants to enter the application layer of AI video generation. Flow will initially be available to users in the U.S. on the Google AI Pro and new Google AI Ultra plans. Pro users can access 100 generations per month, while Ultra users will receive a higher, but currently unspecified, limit, along with access to Google’s latest video models.
-
Google announced at Google I/O 2025 that it is making the AI model that powers its experimental music production app, MusicFX DJ, available via an API. The model, Lyria RealTime, is now available in Google’s Gemini API and AI Studio platform. It can mix musical genres, change instruments, and alter the mood of AI-generated music, giving users control over the key, tempo, brightness, and other song characteristics. The launch of Lyria RealTime via API comes amid an explosion of AI-powered music apps. There’s Udio and Suno, as well as the recently debuted Riffusion, among many others. Many of these have proven controversial, particularly those driven by models trained on copyrighted music without permission. But Google pitches Lyria RealTime as a collaborative tool — one that can be used to interactively create, control, and even perform high-quality (48kHz stereo) songs. Google didn’t announce pricing for Lyria RealTime ahead of its API rollout.
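For developers, access presumably looks like Google’s other live API surfaces. The sketch below follows the shape of the Lyria RealTime preview documentation for the google-genai Python SDK; the model name, config fields, and live-music endpoint are preview-era details and should be treated as assumptions rather than a stable API.

```python
# Sketch of steering Lyria RealTime via the google-genai SDK's live-music
# endpoint. Model name and config fields follow Google's preview docs and
# are assumptions that may change before general availability.
import asyncio
from google import genai
from google.genai import types

client = genai.Client(api_key="YOUR_API_KEY", http_options={"api_version": "v1alpha"})

async def main() -> None:
    async with client.aio.live.music.connect(model="models/lyria-realtime-exp") as session:
        # Describe the music with weighted text prompts...
        await session.set_weighted_prompts(
            prompts=[types.WeightedPrompt(text="minimal techno", weight=1.0)]
        )
        # ...then set song-level characteristics such as tempo.
        await session.set_music_generation_config(
            config=types.LiveMusicGenerationConfig(bpm=120, temperature=1.0)
        )
        await session.play()  # 48kHz stereo audio chunks then arrive via session.receive()

asyncio.run(main())
```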
-
Google is rolling out a new image-generating AI model, Imagen 4, that the company claims delivers higher-quality results than its previous image generator, Imagen 3. Unveiled at Google I/O 2025 on Tuesday, Imagen 4 is capable of rendering “fine details” like fabrics, water droplets, and animal fur, Google says. The model can handle both photorealistic and abstract styles, creating images in a range of aspect ratios and at up to 2K resolution. “Imagen 4 is [a] huge step forward in quality,” Josh Woodward, who leads Google’s Labs group, said during a press briefing. “We’ve also [paid] a lot of attention and fixes around how it generates text and typography, so it’s wonderful for creating slides or invitations, or any other thing where you might need to blend imagery and text.” There’s no shortage of AI image generators out there, from ChatGPT’s viral tool to Midjourney’s V7. They’re all relatively sophisticated, customizable, and capable of creating high-quality AI artwork. So what makes Imagen 4 stand out from the crowd? According to Google, Imagen 4 is fast — faster than Imagen 3. And it’ll soon get faster still: in the near future, Google plans to release a variant of Imagen 4 that’s up to 10x quicker than Imagen 3. Imagen 4 is available as of this morning in the Gemini app, Google’s Whisk and Vertex AI platforms, and across Google Slides, Vids, Docs, and more in Google Workspace.
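Programmatic access to Imagen models goes through the Gemini API’s image generation method. Below is a hedged sketch using the google-genai Python SDK’s generate_images call, which exists today for Imagen 3; the Imagen 4 model ID shown is a placeholder assumption.

```python
# Hedged sketch: generating an image through the google-genai SDK.
# generate_images is the SDK's existing Imagen entry point; the Imagen 4
# model ID below is a placeholder and may differ from the real one.
from google import genai
from google.genai import types

client = genai.Client(api_key="YOUR_API_KEY")

response = client.models.generate_images(
    model="imagen-4.0-generate-preview",  # hypothetical ID; check the current model list
    prompt="close-up of water droplets on animal fur, photorealistic",
    config=types.GenerateImagesConfig(
        number_of_images=1,
        aspect_ratio="16:9",  # Imagen supports a range of aspect ratios
    ),
)

# Write the raw bytes of the first result to disk.
with open("sample.png", "wb") as f:
    f.write(response.generated_images[0].image.image_bytes)
```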
-
Google announced on Tuesday during Google I/O 2025 that Project Astra — the company’s low-latency, multimodal AI experience — will power an array of new experiences in Search, the Gemini AI app, and products from third-party developers. Most notably, Project Astra is powering a new Search Live feature in Google Search. When using AI Mode, Google’s AI-powered search feature, or Lens, the company’s visual search feature, users can click the “Live” button to ask questions about what they’re seeing through their smartphone’s camera. Project Astra streams live video and audio into an AI model and responds to users’ questions with little to no latency. First unveiled at Google I/O 2024 through a viral smart glasses demo, Project Astra was born out of Google DeepMind as a way to showcase nearly real-time, multimodal AI capabilities. Google now says it’s building those Project Astra glasses with partners including Samsung and Warby Parker, but the company doesn’t have a set launch date yet. What the company does have is a variety of Project Astra-powered features for consumers and developers. Google says Project Astra is powering a new feature in its Live API, a developer-facing endpoint that enables low-latency voice interactions with Gemini. Starting Tuesday, developers can build experiences that support audio and visual input and native audio output — much like Project Astra. Google says the updated Live API also has enhanced emotion detection, meaning the AI model will respond more appropriately, and includes thinking capabilities from Gemini’s reasoning AI models. In the Gemini app, Google says Project Astra’s real-time video and screen-sharing capabilities are coming to all users. While Project Astra already powers Gemini Live’s low-latency conversations, this visual input was previously reserved for paid subscribers. Google seems confident that Project Astra is the future for many of its products, one that can even power an entirely new product category: smart glasses. That may be true, but the smart glasses Google demoed last year still seem far from reality: the company has offered a few more details on what they will look like, yet it still hasn’t set a launch date.
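For a sense of what building on the Live API looks like, here is a minimal text-only sketch using the google-genai Python SDK. The session-based connect/receive flow is the SDK’s documented pattern; the model ID is an assumption, and the real-time audio and video streaming the article describes is omitted for brevity.

```python
# Minimal text-only Live API session with the google-genai SDK. The
# model ID is an assumption; real-time audio/video streaming is omitted.
import asyncio
from google import genai
from google.genai import types

client = genai.Client(api_key="YOUR_API_KEY")

async def main() -> None:
    config = {"response_modalities": ["TEXT"]}  # AUDIO output is also supported
    async with client.aio.live.connect(
        model="gemini-2.0-flash-live-001", config=config
    ) as session:
        await session.send_client_content(
            turns=types.Content(
                role="user",
                parts=[types.Part(text="In one sentence, what does low latency buy here?")],
            )
        )
        # Responses stream back incrementally rather than as one blob.
        async for message in session.receive():
            if message.text:
                print(message.text, end="")

asyncio.run(main())
```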
-
Google is launching a way to quickly check whether an image, video, audio file, or snippet of text was created using one of its AI tools. SynthID Detector, announced Tuesday at Google I/O 2025, is a verification portal that uses Google’s SynthID watermarking technology to help identify AI-generated content. Users can upload a file, and SynthID Detector will determine whether the whole sample — or just a part of it — is AI-created. The debut of SynthID Detector comes as AI-generated media floods the web. The number of deepfake videos alone skyrocketed 550% from 2019 to 2024, according to one estimate. Per The Times, of the 20 most-viewed posts on Facebook in the U.S. last fall, four were “obviously created by AI.” Of course, SynthID Detector has its limitations. It only detects media created with tools that use Google’s SynthID specification — mainly Google products. Microsoft has its own content watermarking technologies, as do Meta and OpenAI. SynthID also isn’t a perfect technology. Google admits that it can be circumvented, particularly in the case of text. To the first point, Google argues that its SynthID standard is already used at massive scale. According to the tech giant, more than 10 billion pieces of media have been watermarked with SynthID since it launched in 2023.
-
Google’s family of “open” AI models, Gemma, is growing. During Google I/O 2025 on Tuesday, Google took the wraps off Gemma 3n, a model designed to run “smoothly” on phones, laptops, and tablets. Available in preview starting Tuesday, Gemma 3n can handle audio, text, images, and videos, according to Google. Models efficient enough to run offline and without the need for computing in the cloud have gained steam in the AI community in recent years. Not only are they cheaper to use than large models, but they preserve privacy by eliminating the need to transfer data to a remote data center. In addition to Gemma 3n, Google is releasing MedGemma through its Health AI Developer Foundations program. According to the company, MedGemma is its most capable open model for analyzing health-related text and images. Also on the horizon is SignGemma, an open model to translate sign language into spoken-language text. Google says that SignGemma will enable developers to create new apps and integrations for deaf and hard-of-hearing users. Worth noting is that Gemma has been criticized for its custom, non-standard licensing terms, which some developers say have made using the models commercially a risky proposition. That hasn’t dissuaded developers from downloading Gemma models tens of millions of times collectively, however.
-
Google is upgrading its most capable Gemini AI models. On Tuesday at Google I/O 2025, the company announced Deep Think, an “enhanced” reasoning mode for its flagship Gemini 2.5 Pro model. Deep Think allows the model to consider multiple answers to questions before responding, boosting its performance on certain benchmarks. “[Deep Think] pushes model performance to its limits,” said Demis Hassabis, head of Google DeepMind, Google’s AI R&D org, during a press briefing. “It uses our latest cutting-edge research in thinking and reasoning, including parallel techniques.” Google was vague on the inner workings of Deep Think, but the technology could be similar to OpenAI’s o1-pro and upcoming o3-pro models, which likely use an engine to search for and synthesize the best solution to a given problem. Google says that Deep Think enabled Gemini 2.5 Pro to top LiveCodeBench, a challenging coding evaluation. Gemini 2.5 Pro Deep Think also beat OpenAI’s o3 on MMMU, a test of skills like perception and reasoning. Deep Think is available to “trusted testers” via the Gemini API as of this week. Google said that it’s taking additional time to conduct safety evaluations before rolling out Deep Think widely. Alongside Deep Think, Google has introduced an update to its budget-oriented Gemini 2.5 Flash model that allows it to perform better on tasks involving coding, multimodality, reasoning, and long context. The new 2.5 Flash, which is also more efficient than the version it replaces, is available for preview in Google’s AI Studio and Vertex AI platforms, as well as in the company’s Gemini apps. Google says the improved Gemini 2.5 Flash will become generally available to developers sometime in June. Lastly, Google is introducing a model called Gemini Diffusion, which the company claims is “very fast” — delivering output four to five times quicker than comparable models while rivaling the performance of models twice its size. Gemini Diffusion is available beginning today to “trusted testers.”
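Developers can already try 2.5 Flash-class preview models through the Gemini API. A minimal sketch with the google-genai Python SDK follows; the exact preview model ID is an assumption, and Deep Think itself is limited to trusted testers, so it isn’t shown.

```python
# Calling a Gemini 2.5 Flash preview model through the Gemini API with the
# google-genai SDK. The preview model ID is an assumption and may change.
from google import genai

client = genai.Client(api_key="YOUR_API_KEY")

response = client.models.generate_content(
    model="gemini-2.5-flash-preview-05-20",  # hypothetical preview ID
    contents="In two sentences, explain the trade-off of extra 'thinking' at inference time.",
)
print(response.text)
```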
-
Google is launching a new subscription plan called Google AI Ultra to bring more of its AI products under one roof. The new plan, announced at Google I/O 2025 on Tuesday, delivers the “highest level of access” to Google’s AI-powered apps and services, the tech giant says. Priced at $249.99 per month, AI Ultra includes Google’s Veo 3 video generator, the company’s new Flow video editing app, and a powerful AI capability called Gemini 2.5 Pro Deep Think mode (which hasn’t launched yet). “[Ultra is] for people that want to be on the absolute cutting edge of AI from Google,” Josh Woodward, VP of Google Labs and Gemini, said during a press briefing. AI Ultra, which is U.S.-only for now, joins a growing group of ultra-premium AI subscriptions. Late last year, OpenAI unveiled ChatGPT Pro, a $200-per-month plan with increased ChatGPT rate limits and certain exclusive capabilities. Anthropic followed suit a few months later with Claude Max, which also costs up to $200 per month. Google hopes to sweeten the pot by throwing in lots of extras. In addition to Flow, Veo 3, and Gemini 2.5 Pro Deep Think, AI Ultra comes with higher limits in Google’s NotebookLM platform and Whisk, the company’s image remixing app. Subscribers to AI Ultra also get access to Google’s Gemini chatbot in Chrome, certain “agentic” tools powered by the company’s Project Mariner tech, YouTube Premium, and 30TB of storage across Google Drive, Google Photos, and Gmail. One of those agentic tools is Agent Mode, which will arrive on desktop “soon.” Google says that Agent Mode will be able to browse the web, perform research, and integrate with Google apps to handle specific tasks. Coinciding with the debut of AI Ultra, Google is replacing its old Google One AI Premium plan with Google AI Pro. AI Pro will include products like Flow, NotebookLM, and the Gemini app formerly known as Gemini Advanced, all with special features and higher rate limits. AI Pro subscribers also get early access to Gemini in Chrome, as well as real-time speech translation in Google Meet between English and Spanish (with additional languages to come). Speech translation in Google Meet, which is also available to AI Ultra customers, translates spoken words into a listener’s preferred language while preserving the speaker’s voice, tone, and expression.
-
Google announced at Google I/O 2025 that it is rebranding Project Starline, its corporate-focused teleconferencing platform that uses 3D imaging, and recommitting to shipping it this year. Starline, now called Google Beam, will come to “early customers” like Deloitte, Salesforce, Citadel, NEC, and Duolingo later in 2025 via Google’s previously announced partnership with HP, Google said. When Beam launches, it’ll integrate with Google Meet and other popular videoconferencing services, like Zoom, the company said. Beam uses a combination of software and hardware, including a six-camera array and a custom light field display, to let a user converse with someone as if they were in the same meeting room. An AI model converts video from the cameras, which are positioned at different angles and pointed toward the user, into a 3D rendering. Google claims that Beam is capable of “near-perfect” millimeter-level head tracking and 60-frames-per-second video streaming. With Google Meet, Beam also offers an AI-powered real-time speech translation mode that maintains the voice, tone, and expressions of the original speaker. “The result [is that Beam is] a very natural and a deeply immersive conversational experience,” Google CEO Sundar Pichai said during a press briefing. The question is, with many businesses transitioning to fully in-office setups post-pandemic, will there be much demand for Beam, which initially seemed aimed mainly at hybrid offices that frequently conference with remote workers? Research has failed to draw definitive conclusions about remote workers’ productivity, but the perception among many in senior management — especially in tech — is that work-from-home is something of a failed experiment. That said, some customers may be able to justify Beam for office-to-office virtual conferences alone. In 2023, Google claimed that around 100 companies, including WeWork and T-Mobile, were testing prototype versions of the tech. Google said Tuesday it’s working with channel partners such as Diversified and AVI-SPL, as well as Zoom, to bring Beam to organizations “worldwide.”
-
Google’s Gemini AI app now has more than 400 million monthly active users, CEO Sundar Pichai said during a press briefing ahead of Google I/O 2025. Google’s AI chatbot app is now approaching a similar scale to OpenAI’s ChatGPT app. According to recent court filings, Google estimated in March that ChatGPT had around 600 million MAUs, whereas Gemini had 350 million. The rise of ChatGPT presents a significant threat to Google’s Search business, offering users a new, more interactive way to access information on the web. The Gemini app is Google’s most direct challenge to OpenAI in the chatbot era, and it seems to be working out so far — the app appears to be successfully pulling users away from ChatGPT. In recent months, Google has shaken up the ranks behind Gemini. The Google leader behind the viral NotebookLM app, Josh Woodward, is now in charge of Gemini, part of an effort to generate some buzz around Google’s AI chatbot. Of course, the Gemini app is just one way Google puts its AI in front of users. Pichai also said during the call that Google’s AI Overviews now reach more than 1.5 billion users every month. The company also announced during Google I/O 2025 that it’s putting AI Mode in front of more users as it tries to update Search with more conversational experiences powered by generative AI. While OpenAI and Google have the most widely used AI chatbot apps, Meta is trying to break into the space as well. CEO Mark Zuckerberg recently said Meta’s AI products have more than a billion monthly active users across Facebook, Instagram, and WhatsApp, and the company recently launched an AI chatbot app to compete with ChatGPT and Gemini. While the ChatGPT app was the only game in town a few years ago, it now has a healthy dose of competition from Big Tech’s largest players.
-
At Google I/O 2025, the company unveiled a slew of new AI Workspace features coming to Gmail, Google Docs, and Google Vids. Most notably, Gmail is getting personalized smart replies and a new inbox-cleaning feature, while Vids is getting new ways to create and edit content. Personalized smart replies in Gmail will help you draft emails that match your context and tone. The feature will pull details from your past emails and Google Drive to provide response suggestions with relevant information, which Google says will eliminate the need to dig through your inbox and files yourself. Personalized smart replies will also adapt to your tone, whether it’s formal or conversational. As for the new inbox cleanup feature, Gemini can now help you delete or archive emails you no longer need. For example, you can tell Gemini to “Delete all of my unread emails from The Groomed Paw from last year.” Gmail is also getting a new feature designed to help you quickly schedule appointments and meetings with people outside of your organization. With this feature, you can easily offer times for customers or clients to book a meeting or appointment with you. Gemini will detect when you’re trying to set up a meeting and surface the new feature. Google says the new capability will reduce the time and effort spent coordinating schedules. All of these new Gmail features will be generally available in a few months. Over on Docs, you can now link decks, data, and reports directly into a Google Doc, and Gemini will pull only from these sources when providing writing assistance. Google says this will keep suggestions focused on trusted content, so whether you’re working on a research summary or a business plan, you are writing with the correct and relevant context. This feature is generally available starting today. Google also announced that Google Vids is getting the ability to turn existing Google Slides into videos. With this feature, you could turn a sales deck or a quarterly business review presentation into a video, and Gemini can help generate scripts, voiceovers, animations, and more. The feature will be generally available in a few months. Plus, for companies that don’t have the budget to film videos or the right spokesperson, Vids is launching AI avatars to deliver their message. You can write a script and then choose an avatar to present it for you in a polished video. Google says the new feature could be used to create videos for onboarding, announcements, product explainers, and more. AI avatars will be available in Google Labs next month. In addition, Vids is getting a new “transcript trim” tool that will automatically remove filler words, such as “um” and “ah,” from videos, and users will be able to adjust sound levels across an entire video with a new “balance sound” feature. Balance sound will be generally available next month, while transcript trim will be available in Labs in a few months.
Google also revealed that Imagen 4, its latest image-generating model, is coming to Workspace, allowing users to create more detailed visuals in Slides, Vids, Docs, and more.
-
Google announced several updates to the Gemini AI chatbot app during Google I/O 2025, including more broadly available multimodal AI features, updated AI models, and deeper integrations with Google’s suite of products. Starting Tuesday, Google is rolling out Gemini Live’s camera and screen-sharing capabilities to all users on iOS and Android. The feature, powered by Project Astra, allows users to have near-real-time verbal conversations with Gemini while simultaneously streaming video from their smartphone’s camera or screen to the AI model. For example, while walking around a new city, users could point their phone at a building and ask Gemini Live about the architecture or history behind it, and get answers with little to no delay. In the coming weeks, Google says Gemini Live will also start to integrate more deeply with its other apps. The company says Gemini Live will soon be able to offer directions from Google Maps, create events in Google Calendar, and make to-do lists with Google Tasks. The slew of updates to Gemini is part of the company’s effort to compete with OpenAI’s ChatGPT, Apple’s Siri, and other digital assistant providers. The rise of AI chatbots has given users a new way to interact with the internet and their devices, putting pressure on several Big Tech businesses, including Google Search and Google Assistant. Google announced during I/O 2025 that Gemini now has 400 million monthly active users, and the company surely hopes to grow that user base with these updates. Google introduced two new AI subscriptions: Google AI Pro, a rebrand of its $20-per-month Gemini Advanced plan, and Google AI Ultra, a $250-per-month plan that competes with ChatGPT Pro. The Ultra plan gives users very high rate limits, early access to new AI models, and exclusive access to certain features. U.S. subscribers to Pro and Ultra who have English selected as their language in Chrome will also get access to Gemini in their Chrome browser, Google announced Tuesday. The integration aims to let users ask Gemini to summarize information or answer questions about what appears on their screen. Google says it’s updating Deep Research, Gemini’s AI agent that generates thorough research reports, by allowing users to upload their own private PDFs and images. Deep Research will cross-reference these private PDFs with public data to create more personalized reports. Soon, the company says, users will be able to directly connect Drive and Gmail to Deep Research. Free users of Gemini are getting an updated AI image model, Imagen 4, which Google says delivers better text outputs. Subscribers to the company’s new $250-per-month AI Ultra plan will also get access to Google’s latest AI video model, Veo 3, which generates sound that corresponds to video scenes through native audio generation. Google is also updating the default model in Gemini to Gemini 2.5 Flash, which the company says will offer higher-quality responses with lower latency.
To cater to the growing number of students who use AI chatbots, Google says Gemini will now create personalized quizzes focused on areas that users find challenging. When users answer questions incorrectly, Gemini will create additional quizzes and action plans to strengthen those areas.