Google is rolling out a new image-generating AI model, Imagen 4, that the company claims delivers higher-quality results than its previous image generator, Imagen 3. Unveiled at Google I/O 2025 on Tuesday, Imagen 4 is capable of rendering “fine details” like fabrics, water droplets, and animal fur, Google says. The model can handle both photorealistic and abstract styles, creating images in a range of aspect ratios and up to 2K resolution. “Imagen 4 is [a] huge step forward in quality,” Josh Woodward, who leads Google’s Labs group, said during a press briefing. “We’ve also [paid] a lot of attention and fixes around how it generates text and typography, so it’s wonderful for creating slides or invitations, or any other thing where you might need to blend imagery and text.” There’s no shortage of AI image generators out there, from ChatGPT’s viral tool to Midjourney’s V7. They’re all relatively sophisticated, customizable, and capable of creating high-quality AI artwork. So what makes Imagen 4 stand out from the crowd? According to Google, Imagen 4 is fast — faster than Imagen 3. And it’ll soon get faster: in the near future, Google plans to release a variant of Imagen 4 that’s up to 10x quicker than Imagen 3. Imagen 4 is available as of this morning in the Gemini app, Google’s Whisk and Vertex AI platforms, and across Google Slides, Vids, Docs, and more in Google Workspace.
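For developers, Imagen models are exposed through the Gemini API and Vertex AI rather than just the consumer apps. Here is a rough sketch of what a call looks like through the google-genai Python SDK, with the caveat that the Imagen 4 model ID below is a placeholder; Google didn't publish exact identifiers in the announcement:

```python
# Hedged sketch: generating an image with an Imagen model via the
# google-genai Python SDK. The model ID is a placeholder; check
# Google's current model list for the real Imagen 4 identifier.
from google import genai
from google.genai import types

client = genai.Client()  # reads GOOGLE_API_KEY from the environment

result = client.models.generate_images(
    model="imagen-4.0-generate-001",  # placeholder model ID
    prompt="macro photo of water droplets on dark wool fabric",
    config=types.GenerateImagesConfig(
        number_of_images=1,
        aspect_ratio="16:9",  # Imagen supports a range of aspect ratios
    ),
)

# Each generated image carries raw bytes that can be written to disk.
with open("imagen_sample.png", "wb") as f:
    f.write(result.generated_images[0].image.image_bytes)
```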
-
Google announced on Tuesday during Google I/O 2025 that Project Astra — the company’s low-latency, multimodal AI experience — will power an array of new experiences in Search, the Gemini AI app, and products from third-party developers. Most notably, Project Astra is powering a new Search Live feature in Google Search. When using AI Mode, Google’s AI-powered search feature, or Lens, the company’s visual search feature, users can click the “Live” button to ask questions about what they’re seeing through their smartphone’s camera. Project Astra streams live video and audio into an AI model and answers users’ questions with little to no latency. First unveiled at Google I/O 2024 through a viral smart glasses demo, Project Astra was born out of Google DeepMind as a way to showcase nearly real-time, multimodal AI capabilities. Google now says it’s building those Project Astra glasses with partners including Samsung and Warby Parker, but the company doesn’t have a set launch date yet. What the company does have is a variety of Project Astra-powered features for consumers and developers. Google says Project Astra is powering a new feature in its Live API, a developer-facing endpoint that enables low-latency voice interactions with Gemini. Starting Tuesday, developers can build experiences that support audio and visual input, and native audio output — much like Project Astra. Google says the updated Live API also has enhanced emotion detection, meaning the AI model will respond more appropriately, and includes thinking capabilities from Gemini’s reasoning AI models. In the Gemini app, Google says Project Astra’s real-time video and screen-sharing capabilities are coming to all users. While Project Astra already powers Gemini Live’s low-latency conversations, this visual input was previously reserved for paid subscribers. Google seems confident that Project Astra is the future for many of its products, and can even power an entirely new product category: smart glasses. While that may be true, Google still hasn’t set a launch date for the Project Astra smart glasses it demoed last year. The company has offered a few more details on what those smart glasses will look like, but they still seem far from reality.
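Google didn't show Live API code on stage, but the google-genai Python SDK already exposes a Live surface that gives a sense of the session shape. The sketch below is an approximation: the model ID is a placeholder, and exact method names vary between SDK versions, so treat the published SDK reference as the authority.

```python
# Approximate sketch of a Live API session using the google-genai SDK.
# Model ID and method names are best-effort assumptions; verify them
# against the current SDK reference before relying on this.
import asyncio
from google import genai

client = genai.Client()  # reads GOOGLE_API_KEY from the environment

async def main():
    # Native audio output is also supported; TEXT keeps the demo simple.
    config = {"response_modalities": ["TEXT"]}
    async with client.aio.live.connect(
        model="gemini-2.0-flash-live-001",  # placeholder live-capable model
        config=config,
    ) as session:
        # Send a single user turn; a real app would also stream
        # microphone audio and camera frames into the session.
        await session.send_client_content(
            turns={"role": "user", "parts": [{"text": "Describe what you see."}]},
            turn_complete=True,
        )
        # Responses stream back incrementally, which is what keeps
        # perceived latency low.
        async for message in session.receive():
            if message.text:
                print(message.text, end="")

asyncio.run(main())
```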
-
Google is launching a way to quickly check whether an image, video, audio file, or snippet of text was created using one of its AI tools. SynthID Detector, announced Tuesday at Google I/O 2025, is a verification portal that uses Google’s SynthID watermarking technology to help identify AI-generated content. Users can upload a file, and SynthID Detector will determine whether the whole sample — or just a part of it — is AI-created. The debut of SynthID Detector comes as AI-generated media floods the web. The number of deepfake videos alone skyrocketed 550% from 2019 to 2024, according to one estimate. Per The Times, of the top 20 most-viewed posts on Facebook in the U.S. last fall, four were “obviously created by AI.” Of course, SynthID Detector has its limitations. It only detects media created with tools that use Google’s SynthID specification — mainly Google products. Microsoft has its own content watermarking technologies, as do Meta and OpenAI. SynthID also isn’t a perfect technology; Google admits that it can be circumvented, particularly in the case of text. To the first point, Google argues that its SynthID standard is already used at massive scale: according to the tech giant, more than 10 billion pieces of media have been watermarked with SynthID since it launched in 2023.
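The detector itself is a closed portal, but the text side of SynthID is open source and ships in Hugging Face's transformers library, which shows how the watermark is applied at generation time. A minimal sketch (the watermarking keys below are arbitrary example values, not anything Google uses):

```python
# Minimal sketch: applying a SynthID Text watermark at generation time
# with Hugging Face transformers. The keys are arbitrary example
# values; real deployments keep their keys secret.
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    SynthIDTextWatermarkingConfig,
)

model_id = "google/gemma-2-2b-it"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

watermark = SynthIDTextWatermarkingConfig(
    keys=[654, 400, 836, 123, 340, 443, 597, 160, 57, 29],
    ngram_len=5,  # the watermark is embedded over n-gram sampling decisions
)

inputs = tokenizer(
    "Write a short note about water conservation.", return_tensors="pt"
).to(model.device)
out = model.generate(
    **inputs,
    watermarking_config=watermark,
    do_sample=True,  # SynthID Text requires sampling-based decoding
    max_new_tokens=100,
)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```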
-
Google’s family of “open” AI models, Gemma, is growing. During Google I/O 2025 on Tuesday, Google took the wraps off Gemma 3n, a model designed to run “smoothly” on phones, laptops, and tablets. Available in preview starting Tuesday, Gemma 3n can handle audio, text, images, and videos, according to Google. Models efficient enough to run offline and without the need for computing in the cloud have gained steam in the AI community in recent years. Not only are they cheaper to use than large models, but they preserve privacy by eliminating the need to transfer data to a remote data center. In addition to Gemma 3n, Google is releasing MedGemma through its Health AI Developer Foundations program. According to the company, MedGemma is its most capable open model for analyzing health-related text and images. Also on the horizon is SignGemma, an open model to translate sign language into spoken-language text. Google says that SignGemma will enable developers to create new apps and integrations for deaf and hard-of-hearing users. Worth noting is that Gemma has been criticized for its custom, non-standard licensing terms, which some developers say have made using the models commercially a risky proposition. That hasn’t dissuaded developers from downloading Gemma models tens of millions of times collectively, however.
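Gemma 3n itself is only in preview, but the on-device workflow it targets is already familiar from earlier open Gemma checkpoints. As an illustration (using an existing Gemma 2 model ID, since Gemma 3n's identifiers may differ), a small instruction-tuned Gemma runs locally with Hugging Face transformers like this:

```python
# Illustrative sketch: running a small open Gemma checkpoint locally
# with Hugging Face transformers. Gemma 3n is in preview and will ship
# under its own model IDs; this uses an existing Gemma 2 model.
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="google/gemma-2-2b-it",  # small enough for a laptop GPU
    device_map="auto",
)

messages = [{
    "role": "user",
    "content": "In one sentence, why do on-device models preserve privacy?",
}]
out = pipe(messages, max_new_tokens=60)
print(out[0]["generated_text"][-1]["content"])  # the assistant's reply
```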
-
Google is upgrading its most capable Gemini AI models. On Tuesday at Google I/O 2025, the company announced Deep Think, an “enhanced” reasoning mode for its flagship Gemini 2.5 Pro model. Deep Think allows the model to consider multiple answers to questions before responding, boosting its performance on certain benchmarks. “[Deep Think] pushes model performance to its limits,” said Demis Hassabis, head of Google DeepMind, Google’s AI R&D org, during a press briefing. “It uses our latest cutting-edge research in thinking and reasoning, including parallel techniques.” Google was vague on the inner workings of Deep Think, but the technology could be similar to OpenAI’s o1-pro and upcoming o3-pro models, which likely use an engine to search for and synthesize the best solution to a given problem. Google says that Deep Think enabled Gemini 2.5 Pro to top LiveCodeBench, a challenging coding evaluation. Gemini 2.5 Pro Deep Think also beat OpenAI’s o3 on MMMU, a test for skills like perception and reasoning. Deep Think is available to “trusted testers” via the Gemini API as of this week. Google said that it’s taking additional time to conduct safety evaluations before rolling out Deep Think widely. Alongside Deep Think, Google has introduced an update to its budget-oriented Gemini 2.5 Flash model that allows the model to perform better on tasks involving coding, multimodality, reasoning, and long context. The new 2.5 Flash, which is also more efficient than the version it replaces, is available for preview in Google’s AI Studio and Vertex AI platforms as well as the company’s Gemini apps. Google says that the improved Gemini 2.5 Flash will become generally available for developers sometime in June. Lastly, Google is introducing a model called Gemini Diffusion, which the company claims is “very fast” — delivering output 4-5 times quicker than comparable models and rivaling the performance of models twice its size. Gemini Diffusion is available beginning today to “trusted testers.”
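Deep Think is gated to trusted testers, but the Gemini API already exposes a tunable "thinking" control on the 2.5 models, which gives a flavor of how reasoning depth is dialed up or down. A sketch via the google-genai SDK; whether Deep Think will surface as a similar configuration knob is an open question:

```python
# Sketch: controlling reasoning effort on a Gemini 2.5 model through
# the google-genai SDK's thinking config. Deep Think itself is limited
# to trusted testers, and how it will appear in the API is unknown.
from google import genai
from google.genai import types

client = genai.Client()  # reads GOOGLE_API_KEY from the environment

response = client.models.generate_content(
    model="gemini-2.5-flash",
    contents=(
        "A bat and a ball cost $1.10 in total. The bat costs $1.00 "
        "more than the ball. How much does the ball cost?"
    ),
    config=types.GenerateContentConfig(
        # A larger budget lets the model spend more tokens reasoning
        # before answering; a budget of 0 disables thinking on Flash.
        thinking_config=types.ThinkingConfig(thinking_budget=1024),
    ),
)
print(response.text)
```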
-
Google is launching a new subscription plan called Google AI Ultra to bring more of its AI products under one roof. The new plan, announced at Google I/O 2025 on Tuesday, delivers the “highest level of access” to Google’s AI-powered apps and services, the tech giant says. Priced at $249.99 per month, AI Ultra includes Google’s Veo 3 video generator, the company’s new Flow video editing app, and a powerful AI capability called Gemini 2.5 Pro Deep Think mode (which hasn’t launched yet). “[Ultra is] for people that want to be on the absolute cutting edge of AI from Google,” Josh Woodward, VP of Google Labs and Gemini, said during a press briefing. AI Ultra, which is U.S.-only for now, joins a growing group of ultra-premium AI subscriptions. Late last year, OpenAI unveiled ChatGPT Pro, a $200-per-month plan with increased ChatGPT rate limits and certain exclusive capabilities. Anthropic followed suit a few months later with Claude Max, which also costs up to $200 per month. Google hopes to sweeten the pot by throwing in lots of extras. In addition to Flow, Veo 3, and Gemini 2.5 Pro Deep Think, AI Ultra comes with higher limits in Google’s NotebookLM platform and Whisk, the company’s image remixing app. Subscribers to AI Ultra also get access to Google’s Gemini chatbot in Chrome, certain “agentic” tools powered by the company’s Project Mariner tech, YouTube Premium, and 30TB of storage across Google Drive, Google Photos, and Gmail. One of those agentic tools is Agent Mode, which will arrive on desktop “soon.” Google says that Agent Mode will be able to browse the web, perform research, and integrate with Google apps to handle specific tasks. Coinciding with the debut of AI Ultra, Google is replacing its old Google One AI Premium plan with Google AI Pro. AI Pro will include products like Flow, NotebookLM, and the Gemini app formerly known as Gemini Advanced, all with special features and higher rate limits. AI Pro subscribers also get early access to Gemini in Chrome, as well as real-time speech translation in Google Meet in English and Spanish (with additional languages to come). Speech translation in Google Meet, which is also available for AI Ultra customers, translates spoken words into a listener’s preferred language while preserving the voice, tone, and expression.
-
Google announced at Google I/O 2025 that it is rebranding Project Starline, its corporate-focused teleconferencing platform that uses 3D imaging, and recommitting to shipping it this year. Starline, now called Google Beam, will come to “early customers” like Deloitte, Salesforce, Citadel, NEC, and Duolingo later in 2025 via Google’s previously announced partnership with HP, Google said. When Beam launches, it’ll integrate with Google Meet and other popular videoconferencing services, like Zoom, the company said. Beam uses a combination of software and hardware, including a six-camera array and custom light field display, to let a user converse with someone as if they were in the same meeting room. An AI model converts video from the cameras, which are positioned at different angles and pointed toward the user, into a 3D rendering. Google claims that Beam is capable of “near-perfect” millimeter-level head tracking and 60-frames-per-second video streaming. With Google Meet, Beam also offers an AI-powered real-time speech translation mode that maintains the voice, tone, and expressions of the original speaker. “The result [is that Beam is] a very natural and a deeply immersive conversational experience,” Google CEO Sundar Pichai said during a press briefing. The question is, with many businesses transitioning to fully in-office setups post-pandemic, will there be much demand for Beam, which initially seemed aimed mainly at hybrid offices that frequently conference with remote workers? Despite the fact that research has failed to draw definitive conclusions about remote workers’ productivity, the perception among many in senior management — especially in tech — is that work-from-home is something of a failed experiment. That said, some customers may be able to justify Beam for office-to-office virtual conferences alone. In 2023, Google claimed that around 100 companies, including WeWork and T-Mobile, were testing prototype versions of the tech. Google said Tuesday it’s working with channel partners such as Diversified and AVI-SPL, as well as Zoom, to bring Beam to organizations “worldwide.”
-
Google’s Gemini AI app now has more than 400 million monthly active users, CEO Sundar Pichai said during a press briefing ahead of Google I/O 2025. Google’s AI chatbot app is now approaching a similar scale to OpenAI’s ChatGPT app. According to recent court filings, Google estimated in March that ChatGPT had around 600 million MAUs, whereas Gemini had only 350 million. The rise of ChatGPT presents a significant threat to Google’s Search business, offering users a new, more interactive way to access information on the web. The Gemini app is Google’s most direct attempt to compete with OpenAI in the chatbot era, and so far it seems to be working — the app appears to be pulling users away from ChatGPT. In recent months, Google has shaken up the ranks behind Gemini. The Google leader behind the viral NotebookLM app, Josh Woodward, is now in charge of Gemini, part of an effort to generate some buzz around Google’s AI chatbot. Of course, the Gemini app is just one way Google puts its AI in front of users. Pichai also said during the call that Google’s AI Overviews now reach more than 1.5 billion users every month. The company also announced during Google I/O 2025 that it’s putting AI Mode in front of more users, as Google tries to update Search with more conversational experiences powered by generative AI. While OpenAI and Google have the most widely used AI chatbot apps, Meta is trying to break into the space as well. CEO Mark Zuckerberg recently said Meta’s AI products have more than a billion monthly active users across Facebook, Instagram, and WhatsApp, and the company recently launched an AI chatbot app to compete with ChatGPT and Gemini. While the ChatGPT app was the only game in town a few years ago, it now has a healthy dose of competition from Big Tech’s largest players.
-
At Google I/O 2025, the company unveiled a slew of new AI Workspace features coming to Gmail, Google Docs, and Google Vids. Most notably, Gmail is getting personalized smart replies and a new inbox-cleaning feature, while Vids is getting new ways to create and edit content. Personalized smart replies in Gmail will help you draft emails that match your context and tone. The feature will draw on details from your past emails and Google Drive to provide response suggestions with relevant information. Google says this eliminates the need to dig through your inbox and files yourself. Personalized smart replies will also adapt to your tone, whether it’s formal or conversational. As for the new inbox cleanup feature, Gemini can now help you delete or archive emails you no longer need. For example, you can tell Gemini to “Delete all of my unread emails from The Groomed Paw from last year.” Gmail is also getting a new feature designed to help you quickly schedule appointments and meetings with people outside of your organization. With this feature, you can easily offer times for customers or clients to book a meeting or appointment with you. Gemini will detect when you’re trying to set up a meeting and surface the new capability, which Google says will reduce the time and effort spent coordinating schedules. All of these new Gmail features will be generally available in a few months. Over on Docs, you can now link decks, data, and reports directly into a Google Doc, and Gemini will pull only from these sources when providing writing assistance. Google says this will keep suggestions focused on trusted content, so whether you’re working on a research summary or a business plan, you are writing with the correct and relevant context. This feature is generally available starting today. Google also announced that Google Vids is getting the ability to turn existing Google Slides into videos. With this feature, you could turn a sales deck or a quarterly business review presentation into a video, and Gemini can help generate scripts, voiceovers, animations, and more. The feature will be generally available in a few months. Plus, for companies that don’t have the budget to film videos or the right spokesperson, Vids is launching AI avatars that will deliver their message. You can write a script and then choose an avatar to present it for you in a polished video. Google says the new feature could be used to create videos for onboarding, announcements, product explainers, and more. AI avatars will be available in Google Labs next month. In addition, Vids is getting a new “transcript trim” tool that will automatically remove filler words, such as “um” and “ah,” from your videos. Users will also be able to adjust sound levels across their entire video with a new “balance sound” feature. Balance sound will be generally available next month, while transcript trim will be available in Labs in a few months.
Google also revealed that Workspace will get Imagen 4, its latest image-generating model, letting users create more detailed visuals in Slides, Vids, Docs, and more.
-
Google announced several updates to the Gemini AI chatbot app during Google I/O 2025, including more broadly available multimodal AI features, updated AI models, and deeper integrations with Google’s suite of products. Starting Tuesday, Google is rolling out Gemini Live’s camera and screen-sharing capabilities to all users on iOS and Android. The feature, powered by Project Astra, allows users to have near-real-time verbal conversations with Gemini while simultaneously streaming video from their smartphone’s camera or screen to the AI model. For example, while walking around a new city, users could point their phone at a building and ask Gemini Live about the architecture or history behind it, and get answers with little to no delay. In the coming weeks, Google says Gemini Live will also start to integrate more deeply with its other apps. The company says Gemini Live will soon be able to offer directions from Google Maps, create events in Google Calendar, and make to-do lists with Google Tasks. The slew of updates to Gemini is part of the company’s effort to compete with OpenAI’s ChatGPT, Apple’s Siri, and other digital assistant providers. The rise of AI chatbots has given users a new way to interact with the internet and their devices, putting pressure on several Big Tech businesses, including Google Search and Google Assistant. Google announced during I/O 2025 that Gemini now has 400 million monthly active users, and the company surely hopes to grow that user base with these updates. Google introduced two new AI subscriptions: Google AI Pro, a rebrand of its $20-per-month Gemini Advanced plan, and Google AI Ultra, a $250-per-month plan that competes with ChatGPT Pro. The Ultra plan gives users very high rate limits, early access to new AI models, and exclusive access to certain features. U.S. subscribers to Pro and Ultra who have English selected as their language in Chrome will also get access to Gemini in their Chrome browser, Google announced Tuesday. The integration aims to let users ask Gemini to summarize information or answer questions about what appears on their screen. Google says it’s updating Deep Research, Gemini’s AI agent that generates thorough research reports, by allowing users to upload their own private PDFs and images. Deep Research will cross-reference these private PDFs with public data to create more personalized reports. Soon, the company says, users will be able to integrate Drive and Gmail directly into Deep Research. Free users of Gemini are getting an updated AI image model, Imagen 4, which Google says delivers better text outputs. Subscribers to the company’s new $250-per-month AI Ultra plan will also get access to Google’s latest AI video model, Veo 3, which generates sound that corresponds to video scenes through native audio generation. Google is also updating the default model in Gemini to Gemini 2.5 Flash, which the company says will offer higher-quality responses with lower latency.
To cater to the growing number of students who use AI chatbots, Google says Gemini will now create personalized quizzes focused on areas that users find challenging. When users answer questions incorrectly, Gemini will help create additional quizzes and action plans to strengthen those areas.
-
Google announced during Google I/O 2025 that it’s rolling out Project Mariner, the company’s experimental AI agent that browses and uses websites, to more users and developers. Google also says it’s significantly updated how Project Mariner works, allowing the agent to take on nearly a dozen tasks at a time. U.S. subscribers to Google’s new $249.99-per-month AI Ultra plan will get access to Project Mariner, and the company says support for more countries is coming soon. Google also says it’s bringing Project Mariner’s capabilities to the Gemini API and Vertex AI, allowing developers to build out applications powered by the agent. First unveiled in late 2024, Project Mariner represents Google’s boldest effort yet to revamp how users interact with the internet through AI agents. At launch, Google Search leaders said they viewed Project Mariner as part of a fundamental user experience shift, in which people will delegate more tasks to an AI agent instead of visiting websites and completing those tasks themselves. For example, Project Mariner users can purchase tickets to a baseball game or buy groceries online without ever visiting a third-party website — they just chat with Google’s AI agent, and it visits websites and takes actions for them. Project Mariner competes with other web-browsing AI agents, such as OpenAI’s Operator, Amazon’s Nova Act, and Anthropic’s Computer Use. These tools are all in an experimental stage, and TechCrunch’s experience has shown the prototypes to be slow and prone to mistakes. However, Google says it’s taken feedback from early testers to improve Project Mariner’s capabilities. A Google spokesperson tells TechCrunch the company updated Project Mariner to run on virtual machines in the cloud, much like agents from OpenAI and Amazon. This means users can work on other projects while Project Mariner completes tasks in the background — Google says the new Project Mariner can handle up to 10 tasks simultaneously. This update makes Project Mariner significantly more useful compared to its predecessor, which ran in a user’s browser. As I noted in my initial review, Project Mariner’s early design meant users couldn’t use other tabs or apps on their desktop while the AI agent was working. That kind of defeated the purpose of an AI agent — it would work for you, but you couldn’t do anything else while it was working. In the coming months, Google says users will be able to access Project Mariner in AI Mode, the company’s AI-powered Google Search experience. When it launches, the feature will be limited to Search Labs, Google’s opt-in testing ground for search features. Google says it’s working with Ticketmaster, StubHub, Resy, and Vagaro to power some of these agentic flows. Separately today, Google unveiled an early demo of another agentic experience called “Agent Mode.” The company says this feature combines web browsing with research features and integrations with other Google apps.
Google says Ultra subscribers will gain access to Agent Mode on desktop soon. At this year’s I/O, Google finally seems willing to ship the agentic experiences it’s been talking about for years. Project Mariner, Agent Mode, and AI Mode all seem poised to change how users navigate the web, and how vendors interact with their customers online. Web-browsing agents have big implications for the internet economy, and Google seems ready to put these agents out into the world regardless.
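Google hasn't published how Mariner schedules its work, but the cloud-VM model it describes (many tasks running server-side while the user does something else) maps onto a familiar concurrency pattern. The sketch below is purely illustrative; every name in it is a stand-in rather than Mariner's actual API:

```python
# Purely illustrative: a cloud-side agent service juggling several
# browsing tasks at once while the user keeps working locally.
# run_browser_task is a stand-in for a real agent's navigate/click/
# type loop; nothing here is Project Mariner's actual interface.
import asyncio

MAX_CONCURRENT_TASKS = 10  # Google says Mariner handles up to 10 at once

async def run_browser_task(goal: str) -> str:
    # A real agent would drive a browser inside a cloud VM here.
    await asyncio.sleep(0.1)
    return f"done: {goal}"

async def main():
    sem = asyncio.Semaphore(MAX_CONCURRENT_TASKS)

    async def guarded(goal: str) -> str:
        async with sem:  # cap how many tasks run at the same time
            return await run_browser_task(goal)

    goals = [
        "buy two tickets to Saturday's baseball game",
        "order this week's groceries",
        "book a table for four on Friday",
    ]
    results = await asyncio.gather(*(guarded(g) for g in goals))
    print(results)

asyncio.run(main())
```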
-
Google is rolling out a series of upgrades to its Play Store to help Android app developers better market their software and services to consumers. Among the highlights are new tools for managing subscription apps, topic pages that let users explore a specific subject on the Play Store, a new feature that allows people to sample audio from developers’ apps, and a more flexible checkout experience that will make it easier for developers to sell add-ons, among other things. Google last week shared a number of Android-related announcements at a pre-show before the start of its annual developer conference, Google I/O. However, it saved the Play Store improvements for Tuesday’s keynote at Google I/O, highlighting their importance to Google’s bottom line. Today, tech giants like Apple and Google are facing a market where app developers have more choice in how they price and sell their mobile software, thanks to antitrust regulations, new laws, and recent court victories. As a result, Google has been working to make its own Android app marketplace more compelling for developers, who now collectively sell over a quarter-billion subscriptions. For starters, the company said it’s giving developers the ability to halt fully live app releases if the developer identifies a problem that needs to be quickly addressed. Another feature, initially only available in the U.S., will add new “topic browse” pages for media and entertainment on the Play Store, allowing users to connect with apps related to over 100,000 different shows and movies. For example, you could look up a favorite show, movie, or sports event and find out which apps you can use to stream it. (Plus, the existing “Where to Watch” feature that deep-links users to their subscribed apps will roll out to the U.K., Korea, Indonesia, and Mexico after it initially launched in the U.S. last year.) These pages will be accessible from multiple places within the Play Store, including the Apps Home page, store listing pages, and search. Developers will also be able to add a hero content carousel and a YouTube playlist carousel to app listings on the Play Store. For apps that have audio content, the Play Store will soon launch audio samples on the Apps Home page. (This feature is already live for Health & Wellness app developers in the U.S.; Google says in early tests, audio samples helped improve app installs by 3x.) Curated spaces, a feature launched last year to let Google Play users connect with their interests — like comics or soccer, for instance — will also roll out to more locations and categories this year. Google noted that the curated space for comics was fairly popular, reaching over 920,000 users in Japan per month. In the Play Console for managing apps, a new asset library will help developers organize their visual assets, including uploading them from Google Drive, tagging them, and cropping them for reuse. Other new metrics will offer insights into apps’ listing performance. New dedicated overview pages for testing and releasing software will arrive, as well as pages focused on monitoring and improving app releases. Both of these will include additional metrics and actionable advice for developers in a new “Take Action” section. Subscription management tools are getting an upgrade, too, with added support for multi-product checkout for subscriptions, which means developers will be able to sell subscription add-ons alongside their base subscriptions under a single payment schedule. For users, this leads to a simplified checkout experience, while also letting them better control their subscriptions when it comes time to upgrade or downgrade their add-ons. The Play Store will remind users of their subscription benefits in more places, including in the Subscription Center, in reminder emails, and during purchase and cancellation flows. The changes, which have already rolled out, are decreasing voluntary churn by 3%, Google claims. Developers will additionally be able to choose to offer a grace period of 30 days or an account hold of up to 60 days when a customer’s payment method declines, giving users time to fix the problem before their account is canceled. The Engage SDK, launched last year, has offered developers a way to send personalized content recommendations to users’ home screens on Android devices. Now, it’s adding support for more categories, like Travel, and is rolling out to more markets, including Brazil, India, Indonesia, Japan, and Mexico. Plus, content created with the SDK will be featured on the Play Store later this summer, in addition to existing spaces like Collections on users’ Android smartphones or the Entertainment Space on select Android tablets. Google notes that the Play Integrity API, which is designed to help combat emerging threats on the Play Store, has been enhanced with stronger abuse detection for all developers and device security update checks to safeguard an app’s more sensitive actions, like transfers or data access. It will also be able to detect if a device is being reused for abuse or repeated actions, even after a device reset. This latter feature will be offered in beta.
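Those grace-period and account-hold states are exactly what a developer's backend checks through the Google Play Developer API. As a hedged sketch (the package name, purchase token, and credentials file below are placeholders), the subscriptionsv2 endpoint reports which state a purchase is in:

```python
# Hedged sketch: checking a subscriber's state server-side with the
# Google Play Developer API (androidpublisher v3). Package name,
# purchase token, and the credentials file path are placeholders.
from google.oauth2 import service_account
from googleapiclient.discovery import build

creds = service_account.Credentials.from_service_account_file(
    "service-account.json",  # placeholder path
    scopes=["https://www.googleapis.com/auth/androidpublisher"],
)
play = build("androidpublisher", "v3", credentials=creds)

sub = play.purchases().subscriptionsv2().get(
    packageName="com.example.app",  # placeholder package
    token="PURCHASE_TOKEN",         # placeholder token from the client
).execute()

# subscriptionState distinguishes, among others, ACTIVE,
# IN_GRACE_PERIOD, ON_HOLD, and CANCELED, which is how a backend
# decides whether to keep serving a user whose payment failed.
print(sub["subscriptionState"])
```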
-
Google I/O 2025, Google’s biggest developer conference of the year, takes place Tuesday and Wednesday at the Shoreline Amphitheatre in Mountain View. We’re on the ground bringing you the latest updates from the event. I/O showcases product announcements from across Google’s portfolio. We’ve got plenty of news relating to Android, Chrome, Google Search, YouTube, and — of course — Google’s AI-powered chatbot, Gemini. Google hosted a separate event dedicated to Android updates: The Android Show. The company announced new ways to find lost Android phones and other items, additional device-level features for its Advanced Protection program, security tools to protect against scams and theft, and a new design language called Material 3 Expressive. Here are all the things announced at Google I/O 2025.
Google AI Ultra
Google AI Ultra (only in the U.S. for now) delivers the “highest level of access” to Google’s AI-powered apps and services, according to Google. It’s priced at $249.99 per month and includes Google’s Veo 3 video generator, the company’s new Flow video editing app, and a powerful AI capability called Gemini 2.5 Pro Deep Think mode, which hasn’t launched yet. AI Ultra comes with higher limits in Google’s NotebookLM platform and Whisk, the company’s image remixing app. AI Ultra subscribers also get access to Google’s Gemini chatbot in Chrome; some “agentic” tools powered by the company’s Project Mariner tech; YouTube Premium; and 30TB of storage across Google Drive, Google Photos, and Gmail.
Deep Think in Gemini 2.5 Pro
Deep Think is an “enhanced” reasoning mode for Google’s flagship Gemini 2.5 Pro model. It allows the model to consider multiple answers to questions before responding, boosting its performance on certain benchmarks. Google didn’t go into detail about how Deep Think works, but it could be similar to OpenAI’s o1-pro and upcoming o3-pro models, which likely use an engine to search for and synthesize the best solution to a given problem. Deep Think is available to “trusted testers” via the Gemini API. Google said that it’s taking additional time to conduct safety evaluations before rolling out Deep Think widely.
Veo 3 video-generating AI model
Google claims that Veo 3 can generate sound effects, background noises, and even dialogue to accompany the videos it creates. Veo 3 also improves upon its predecessor, Veo 2, in terms of the quality of footage it can generate, Google says. Veo 3 is available beginning Tuesday in Google’s Gemini chatbot app for subscribers to Google’s $249.99-per-month AI Ultra plan, where it can be prompted with text or an image.
Imagen 4 AI image generator
According to Google, Imagen 4 is fast — faster than Imagen 3. And it’ll soon get faster. In the near future, Google plans to release a variant of Imagen 4 that’s up to 10x quicker than Imagen 3. Imagen 4 is capable of rendering “fine details” like fabrics, water droplets, and animal fur, according to Google. It can handle both photorealistic and abstract styles, creating images in a range of aspect ratios and up to 2K resolution. Both Veo 3 and Imagen 4 will be used to power Flow, the company’s AI-powered video tool geared toward filmmaking.
Gemini app updates
Google announced that the Gemini apps have more than 400 million monthly active users. Gemini Live’s camera and screen-sharing capabilities will roll out this week to all users on iOS and Android. The feature, powered by Project Astra, lets people have near-real-time verbal conversations with Gemini, while also streaming video from their smartphone’s camera or screen to the AI model. Google says Gemini Live will also start to integrate more deeply with its other apps in the coming weeks: It will soon be able to offer directions from Google Maps, create events in Google Calendar, and make to-do lists with Google Tasks. Google says it’s updating Deep Research, Gemini’s AI agent that generates thorough research reports, by allowing users to upload their own private PDFs and images.
Stitch
Stitch is an AI-powered tool to help people design web and mobile app front ends by generating the necessary UI elements and code. Stitch can be prompted to create app UIs with a few words or even an image, providing HTML and CSS markup for the designs it generates. Stitch is a bit more limited in what it can do compared to some other vibe coding products, but there’s a fair amount of customization on offer. Google has also expanded access to Jules, its AI agent aimed at helping developers fix bugs in code. The tool helps developers understand complex code, create pull requests on GitHub, and handle certain backlog items and programming tasks.
Project Mariner
Project Mariner is Google’s experimental AI agent that browses and uses websites. Google says it has significantly updated how Project Mariner works, allowing the agent to take on nearly a dozen tasks at a time, and is now rolling it out to users. For example, Project Mariner users can purchase tickets to a baseball game or buy groceries online without ever visiting a third-party website. People can just chat with Google’s AI agent, and it visits websites and takes actions for them.
Project Astra
Google’s low-latency, multimodal AI experience, Project Astra, will power an array of new experiences in Search, the Gemini AI app, and products from third-party developers. Project Astra was born out of Google DeepMind as a way to showcase nearly real-time, multimodal AI capabilities. The company says it’s now building Project Astra glasses with partners including Samsung and Warby Parker, but it doesn’t have a set launch date yet.
AI Mode
Google is rolling out AI Mode, the experimental Google Search feature that lets people ask complex, multi-part questions via an AI interface, to users in the U.S. this week. AI Mode will support the use of complex data in sports and finance queries, and it will offer “try it on” options for apparel. Search Live, which is rolling out later this summer, will let you ask questions based on what your phone’s camera is seeing in real time. Gmail is the first app to be supported with personalized context.
Beam 3D teleconferencing
Beam, previously called Starline, uses a combination of software and hardware, including a six-camera array and custom light field display, to let a user converse with someone as if they were in the same meeting room. An AI model converts video from the cameras, which are positioned at different angles and pointed toward the user, into a 3D rendering. Google’s Beam boasts “near-perfect” millimeter-level head tracking and 60fps video streaming. When used with Google Meet, Beam provides an AI-powered real-time speech translation feature that preserves the original speaker’s voice, tone, and expressions. And speaking of Google Meet, Google announced that Meet is getting real-time speech translation.
More AI updates
Google is launching Gemini in Chrome, which will give people access to a new AI browsing assistant that will help them quickly understand the context of a page and get tasks done. Gemma 3n is a model designed to run “smoothly” on phones, laptops, and tablets. It’s available in preview starting Tuesday; it can handle audio, text, images, and videos, according to Google. The company also announced a ton of AI Workspace features coming to Gmail, Google Docs, and Google Vids. Most notably, Gmail is getting personalized smart replies and a new inbox-cleaning feature, while Vids is getting new ways to create and edit content. Video Overviews are coming to NotebookLM, and the company rolled out SynthID Detector, a verification portal that uses Google’s SynthID watermarking technology to help identify AI-generated content. Lyria RealTime, the AI model that powers its experimental music production app, is now available via an API.
Wear OS 6
Wear OS 6 brings a unified font to tiles for a cleaner app look, and Pixel Watches are getting dynamic theming that syncs app colors with watch faces. The core promise of the new design reference platform is to let developers build better customization in apps along with seamless transitions. The company is releasing a design guideline for developers along with Figma design files.
Google Play
Google is beefing up the Play Store for Android developers with fresh tools to handle subscriptions, topic pages so users can dive into specific interests, audio samples to give folks a sneak peek into app content, and a new checkout experience to make selling add-ons smoother. “Topic browse” pages for movies and shows (U.S. only for now) will connect users to apps tied to tons of shows and movies. Plus, developers are getting dedicated pages for testing and releases, and tools to keep an eye on and improve their app rollouts. Developers can also now halt live app releases if a critical problem pops up. Subscription management tools are also getting an upgrade with multi-product checkout. Devs will soon be able to offer subscription add-ons alongside main subscriptions, all under one payment.
Android Studio
Android Studio is integrating new AI features, including “Journeys,” an “agentic AI” capability that coincides with the release of the Gemini 2.5 Pro model, and an “Agent Mode” that will be able to handle more intricate development processes. The IDE is also getting an enhanced “crash insights” feature in the App Quality Insights panel, which, powered by Gemini, analyzes an app’s source code to identify potential causes of crashes and suggest fixes.
-
Google’s AI Mode, the experimental Google Search feature that lets users ask complex, multi-part questions via an AI interface, will roll out to everyone in the U.S. starting this week, the company announced at its annual developer conference, Google I/O 2025, on Tuesday. The feature builds on Google’s existing AI-powered search experience, AI Overviews, which displays AI-generated summaries at the top of the search results page. Launched last year, AI Overviews saw mixed results as Google’s AI offered questionable answers and advice, like a suggestion to use glue on pizza, among other things. However, Google claims AI Overviews is a success in terms of adoption, if not accuracy, as over 1.5 billion monthly users have used the feature. It will now exit Labs, expand to over 200 countries and territories, and become available in more than 40 languages, the company says. AI Mode, meanwhile, lets users ask complex questions and ask follow-ups. Initially available in Google’s Search Labs for testing, the feature arrived as other AI companies, like Perplexity and OpenAI, expanded into Google’s territory with web search features of their own. Worried about potentially ceding search market share to rivals, Google is pitching AI Mode as what the future of search will look like. As AI Mode rolls out more broadly, Google is touting some of its new capabilities, including Deep Search. While AI Mode takes a question and breaks it up into different subtopics to answer your query, Deep Search does so at scale: it can issue dozens or even hundreds of queries to provide your answer, which will also include links so you can dig into the research yourself. The result is a fully cited report generated in minutes, potentially saving you hours of research, Google says. The company suggested using the Deep Search feature for things like comparison shopping, whether that’s for a big-ticket home appliance or a summer camp for the kids. Another AI-powered shopping feature coming to AI Mode is a virtual “try it on” option for apparel, which uses an uploaded picture of yourself to generate an image of you wearing the item in question. The feature will have an understanding of 3D shapes, fabric types, and stretch, Google notes, and will begin rolling out in Search Labs today. In the months ahead, Google says it will offer a shopping tool for U.S. users that will purchase items on your behalf after the price hits a specific level. (You’ll still have to click “buy for me” to kick off this agent, however.) Both AI Overviews and AI Mode will now use a custom version of Gemini 2.5, and Google says that AI Mode’s capabilities will gradually roll out to AI Overviews over time.
AI Mode will also support the use of complex data in sports and finance queries, available through Labs sometime “soon.” This lets users ask complex questions — like “compare the Phillies’ and White Sox’s home game win percentages by year for the past five seasons.” The AI will search across multiple sources, put that data together in a single answer, and even create visualizations on the fly to help you better understand the data. Another feature leverages Project Mariner, Google’s agent that can interact with the web to take actions on your behalf. Initially available for queries involving restaurants, events, and other local services, it will save you time researching prices and availability across multiple sites to find the best option — like affordable concert tickets, for instance. Search Live, rolling out later this summer, will let you ask questions based on what your phone’s camera is seeing in real time. This goes beyond the visual search capabilities of Google Lens, as you can have an interactive back-and-forth conversation with the AI using both video and audio, similar to Google’s multimodal AI system, Project Astra. Search results will also be personalized based on your past searches and, if you opt in via a feature rolling out this summer, on your connected Google apps. For instance, if you connect your Gmail, Google could know about your travel dates from a booking confirmation email, then use that to recommend events in the city you’re visiting that will be taking place while you’re there. (Expecting some pushback over privacy concerns, Google notes that you can connect or disconnect your apps at any time.) Gmail is the first app to be supported with personalized context, the company notes.
-
Google’s latest video-generating AI model, Veo 3, can create audio to go along with the clips that it generates. On Tuesday during the Google I/O 2025 developer conference, Google unveiled Veo 3, which the company claims can generate sound effects, background noises, and even dialogue to accompany the videos it creates. Veo 3 also improves upon its predecessor, Veo 2, in terms of the quality of footage it can generate, Google says. Veo 3 is available beginning Tuesday in Google’s Gemini chatbot app for subscribers to Google’s $249.99-per-month AI Ultra plan, where it can be prompted with text or an image. “For the first time, we’re emerging from the silent era of video generation,” Demis Hassabis, the CEO of Google DeepMind, Google’s AI R&D division, said during a press briefing. “[You can give Veo 3] a prompt describing characters and an environment, and suggest dialogue with a description of how you want it to sound.” The wide availability of tools to build video generators has led to such an explosion of providers that the space is becoming saturated. Startups including Runway, Lightricks, Genmo, Pika, Higgsfield, Kling, and Luma, as well as tech giants such as OpenAI and Alibaba, are releasing models at a fast clip. In many cases, little distinguishes one model from another. Audio output stands to be a big differentiator for Veo 3, if Google can deliver on its promises. AI-powered sound-generating tools aren’t novel, nor are models that create video sound effects. But Veo 3 can uniquely understand the raw pixels of its videos and sync generated sounds with clips automatically, per Google. Here’s a sample clip from the model:
cooking up something tasty for tomorrow… pic.twitter.com/wyIRMsXkFG — Demis Hassabis (@demishassabis) May 19, 2025
Veo 3 was likely made possible by DeepMind’s earlier work in “video-to-audio” AI. Last June, DeepMind revealed that it was developing AI tech to generate soundtracks for videos by training a model on a combination of sounds and dialogue transcripts as well as video clips. DeepMind won’t say exactly where it sourced the content to train Veo 3, but YouTube is a strong possibility. Google owns YouTube, and DeepMind previously told TechCrunch that Google models like Veo “may” be trained on some YouTube material. To mitigate the risk of deepfakes, DeepMind says it’s using its proprietary watermarking technology, SynthID, to embed invisible markers into the frames Veo 3 generates. While companies like Google pitch Veo 3 as a powerful creative tool, many artists are understandably wary of it — such tools threaten to upend entire industries. A 2024 study commissioned by the Animation Guild, a union representing Hollywood animators and cartoonists, estimates that more than 100,000 U.S.-based film, television, and animation jobs will be disrupted by AI by 2026. Google also today rolled out new capabilities for Veo 2, including a feature that lets users give the model images of characters, scenes, objects, and styles for better consistency.
The latest Veo 2 can understand camera movements like rotations, dollies, and zooms, and it allows users to add or erase objects from videos or broaden the frames of clips to, for example, turn them from portrait into landscape. Google says that all of these new Veo 2 capabilities will come to its Vertex AI API platform in the coming weeks.
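For developers, Veo generation runs as a long-running operation rather than a single call. A hedged sketch against the google-genai Python SDK follows; the model ID is a placeholder (Veo 3's API identifier wasn't part of the announcement), and polling details may differ between SDK versions:

```python
# Hedged sketch: prompting a Veo model through the google-genai SDK.
# Video generation is a long-running operation that must be polled.
# The model ID is a placeholder; check Google's model list for Veo 3.
import time
from google import genai

client = genai.Client()  # reads GOOGLE_API_KEY from the environment

operation = client.models.generate_videos(
    model="veo-2.0-generate-001",  # placeholder; swap in the Veo 3 ID
    prompt=(
        "a barista pouring latte art in a sunlit cafe, close-up, "
        "with ambient chatter and the hiss of a steam wand"
    ),
)

while not operation.done:
    time.sleep(10)  # clips take a while to render
    operation = client.operations.get(operation)

video = operation.response.generated_videos[0]
client.files.download(file=video.video)
video.video.save("veo_sample.mp4")
```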
-
Android Studio adds ‘agentic AI’ with Journeys feature, Agent Mode
Android Studio, the integrated development environment (IDE) for Android app developers, is getting an AI upgrade, Google announced at its developer conference, Google I/O 2025, on Tuesday. In addition to the rollout of the latest Gemini 2.5 Pro model, Android Studio is gaining a new “agentic AI” capability called Journeys and will soon introduce an “Agent Mode” for more complex development tasks. Using Gemini, Journeys will let developers test their apps by describing actions and assertions in natural language for user journeys across the app; Gemini will then perform the tests for them. The feature, explains Google, will let developers test their apps more easily, without having to write extensive code to do so. The company cautioned that this is still experimental, but ultimately, the goal is to increase the speed of shipping high-quality code while reducing the time it takes to test, validate, or reproduce issues. The tests can run on physical or virtual Android devices, and their results will appear directly in the IDE, Google says. Another soon-to-arrive addition is an autonomous AI feature powered by Gemini called Agent Mode. This will allow developers to use various tools to handle more complex, multi-stage development tasks. For instance, if a developer is trying to integrate a new API, the agent may come up with an execution plan that adds the necessary dependencies, edits files, and fixes bugs. Other AI features coming to Android Studio include a Gemini-powered improvement to the App Quality Insights panel’s “crash insights” feature, which can now use AI to help determine what in an app’s source code may have caused a crash, and suggest a fix. Plus, Google will now allow developers to try out its still-experimental AI features through a new “Studio Labs” section in Android Studio’s Settings menu. This option will be available in stable releases only, starting with the release codenamed Narwhal. Another experiment now available is the public preview of Android Studio Cloud. Accessed through Firebase Studio, the new service streams a Linux machine running Android Studio to your web browser, enabling Android development anywhere you have an internet connection. A Version Upgrade Agent will soon arrive as part of Gemini in Android Studio to help automate dependency upgrades. Gemini will also help developers automatically generate Jetpack Compose preview code, transform UI code within the Compose Preview environment using natural language, attach image files (like UI mockups or screenshots) to AI prompts, attach project files as context in chats with Gemini, and set up preferred coding styles or output formats with a new “Rules in Gemini” feature. The company is also rolling out an enterprise-ready version of its AI-powered Android Studio with the launch of Gemini in Android Studio for businesses, which lets teams that subscribe to Gemini Code Assist in Standard or Enterprise editions deploy AI while keeping their data safe, Google says. Other updates include resizable previews in Compose Preview and navigation improvements, an embedded Android XR emulator that launches by default in the embedded state, and upgrades to Backup and Restore and Backup and Sync, among other things. Android’s Kotlin Multiplatform will also see a small handful of improvements.
In addition, Google says it will help developers prepare for Android’s 16KB page sizes — a change to Android’s underlying architecture — with early warnings and tools for testing apps in the new environment.
-
Google announced at Google I/O 2025 that it’s bringing real-time speech translation to Google Meet. The feature leverages a large language audio model from Google DeepMind to allow for a natural, free-flowing conversation with someone in a different language, Google says.

Speech translation in Meet translates spoken words into the listener’s preferred language in real time, preserving the speaker’s voice, tone, and expression in the translation.

The tech giant says the new feature has a variety of use cases. For instance, it can let English-speaking grandchildren talk to their Spanish-speaking grandparents, or let companies that operate across different regions connect global colleagues for real-time conversation.

The latency for speech translation is very low, according to Google, allowing multiple people to chat together, which the company says hasn’t been possible until now. When the person on the other side speaks, you will still faintly hear their original voice, with the translated speech overlaid on top.

Speech translation in Google Meet will begin rolling out to consumer AI subscribers in beta starting Tuesday. The feature will first be available in English and Spanish, with more languages coming in the next few weeks, including Italian, German, and Portuguese. Google says it’s building out speech translation in Meet for businesses, with early testing coming to Workspace customers this year.
-
At the Google I/O 2025 developer conference, Google launched Stitch, an AI-powered tool to help design web and mobile app front ends by generating the necessary UI elements and code. Stitch can be prompted to create app UIs with a few words or even an image, providing HTML and CSS markup for the designs it generates. Users can choose between Google’s Gemini 2.5 Pro and Gemini 2.5 Flash AI models to power Stitch’s code and interface ideation.

Stitch lets users choose between Gemini 2.5 Flash and Gemini 2.5 Pro models. Image Credits: Jagmeet Singh / TechCrunch

Stitch arrives as so-called vibe coding — programming using code-generating AI models — continues to grow in popularity. A number of large tech startups are going after the burgeoning market, including Cursor maker Anysphere, Cognition, and Windsurf. Just last week, OpenAI launched a new assistive coding service called Codex. And yesterday, during its Build 2025 kickoff, Microsoft rolled out a series of updates to its GitHub Copilot coding assistant.

Stitch is a bit more limited in what it can do compared to some other vibe coding products, but it offers a fair amount of customization. The tool supports exporting directly to Figma and can expose its code so it can be refined and worked on in an IDE. Stitch also lets users fine-tune any of the app design elements it generates.

In a demo with TechCrunch, Google product manager Kathy Korevec showed two projects created using Stitch: a responsive mobile UI design for an app for bookworms and a web dashboard for beekeeping. “[Stitch is] where you can come and get your initial iteration done, and then you can keep going from there,” said Korevec. “What we want to do is make it super, super easy and approachable for people to do that next level of design thinking or that next level of software building for them.”

Soon after I/O, Google plans to add a feature that’ll allow users to make changes to their UI designs by taking screenshots of the object they want to tweak and annotating it with the modifications they want, Korevec said. She added that while Stitch is reasonably powerful, it isn’t meant to be a full-fledged design platform like Figma or Adobe XD.

Stitch lacks the elements that could’ve made it a full-fledged design platform. Image Credits: Jagmeet Singh / TechCrunch

Alongside Stitch, Google has expanded access to Jules, its AI agent aimed at helping developers fix bugs in their code. Now in public beta, the tool helps developers understand complex code, create pull requests on GitHub, and handle certain backlog items and programming tasks.

In a separate demo, Korevec showed Jules upgrading a website running the deprecated Node.js version 16 to Node.js 22. Jules cloned the site’s codebase into a clean virtual machine and shared a “plan” for the upgrade, which Korevec was then prompted to approve. Once the upgrade was completed, Korevec asked Jules to verify that the website still worked correctly — which Jules did.
Jules currently uses Gemini 2.5 Pro, but Korevec told TechCrunch that users will be able to switch between different models in the future.
-
Elon Musk has spent decades building a universe of companies that has served as an incubator for up-and-coming engineers and a proving ground for his inner circle. That universe — an ecosystem of Silicon Valley tech titans, veterans of his companies like Tesla and SpaceX, and a crop of fresh-faced hackers and software engineers — has now collided with the U.S. federal government.

The dozens of individuals who work under, or advise, Musk and the Trump-ordered Department of Government Efficiency — individuals whom TechCrunch has identified or confirmed independently — reflect more than the billionaire’s proclivity to collect talent. They are a real-life illustration of Musk’s web-like reach in the tech industry.

TechCrunch set out to report on or confirm the individuals working as Musk representatives in the U.S. government. Importantly, we’ve sought to show the connections between them, and how and when they entered Musk’s orbit. Along the way, TechCrunch has made some new discoveries, including new details about DOGE and its workers and an xAI-powered chatbot on a DOGE-related website subdomain that is hosted on a Musk acolyte’s website.

Since publishing, TechCrunch has learned that DOGE cut cybersecurity staff at CISA, and confirmed several more Musk-linked DOGE associates, including a former VC who now works at the Social Security Administration. This article has been updated multiple times since it was first published February 18, 2025.

How we got here

TechCrunch interviewed people who have worked with Musk and DOGE staffers. We used public and open source data, such as historical internet records and chat logs, to confirm parts of our reporting. We also relied on services like the Internet Archive’s Wayback Machine to access archived copies of websites that are no longer online. Public information, such as court records, payment transactions, other media reports, and past TechCrunch reporting, was also used.

TechCrunch reached out to all those named for an opportunity to comment. For those whose contact information we did not have, TechCrunch contacted known representatives, including the Trump administration. When reached for comment, a White House spokesperson provided a statement to TechCrunch. (The spokesperson sent the email “on background,” but we are publishing it in full as we were given no opportunity to decline the terms.)

“DOGE is fulfilling President Trump’s commitment to making government more accountable, efficient, and, most importantly, restoring proper stewardship of the American taxpayer’s hard-earned dollars. Those leading this mission with Elon Musk are doing so in full compliance with federal law, appropriate security clearances, and as employees of the relevant agencies, not as outside advisors or entities. The ongoing operations of DOGE may be seen as disruptive by those entrenched in the federal bureaucracy, who resist change. While change can be uncomfortable, it is necessary and aligns with the mandate supported by more than 77 million American voters,” the statement read.
Inner circle: Elon Musk, Steve Davis, Nicole Hollander, Brian Bjelde, Amanda Scales, Branden Spikes
Senior figures: Jehn Balajadia, Riccardo Biasini, Amy Gleason, Michael Russo, Christopher Stanley
Worker bees: Akash Bobba, Edward Coristine, Scott Coulter, Marko Elez, Luke Farritor, Gautier ‘Cole’ Killian, Gavin Kliger, Tom Krause, Jeremy Lewin, Aram Moghaddassi, Nikhil Rajpal, Kyle Schutt, Ethan Shaotran, Thomas Shedd, Jordan Wick, Christopher Young
Aides: Marc Andreessen, George Cooper, Vinay Hiremath, Anthony Jancso, Michael Kratsios, Katie Miller

Scroll down to learn about the individuals in the DOGE universe, who have been broken down by type: Musk’s inner circle; senior figures; worker bees; and aides, some of whom are advising and recruiting for DOGE. If you know more about DOGE, contact TechCrunch securely.

Inner circle

Elon Musk
Role: DOGE Lead, Unpaid “Special Government Employee”

Gravitas. Obsession. Ambition. Risk-taker. Elon Musk observers and sycophants have pointed to these traits to explain his rise from a bright-eyed immigrant landing in the United States to one of the world’s richest and most powerful people, and now a right hand to President Donald Trump.

The overlooked secret sauce is Musk’s ability to get talented people to sign on to “the mission.” That there are multiple, overlapping, and evolving missions — saving the planet through sustainable transport and energy, solving traffic, making humanity multiplanetary, or protecting public conversation — doesn’t really matter. The biggest carrot was, and has always been, Musk’s “us versus them” framing, according to five sources who had lengthy stints at Tesla and spoke to TechCrunch on condition of anonymity.

In the past, the “them” might be local regulators, the press, or legacy automakers. Today, the people working closely with Musk to complete his next mission via the Department of Government Efficiency have a new “them” to battle: waste and bureaucrats. Musk has talked at length about government agencies that should be “deleted entirely.” Meanwhile, Musk’s companies have benefited from government contracts and incentives. His company SpaceX has been awarded more than $20 billion in contracts from NASA, the Department of Defense, and other federal agencies, according to data from USASpending.gov.

Musk is an unpaid special government employee, according to the White House. Per a filing on February 17, the Trump administration said Musk is a White House employee and a senior advisor to the president. According to the Trump administration’s executive order, DOGE is a temporary government organization whose authorities are set to expire in July 2026.

In April, Musk made public statements that he was backing away from DOGE to focus on his own companies, namely Tesla. And yet, Musk also said he may keep doing work with the department through the remainder of Trump’s second term.

Steve Davis
Role: Long-time Musk insider

Steve Davis is a long-time Musk confidant. He began working at SpaceX as one of its earliest employees in 2003 after earning a master’s degree in aerospace engineering from Stanford University, according to the Los Angeles Times. By 2016, Musk tapped Davis to run what at the time was his newest far-fetched idea: an underground transit play known as The Boring Company. On his watch, The Boring Company raised hundreds of millions of dollars and built a few short so-called “Tesla tunnels” in Las Vegas.
But the company has also ended up dropping its previously announced plans for tunnels in cities like Los Angeles and Chicago, according to The Wall Street Journal. Davis has a reputation as a relentless negotiator, and Musk brought him on board at Twitter to help with the takeover and subsequent slash-and-burn. Davis and his family reportedly slept in a makeshift bedroom at the company’s headquarters during this time. In a 2023 lawsuit against Twitter filed by former employees, the plaintiffs alleged that Davis said something to the effect of “we don’t have to follow those rules” in response to a request to get permits to install a bathroom for Musk.

Davis has served as a member of the board of advisors of the Atlas Society, a group centered around the philosophy of Ayn Rand, per Bloomberg. Davis is now helping Musk slash government headcount, which Rand once called the “worst part” of the “producers’ burden.”

Nicole Hollander
Role: X employee

Nicole Hollander garnered public attention in the aftermath of Elon Musk’s $44 billion acquisition of Twitter in early 2022. The George Washington University alum and former employee of real estate developer JGB Smith (per her LinkedIn profile) was part of the polarizing Twitter transition team that upended the company and slashed its workforce. As part of that transition, Hollander and Musk ally Steve Davis moved into the company’s headquarters with their infant, according to a civil lawsuit filed in 2023. The lawsuit was filed by several former Twitter employees against X Corp., the Musk entity that took over. The plaintiffs said in the complaint that Hollander was not employed by any of Musk’s companies at that time.

Hollander’s relationship with Davis — and her current employment at X — has kept her in Musk’s circle. Her role at DOGE doesn’t have an official title, at least one that is public. However, Wired reported in late January that Hollander has high-level access to federal agencies and an official government email address. Hollander is working at the GSA, per Wired, where she oversaw the accidental disclosure of a secret CIA facility, per the publication’s follow-up reporting.

Brian Bjelde
Role: Senior Advisor, Office of Personnel Management

Brian Bjelde is a true Elon Musk veteran, with 21 years at SpaceX, where he was employee No. 14 and continues to work today. While he started as an avionics engineer, Bjelde has spent the last decade running the company’s human resources department. In a 2014 Reddit “ask me anything,” Bjelde said that at SpaceX, “We try not to limit our thinking except by the limits imposed by physics.”

Several former employees sued SpaceX and Musk in 2024 alleging sexual harassment and a hostile work environment. According to the complaint, Bjelde once starred in a video for the space company where a staffer spanked him — an apparent attempt at tongue-in-cheek humor — and he was also involved in the firings of several employees who spoke up about the company’s culture in 2022.

Prior to SpaceX, Bjelde spent a year at NASA’s Jet Propulsion Laboratory after graduating from the University of Southern California with a master’s degree in astronautical engineering. At DOGE, Bjelde is reportedly a senior advisor at OPM.

Amanda Scales
Role: Chief of Staff, Office of Personnel Management

Amanda Scales doesn’t directly work for DOGE, but is now chief of staff at the Office of Personnel Management, the federal government’s main human resources department, according to an OPM memo.
Scales worked on talent acquisition at Musk’s AI company xAI until January 2025, according to her LinkedIn profile. She also worked in human resources and talent at San Francisco-based VC firm Human Capital, as well as at Uber. Scales graduated from the University of California, Davis in 2012 with degrees in psychology and economics, per her LinkedIn.

Branden Spikes
Role: Head of IT at X; Former DOGE Operative

Spikes has served as the head of IT at X since February 2025, according to his LinkedIn profile. Prior to that, he said in his X bio that he had “most recently” worked for DOGE. Spikes recently confirmed to journalist Brian Krebs that he worked for DOGE for two months in Washington, D.C. “to help save [the country] from certain bankruptcy.” Spikes is a longtime Musk insider, touting on his LinkedIn profile that he was the fourth hire at SpaceX and was among the first employees at PayPal. Spikes’ ex-wife is married to Musk’s cousin, Krebs reported.

Senior figures

Jehn Balajadia
Role: Long-time Musk assistant

Jehn Balajadia first joined the Musk ecosystem in 2017, according to her LinkedIn profile, which lists her title as operations coordinator at The Boring Company. But LinkedIn records don’t always reflect actual positions — and their evolution — in Musk’s world. In 2018, Balajadia took over the executive assistant to the office of the CEO position, a role that included managing all of Musk’s activities and often those of his family members, according to interactions between a TechCrunch reporter and SpaceX and Musk employees at the time.

Today, Balajadia has a role within DOGE. According to The New York Times, she is listed in the employee directory of the Education Department. The book “Breaking Twitter” claims Balajadia once told another Musk official that her job was to “take care” of him, and she reportedly often travels with Musk. When Musk took over Twitter, Balajadia was named chief of staff, and she was the one who delivered letters of dismissal to several Twitter executives, per Walter Isaacson’s book on Elon Musk. Prior to joining Tesla, Balajadia worked at Red Bull, NBCUniversal, and Walt Disney.

Riccardo Biasini
Role: Senior Advisor to the Director of the Office of Personnel Management

Riccardo Biasini entered Musk’s orbit in 2011, when he joined Tesla as an engineer after completing his master’s degree in automotive engineering at the University of Pisa, Italy the previous year. During his five years at Tesla, Biasini focused much of his attention on Autopilot, the company’s branded advanced driver assistance system, according to his own account outlined in a Medium post. He led development of Autopilot’s traffic-aware cruise control and other driver assistance features before taking responsibility for the architecture of controls, safety, and functional behavior of the electric propulsion system.

Biasini left Tesla and joined Comma.ai in 2016, where he developed the automated lateral and longitudinal controls for the startup’s first self-driving car system. He later became VP of quality and eventually was named CEO in 2018 after founder George Hotz stepped down from the leadership role. Biasini went back to work for Musk in 2019 as director of electrical and software engineering at The Boring Company.

At DOGE, Biasini is senior advisor to the director of the Office of Personnel Management, according to a lawsuit filed against the OPM in the U.S.
District Court for the District of Columbia, which provides his title as listed on an OPM document entitled “Privacy Impact Assessment for Government-Wide Email System.”

Amy Gleason
Role: Acting DOGE administrator

Amy Gleason is the acting DOGE administrator, according to the White House, making Gleason the official, albeit ostensible, head of the Department of Government Efficiency, even if Elon Musk is largely calling the shots. The White House confirmed Gleason’s position to TechCrunch.

Gleason previously served at the U.S. Digital Service — now DOGE — between October 2018 and December 2021, according to her LinkedIn profile, which TechCrunch has seen. Gleason also previously worked as chief product officer at Russell Street Ventures between December 2021 and November 2024.

Gleason, who was on vacation in Mexico when she learned of her appointment, reports to the White House chief of staff, Susie Wiles, per the executive order establishing DOGE. President Trump has continued to refer to Musk as heading and leading DOGE. Gleason has not made any public comments since her appointment, but was court-ordered to answer questions about DOGE’s role and whether it is a government agency subject to Freedom of Information laws.

Michael Russo
Role: Senior Advisor to the Commissioner and former Chief Information Officer, Social Security Administration

Michael Russo, a former technology executive, began serving as the chief information officer at the Social Security Administration in early February. As CIO, Russo oversaw the agency’s IT systems and information security. In late March, DOGE associate Scott Coulter replaced Russo as the SSA CIO, according to the agency’s website, and Russo began serving as a senior advisor to the Social Security Administration’s commissioner.

During his brief time as CIO, Russo, who is aligned with DOGE, reportedly quickly gave access to several DOGE staffers at the agency, including Akash Bobba and Coulter, among others, according to a lawsuit brought by unions representing workers and a person familiar with personnel matters. Several other DOGE staffers were listed as Russo’s direct reports in the department’s staff directory, the person familiar with the matter said.

Russo was previously the chief technology officer at ecommerce firm Shift4 and a senior director at cloud giant Oracle, which is headed by Larry Ellison, a close ally of President Trump.

Christopher Stanley
Role: Former DOGE staffer with an unspecified role at the White House

Stanley began working for Musk in October 2022, per his LinkedIn profile, when he was hired for the “core transition team” at Twitter after Musk’s takeover. Stanley can be seen at the time taking a widely seen selfie at Twitter’s headquarters with other people who were not fired and did not quit when Musk took control of Twitter.

Stanley served in an unspecified role at the White House, according to The New York Times, and Stanley himself has hinted at working in the Trump administration on his X account. Stanley “returned to the private sector” in February, per an OPM spokesperson.

On January 20, the day of Trump’s inauguration, Stanley posed next to two January 6 convicts, brothers Matthew and Andrew Valentin, who were pardoned by Trump. Stanley wrote in an X post that he was “boots on ground to ensure this was executed.” Trump’s Department of Justice liaison Paul Ingrassia wrote on X that the Valentin brothers were the first January 6 prisoners to be released.
Stanley currently serves as the head of security engineering at X and the principal security engineer at SpaceX, according to his website. On his LinkedIn, Stanley says he is also the chief information security officer at X Payments, a payment service that Musk has wanted to launch as part of his “everything app” aspiration for X. Before entering Musk’s orbit, Stanley had his own cybersecurity firm, named Stanley Networks, and worked as a contractor for the state of Kentucky. He also worked at Kentucky health provider Baptist Health, which includes hospitals and other facilities.

TechCrunch found an xAI-powered chatbot on a DOGE-related website subdomain on Stanley’s website, called the “Department of Government Efficiency AI Assistant,” which says it is “here to help government personnel like you identify and eliminate waste, improve efficiency, and streamline processes using a first principles approach.”

Worker bees at DOGE

Akash Bobba
Role: Expert, Office of Personnel Management

Akash Bobba is a DOGE engineer who is reportedly a student at the University of California, Berkeley, according to Wired. The New Jersey native graduated from high school in 2021, according to public records seen by TechCrunch. It’s unclear how his experiences brought him to DOGE, but he has had some interactions with the tech world. According to a since-deleted podcast with Aman Manazir, Bobba previously interned at Meta and Palantir. He also worked at Bridgewater Associates.

Bobba’s website as of February 2025 points to a specific point in a YouTube video, titled “How Tech Billionaires Plan to Destroy America,” in which Elon Musk says, “I’m not just MAGA. I’m dark gothic MAGA.”

Per Wired, Bobba is listed as an “expert” in internal OPM correspondence and reports directly to OPM’s chief of staff, Amanda Scales. According to a lawsuit filed by union workers and a person familiar with personnel matters, Bobba also has a presence at the Social Security Administration, where he worked under the agency’s chief information officer, Michael Russo. Bobba was sworn into his post “over the phone, contrary to standard practice,” per the lawsuit, implying Bobba worked for DOGE at least in part remotely.

Edward Coristine
Role: Special Government Employee

Edward Coristine, a former intern at Neuralink and now known by his infamous LinkedIn profile handle “bigballs,” is one of the core members of the DOGE team and, at age 19, the youngest-known Musk aide, TechCrunch has confirmed. Since arriving in Washington, D.C., Coristine has been actively involved in accessing federal systems at several government departments, including the Office of Personnel Management, the SBA, the GSA, USAID, the State Department, Homeland Security, and FEMA. Coristine reportedly also has physical access to buildings at the U.S. cybersecurity agency CISA.

Prior to DOGE, Coristine ran several companies under his name from his family home in New York, including DiamondCDN and Packetware, both of which offered forms of DDoS protection. Coristine also used to work for DDoS mitigation company Path Network until he was fired in June 2022 following an alleged “leaking of proprietary company information that coincided with his tenure,” Path CEO Marshal Webb told TechCrunch in an email.
Coristine said in a later Discord post under his handle “Rivage,” seen by TechCrunch and per other news reports, that he had done “nothing contractually wrong” in response to his firing. Around May 2024, Coristine went to work for Elon Musk’s Neuralink. Coristine is also a mechanical engineering and physics student at Northeastern University, and is expected to graduate in 2028.

Scott Coulter
Role: Chief Information Officer and former IT Specialist, Social Security Administration

Scott Coulter is a DOGE staffer who serves as the chief information officer at the Social Security Administration. Coulter was initially listed as an IT specialist in the Social Security Administration’s staff directory as of mid-February, according to a person familiar with the matter, and succeeded Michael Russo as CIO in March 2025.

Prior to working for DOGE, Coulter headed Cowbird Capital, a New York investment fund founded in 2018, which listed assets of around $171 million as of March 2024, per a regulatory filing. Both Coulter and Cowbird were named in court documents during the Twitter v. Musk lawsuit in 2022 during Musk’s takeover of Twitter, though it’s not clear why either was named. Coulter did not respond to a request for comment sent to his government email address.

Marko Elez
Role: Special Government Employee, U.S. Treasury

Since joining DOGE, Marko Elez has become a central figure in a legal battle over DOGE’s access to some of the federal government’s most sensitive systems. As a senior Treasury employee, Elez has access to the U.S. Treasury’s payment systems responsible for disbursing around $6 trillion in federal funds to Americans, such as Social Security checks and federal tax refunds.

Named in a lawsuit challenging DOGE’s access, Elez is a “special government employee” and reportedly had wide data access privileges to the department’s systems before that access was curtailed by a federal court. He works closely with Tom Krause, another DOGE staffer and senior Treasury employee. Per a February 11 court filing, Elez is the only DOGE staffer with access to payment systems.

Before government, the 25-year-old Rutgers University graduate worked at SpaceX, where he focused on vehicle telemetry, Starship, and satellite software, according to an archived copy of his website seen by TechCrunch. Elez later worked on search AI at Musk’s social media company X, per the same archived copy. Elez does not list any prior government experience.

On February 6, Elez briefly resigned from his position at DOGE, according to White House press secretary Karoline Leavitt, after The Wall Street Journal surfaced racist posts from Elez’s social media accounts. Elez returned to government after Musk posted a poll on X asking whether Elez should be rehired; he was reinstated at DOGE, a Washington Post reporter posted on February 18.

Luke Farritor
Role: Senior Advisor, DOGE

Luke Farritor is listed as a senior advisor in several U.S. government department employee directories, including the State Department, USAID, and the Department of Energy. He has also requested access to data held by Medicare and Medicaid, as well as the Consumer Financial Protection Bureau.

Before government, Farritor, 23, was a student at the University of Nebraska, Lincoln, and was well known for decoding the writings on ancient Roman scrolls, for which he won a $700,000 prize. Later, Farritor was among the 2024 Thiel Fellowship class, an annual award given by billionaire Peter Thiel.
An archived copy of Farritor’s website says he worked for Nat Friedman and Daniel Gross, whom he helped to “invest a large, multistage VC fund and help run AI Grant.” (Neither Friedman nor Gross responded to a request for comment.) Farritor worked as an intern at Elon Musk’s satellite internet company Starlink in mid-2022, then went on to work at SpaceX between May 2022 and July 2023, where he worked on “several mission-critical projects” leading up to Starship Flights 1 and 2, per his website.

Gautier ‘Cole’ Killian
Role: DOGE “Volunteer”; Federal Detailee

Gautier “Cole” Killian is described as a DOGE “volunteer” who was designated a “federal detailee” at the U.S. Environmental Protection Agency in early February. A federal detailee is a federal employee usually seconded from another government agency. Killian was a student at McGill University in Canada, where he studied math and computer science and was a member of McGill’s AI team between 2021 and 2022. His personal website was scrubbed from the internet in late 2024, according to his website’s public DNS records.

Gavin Kliger
Role: Special Advisor to the Director of the Office of Personnel Management

Gavin Kliger is an alum of the University of California, Berkeley, and works at Databricks. Kliger joined the DOGE team earlier in 2025. Kliger is listed as special advisor to the director of the OPM on his LinkedIn profile, per Reuters, though much of Kliger’s online life, including his X account, has since been scrubbed from the internet, including from the Wayback Machine, which archives copies of webpages in case they later become unavailable.

A copy of Kliger’s resume that TechCrunch has seen says he previously interned at Twitter in mid-2019. According to an email sent to USAID staff, Kliger also has a USAID email address, and he is one of the DOGE staffers now listed in the CFPB’s staff directory, according to the CFPB’s union.

ProPublica reported in May that Kliger had been advised by government ethics attorneys that he held stock in companies that federal employees are forbidden from owning, and as such could not take actions that would financially benefit him personally. Court records show Kliger participated in mass layoffs at the agency, including the terminations of the lawyers who had warned him about possible ethics violations.

Tom Krause
Role: Special Government Employee, U.S. Treasury; CEO, Cloud Software Group

Tom Krause is a special government employee and a senior DOGE staffer in the U.S. Treasury. He concurrently serves as the chief executive of Cloud Software Group, a private company that owns several tech firms, including remote access giant Citrix, a once-public outfit that went private through a series of deals.

Bloomberg reports that Krause eliminated jobs at Citrix that staff said were critical to the security of the company’s products, according to multiple employees both named and unnamed in the story. Cloud Software Group told Bloomberg it inherited weaknesses at Citrix and faced rising security threats across the industry, and that its cybersecurity has improved since the private equity buyout and meets or exceeds all industry standards.

Before becoming CEO of Cloud Software Group, Krause, 47, was an executive at Broadcom; prior to that, he ran a consultancy firm. Since working at the Treasury as one of Musk’s DOGE front-line staffers, Krause has worked closely with Marko Elez, another senior Treasury employee.
Politico reported in May, citing financial disclosures it had obtained, that Krause reported hundreds of thousands of dollars’ worth of shares in several financial, banking, and tech companies, including firms that provide services to the Treasury unit that Krause oversees.

Jeremy Lewin
Role: DOGE Staffer

Jeremy Lewin is a DOGE staffer assigned to the General Services Administration, which oversees the federal government’s massive procurement and logistics operations, Bloomberg reported. Lewin reportedly failed to gain access to a secure GSA area, resulting in a superior of his lobbying the CIA for a clearance. Lewin is a 27-year-old Harvard Law School graduate who recently worked at the same law firm, Munger, Tolles & Olson, as U.S. Second Lady Usha Vance, The Handbasket reported.

Aram Moghaddassi
Role: DOGE Operative

Moghaddassi is part of a DOGE team assigned to the U.S. Department of Labor and is one of several staffers that DOGE plans to install at the U.S. Treasury, per The New York Times. Moghaddassi has worked for at least three of Musk’s companies: X and Neuralink, according to multiple media reports, and a cached copy of Moghaddassi’s X account in 2023 said he also previously worked on AI at Tesla. Moghaddassi appears to be in his twenties. In 2019, he was a sophomore at the University of California, Berkeley, where he studied applied math and computer science, the Santa Fe Institute’s website says.

Nikhil Rajpal
Role: DOGE Staffer

Nikhil Rajpal studied computer science and history at the University of California, Berkeley, where he served as the president of the libertarian-leaning student political group Students for Liberty. According to archived snapshots of his website, Rajpal worked at Twitter from 2016 until some time before Musk’s acquisition. He may have first entered Musk’s orbit prior to this, reportedly doing work redesigning a Tesla console. On behalf of DOGE, Rajpal works at the National Oceanic and Atmospheric Administration and has a DOGE email address.

Kyle Schutt
Role: DOGE Technologist

Schutt is a technologist with longstanding links to Republican politics who was more recently linked to political operations by Elon Musk. Schutt reportedly has access to systems at FEMA. According to his since-deleted GitHub profile, which TechCrunch has seen, Schutt works at a company called Outburst Data. According to security researchers, Outburst Data hosts part of DOGE’s website and several other Musk-related sites, including his America PAC political fundraiser. TechCrunch has also seen the same DNS records, which reference a DOGE-named subdomain. On February 14, 404 Media reported a flaw that it said allows anyone to edit DOGE’s website.

Schutt also serves as the chief technology officer at Revv, an online fundraising platform that is widely used by the Republican Party, as well as co-founder of Virginia-based software company KAMM.

Ethan Shaotran
Role: DOGE Staffer

Ethan Shaotran, 22, a California native, is a DOGE staffer and also a Harvard University student in the class of 2025. Shaotran was first publicly linked to Musk in September 2024, when he was runner-up in a hackathon run by the billionaire’s AI company xAI. Shaotran was previously the founder of Energize.ai, though its website no longer loads.
He also developed several iPhone apps, including a Donald Trump-themed running game called “Donald Dash.” Shaotran reportedly has a working GSA email address and requested access to a decade’s worth of GSA data. Shaotran also has access to email systems at the Department of Education and access to the department’s back-end website. Shaotran was temporarily detailed to the Office of the Postmaster General at the U.S. Postal Service as of March 12, 2025, according to a Freedom of Information request.

Thomas Shedd
Role: Director of Technology Transformation Services, GSA; Chief Information Officer, Department of Labor

Thomas Shedd is a former Tesla engineer who now serves as the director of the General Services Administration’s Technology Transformation Services, or TTS, a unit known for designing and building digital services for the federal government. Since Shedd took charge of the unit, the GSA fired its 18F division of specialist technology consultants, who worked on the Internal Revenue Service’s free tax-filing system and other government projects. Several staffers also reportedly resigned after Shedd gained access to parts of Notify.gov, a system that sends mass text messages to the public during emergencies and contains the personal information of Americans who registered.

As of mid-March, Shedd was also tapped as the chief information officer at the Department of Labor, a role he holds concurrently with his position as GSA’s technology director. Shedd reportedly is looking to reduce the agency’s headcount by 30%.

Prior to working in the federal government, Shedd worked at Tesla for eight years, according to the GSA, where he worked on “building software that operates vehicle and battery factories.” It’s not clear what prior government service Shedd has, if any, but he has said he wants to run TTS like a “startup software company,” according to Wired magazine, including the use of AI to analyze government contracts.

Jordan Wick
Role: DOGE Staffer

Jordan Wick is a former Waymo software engineer who appears to have a DOGE email account associated with the Executive Office of the President, Wired reported. Wick is among the team that was given access to Consumer Financial Protection Bureau systems. Wick is also the co-founder of Y Combinator startup Intercept, according to YC’s website. An archived version of Wick’s website says that as of 2022, he had “recently” graduated with a master’s degree in engineering from MIT.

Christopher Young
Role: DOGE Staffer

Young is a DOGE staffer who works at the Consumer Financial Protection Bureau, per Bloomberg Law. Young is a “top Republican field operative” who was hired as Musk’s political advisor in 2024, The New York Times reported. Young has worked in Republican politics since at least 2007, according to his LinkedIn profile. ProPublica has since reported that Young earns as much as $1 million annually as a political adviser to Musk while also helping to dismantle the federal regulator and its consumer protection rules.

Aides and advisors

Marc Andreessen
Role: Unofficial Advisor to DOGE

Marc Andreessen, the co-founder of Silicon Valley VC firm Andreessen Horowitz, doesn’t formally work for DOGE but has acted as “a key networker for talent recruitment” at the agency, according to The Washington Post. Andreessen has jokingly referred to himself as an “unpaid intern” for DOGE as well.
George Cooper
Role: DOGE Recruiter

Cooper is a Palantir engineer who worked on DOGE’s recruiting efforts in late 2024, according to Wired. He graduated from Pennsylvania’s Lehigh University in 2019 with a bachelor’s degree in computer science and business, according to his LinkedIn profile. Cooper worked to hire other Palantirians to join DOGE because they are “the most exceptional people I know,” he wrote in a message seen by Wired.

Vinay Hiremath
Role: DOGE Recruiter

Hiremath, 32, is the co-founder of video recording startup Loom, which was sold to Atlassian in 2023 for $975 million. According to a blog post on his website titled “I am rich and have no idea what to do with my life,” Hiremath worked for DOGE in late 2024 for about a month, making hundreds of recruiting calls. He wrote that he was added to DOGE-tied Signal groups and “immediately put to work.” While Hiremath praised DOGE’s work as “extremely important,” he said he quit because he needed to focus on himself, calling off plans to move to Washington, D.C. and going to Hawaii instead.

Anthony Jancso
Role: DOGE Recruiter

Jancso is a former Palantir software engineer who also worked on DOGE’s recruitment efforts in late 2024, according to Wired. In 2023, Jancso co-founded Accelerate SF, an initiative that taps engineers to solve the city’s problems with AI. Jancso’s exact age isn’t public, but he graduated from University College London in 2021 with a bachelor’s degree in economics, according to his LinkedIn profile, seen by TechCrunch. Jancso was himself recruited to DOGE by Boring Company president Steve Davis.

Michael Kratsios
Role: DOGE Recruiter

Kratsios helped lead efforts to staff DOGE in late 2024, conducting interviews of prospective staff, Bloomberg reported. Kratsios was previously managing director of Scale AI and the chief technology officer of the United States during President Trump’s first term. He was also a principal at Thiel Capital, a VC firm founded by Peter Thiel, from 2014 to 2017, according to his LinkedIn.

Katie Miller
Role: DOGE Advisor & Spokesperson

Katie Miller is a Trump-appointed advisor to DOGE and has served as its spokesperson. Miller served in the first Trump administration and is the spouse of Trump’s deputy chief of staff, Stephen Miller. Miller also serves on a presidential advisory board related to intelligence matters.

If you work in the federal government or know more about DOGE and want to contact TechCrunch, securely get in touch. This article was originally published February 18, 2025, and will be updated regularly.
-
When you claim to have the world’s most powerful video intelligence platform, you probably know a thing or two about foundation models. We’re pleased to announce that Twelve Labs’ CEO, Jae Lee, will be joining us on the main stage at TechCrunch Sessions: AI, happening June 5 at UC Berkeley’s Zellerbach Hall.

For a limited time, we’re rolling back ticket prices to make sure everyone in the AI community can be there. Whether you lead, fund, build, or simply love AI, insights and meaningful connections shouldn’t come with barriers. Save up to $300 on your ticket, and get 50% off a second, so you can bring a friend, colleague, or partner along for the ride.

About Jae Lee’s session

With new, more powerful AI models launching seemingly every week, the pace of innovation is both thrilling and overwhelming. In this dynamic conversation, Logan Kilpatrick, Senior Product Manager at Google DeepMind; Jae Lee, CEO of Twelve Labs; and Danielle Perszyk, PhD, Cognitive Scientist and Member of Technical Staff at Amazon’s AGI SF Lab, will share firsthand insights from the front lines of AI development. Together, they’ll explore how startups can not only build on top of today’s leading foundation models but also adapt and scale in a rapidly evolving landscape. From choosing the right models to anticipating future shifts, this session will equip founders, builders, and product leaders with strategies to stay ahead, stay relevant, and seize the opportunities of the AI era.

Get the details on this session and check out all the AI industry leaders joining us on the TC Sessions: AI agenda page.

Get to know Lee

Jae Lee is the co-founder and CEO of Twelve Labs, a pioneering company building state-of-the-art multimodal foundation models that empower developers and enterprises to extract deep insights from complex video data. Under his leadership, Twelve Labs is redefining how machines understand video — unlocking new possibilities across industries from media to security to enterprise intelligence. You can learn more about the work Twelve Labs is doing here.

Before founding Twelve Labs, Jae served as lead data scientist at the Ministry of National Defense in Korea, where he applied machine learning to national-scale challenges. He also gained industry experience as a software engineering intern at both Amazon and Samsung, building a strong foundation at the intersection of AI, data, and scalable infrastructure. Jae holds a bachelor’s degree in computer science from UC Berkeley, making this event a meaningful return to the university where his journey in technology and entrepreneurship began.

Pocket your ticket savings at TC Sessions: AI

At TC Sessions: AI, Lee will draw on his deep expertise to co-lead a compelling discussion on how startups and developers can harness the power of foundation models.
Don’t miss this conversation at the forefront of AI innovation — reserve your spot now to save up to $300, and don’t forget to bring your +1 for an additional 50% discount.
-
Apple plans to release a new set of AI products and frameworks at its Worldwide Developers Conference (WWDC) this June, including tools that’ll let third-party developers create software using Apple’s AI models, per a Bloomberg report. Apple’s hope is that expanding its AI tech in this way will draw more attention — and users — as the company looks to catch up with its competitors in the AI space.

The new framework will let developers integrate Apple Intelligence across their apps, Bloomberg reports. The company plans to start by letting developers use its smaller models, according to the publication.

WWDC this year will also reportedly see Apple overhaul its operating systems across iPhone, iPad, and Mac. Apple is also set to release new device-specific capabilities, including one that helps manage battery life, and a new Health app — powered, of course, by AI (although the app reportedly won’t be ready until next year).
-
AI is evolving fast, and access to valuable insights and powerful networking shouldn’t be reserved for those who pay top dollar. We’re rolling back TechCrunch Sessions: AI ticket prices! Save up to $300 on your pass, and get an extra 50% off when you bring a plus-one. Learn from the brightest minds in AI and forge meaningful connections — while locking in the biggest savings before the doors open on June 5 at UC Berkeley’s Zellerbach Hall.

Whether you’re pioneering, building, funding, or just diving into the world of AI, this immersive experience is for you. We welcome everyone, from industry pros and founders to academics and AI die-hards. Join us for a full day of cutting-edge programming, where you’ll learn from and engage with leading minds in the field, including:

- Anthropic co-founder Jared Kaplan, taking attendees through a behind-the-scenes look at hybrid reasoning models.
- A peek behind the scenes of how OpenAI works with startups, with Hao Sang of its GTM team.
- Tanka founder and CEO Kisson Lin, on why your next founder will be an AI.
- The two winners of our Audience Choice competition: Cohere’s Yann Stoneman on using generative AI in privacy-driven companies, and the Global Innovation Forum’s Hua Wang on moving swiftly while maintaining compliance.

And that’s just the beginning — many more interactive and insightful sessions fill the day. Check out the full TC Sessions: AI agenda here.

Some of the many AI pioneers leading main stage and breakout sessions at TechCrunch Sessions: AI, taking place on June 5 at UC Berkeley’s Zellerbach Hall. Image Credits: TechCrunch

And that’s on top of our consistent focus on networking opportunities, with attendees getting the chance to set up 1:1 sessions, meet with peers and potential partners, and start the relationships that lead to big deals down the road. When the event itself is done, you’ll also get the chance to keep the momentum rolling with side events hosted by partners across Berkeley, California, including Tanka, Toyota, and MyHomie. Just because our event is done doesn’t mean your perks for getting a ticket should be, after all.

Remember, these massive ticket savings are only available for a limited time, so act now and head here to reserve your spot at one of the most exciting events in the AI space this year!
-
Google is gearing up to hold its largest developer conference of the year, Google I/O 2025, on May 20 and May 21. CEO Sundar Pichai, DeepMind CEO and co-founder Demis Hassabis, and executives in charge of Search, Cloud, and Android will announce major updates to Google’s product offerings.

We’re expecting Google I/O 2025 to focus on AI a lot (seriously, a lot). Google’s family of AI models, Gemini, will likely take center stage as Pichai and Hassabis continue their push for dominance over OpenAI, xAI, Anthropic, and other well-funded competitors. We’re also expecting updates around DeepMind’s projects, such as its multimodal AI system Project Astra. At last year’s Google I/O, the company teased a pair of smart glasses powered by Project Astra.

A key theme at Google I/O 2025 will be how AI is infiltrating all of Google’s products. For example, Google’s head of Search, Elizabeth Reid, is giving a talk on the AI stage about how generative AI is “revolutionizing search.” Other Google executives are scheduled to speak at the event about how scientists are using AI, how AI agents will use apps for users, and how Waymo’s autonomous vehicles use AI to navigate the physical world.

The Google I/O 2025 keynote kicks off at 10 a.m. PT on May 20 from the Shoreline Amphitheater in Mountain View, and you can watch a livestream of Google’s keynote here. At 1:30 p.m. PT, the developer keynote begins; as the name suggests, it will focus less on the consumer side of things.

Later in the day, at 3:30 p.m. PT, Hassabis will speak with Alex Kantrowitz, host of the Big Technology Podcast, about the future of DeepMind’s AI and its impact on the world. At the same time, there will be livestreamed talks about the latest updates to Android, Chrome, and Google Cloud. Some of the talks at Google I/O won’t be livestreamed, but lucky for you, TechCrunch will be on the ground covering the biggest announcements from the event.

This year, Android announcements will come a week ahead of Google I/O 2025 during an event called “The Android Show.” It kicks off on Tuesday at 10 a.m. PT, and you can watch the livestream here.
-
Apple has sent invites for its developer-focused event, the Worldwide Developers Conference (WWDC), which will be held from June 9 to June 13 under an “On the horizon” tagline. The company will likely announce new versions of its operating systems, namely iOS 19, iPadOS 19, watchOS 12, macOS 16, and visionOS 3. The developer conference is both in person and online, with Apple introducing online group sessions for the first time, according to the announcement page.

The company is also expected to announce a slew of Apple Intelligence-related features. Reports suggest that these will include intelligent battery management and a virtual health assistant.

In iOS 19, iPadOS 19, and macOS 16, Apple will reportedly introduce a ton of visual changes, with a design language in line with the visionOS user interface. A report from Bloomberg noted that the company will revamp icons, menus, apps, windows, and system buttons while simplifying navigation and control.

The story is developing…
-
We’re thrilled to announce that Danielle Perszyk, the leader of Amazon AGI SF Lab’s human-computer interaction efforts, will be taking the main stage at TechCrunch Sessions: AI on June 5 at UC Berkeley’s Zellerbach Hall. The AGI SF Lab is at the forefront of developing foundational capabilities for AI agents that can operate in the real world — and Danielle is driving that vision forward. Don’t miss this rare opportunity to hear directly from one of the key minds shaping the future of practical, agentic AI.

For a limited time, save over $300 on your ticket — and get 50% off a second for your plus-one. Don’t wait — this offer won’t last long. Register now before it expires!

Join us at TC Sessions: AI for a full day of groundbreaking programming, interactive sessions, live demos, and high-impact networking with the brightest minds in AI. Bring a friend or colleague — because big ideas are better shared, and their ticket is half off.

About Danielle Perszyk’s session

With new, more powerful AI models launching seemingly every week, the pace of innovation is both thrilling and overwhelming. In this dynamic conversation, Logan Kilpatrick, Senior Product Manager at Google DeepMind; Jae Lee, CEO of Twelve Labs; and Danielle Perszyk, PhD, Cognitive Scientist and Member of Technical Staff at Amazon’s AGI SF Lab, will share firsthand insights from the front lines of AI development. Together, they’ll explore how startups can not only build on top of today’s leading foundation models but also adapt and scale in a rapidly evolving landscape. From choosing the right models to anticipating future shifts, this session will equip founders, builders, and product leaders with strategies to stay ahead, stay relevant, and seize the opportunities of the AI era.

To get all the latest on this session and check out everyone else joining us, visit the TC Sessions: AI agenda page.

Get to know Perszyk

Danielle Perszyk is a cognitive scientist and member of the technical staff at Amazon’s AGI SF Lab, where she leads the Human-Computer Interaction (HCI) team. Her work focuses on developing foundational capabilities for practical AI agents that can operate effectively in both digital and physical environments.

Danielle earned her PhD from Northwestern University, where she studied the evolution of language and the development of social cognition. Prior to joining Amazon, she contributed to AI initiatives at Adept and Google, bringing a unique interdisciplinary perspective to the challenges of building intelligent, interactive systems.

Be “Cognitive” of big ticket savings at AI

At TC Sessions: AI, Danielle Perszyk will bring her deep expertise in cognitive science and human-computer interaction to a must-see panel on how founders can harness foundational models to scale AI in powerful, practical ways. Don’t miss this chance to learn from one of the minds shaping the future of agentic AI, while pocketing up to $600 in ticket savings. Secure your spot now!