September 17, 2025

Research Software Engineer, Multimodal AI

Senior • On-site

$166,000 - $244,000/yr

San Jose, CA

Minimum qualifications:

  • Bachelor’s degree or equivalent practical experience.
  • 5 years of experience with software development in one or more programming languages.
  • 3 years of experience testing, maintaining, or launching software products, and 1 year of experience with software design and architecture.
  • 2 years of experience with software development in ML domains (e.g., deep learning, perception, or computer vision).
  • Experience with C++, Python, generative AI, and machine learning.

Preferred qualifications:

  • Master's degree or PhD in Computer Science or related technical field.
  • 1 year of experience in a technical leadership role.
  • Experience with relevant ML frameworks such as JAX, TensorFlow, or PyTorch.
  • Experience with multimodal learning, large language models, or AI agents.
  • Experience with prompt engineering, few-shot learning, post-training techniques, and evaluations.
  • Familiarity with large-scale model training and deployment.

About the job

Google's software engineers develop the next-generation technologies that change how billions of users connect, explore, and interact with information and one another. Our products need to handle information at massive scale, and extend well beyond web search. We're looking for engineers who bring fresh ideas from all areas, including information retrieval, distributed computing, large-scale system design, networking and data storage, security, artificial intelligence, natural language processing, UI design, and mobile; the list goes on and is growing every day. As a software engineer, you will work on a specific project critical to Google’s needs, with opportunities to switch teams and projects as you and our fast-paced business grow and evolve. We need our engineers to be versatile, display leadership qualities, and be enthusiastic to take on new problems across the full stack as we continue to push technology forward.

The Google Extended Reality (XR) mission is to give people superhuman abilities by making information instantly and intuitively accessible. The team is working to advance artificial intelligence for the next generation of wearable computers that are immersive, spatially aware, and contextual.

In this role, you will conduct development on Large Language Models (LLMs) and agents, particularly in the multimodal domain (vision, audio), focusing on developing more capable Artificial Intelligence (AI) for XR devices (glasses, goggles). This role offers an opportunity to contribute to AI research and see your work define the next generation of computing products.

Android is Google’s open-source mobile operating system powering more than 3 billion devices worldwide. Android is about bringing computing to everyone in the world. We believe computing is a superpower for good, enabling access to information, economic opportunity, productivity, connectivity between friends and family, and more. We think everyone in the world should have access to the best computing has to offer. We provide the platform for original equipment manufacturers (OEMs) and developers to build compelling computing devices (smartphones, tablets, TVs, wearables, etc.) that run the best apps and services for everyone in the world.

The US base salary range for this full-time position is $166,000-$244,000 + bonus + equity + benefits. Our salary ranges are determined by role, level, and location. Within the range, individual pay is determined by work location and additional factors, including job-related skills, experience, and relevant education or training. Your recruiter can share more about the specific salary range for your preferred location during the hiring process.

Please note that the compensation details listed in US role postings reflect the base salary only, and do not include bonus, equity, or benefits. Learn more about benefits at Google.

Responsibilities

  • Develop algorithms and models to enhance AI agents for XR devices, using techniques such as prompting, few-shot learning, and post-training to improve model performance in real-world XR scenarios.
  • Write production-quality C++/Python code and tests.
  • Create a comprehensive evaluation plan, from dataset development to Key Performance Indicator (KPI) definitions and measurements.
  • Identify, implement, and ship the latest modeling innovations, including orchestration, multimodality, tool integrations, memory, hybrid agent architectures, and personalization.
  • Prove out concepts through rapid prototyping, iterative development, and team testing, in close partnership with the XR product teams.