Minimum qualifications:
- Bachelor's degree or equivalent practical experience.
- 8 years of experience in software development.
- 5 years of experience building and developing large-scale infrastructure, distributed systems or networks, or experience with compute technologies, storage, or hardware architecture.
- 5 years of technical expertise building in AI/ML infrastructure.
Preferred qualifications:
- Master’s degree or PhD in Engineering, Computer Science, or a related technical field.
- 8 years of experience with data structures/algorithms.
- 3 years of experience in a technical leadership role leading project teams and setting technical direction.
- 3 years of experience working in a complex, matrixed organization involving cross-functional, or cross-business projects.
- 3 years of experience with AI/ML inference stack.
About the job
Google's software engineers develop the next-generation technologies that change how billions of users connect, explore, and interact with information and one another. Our products need to handle information at massive scale, and extend well beyond web search. We're looking for engineers who bring fresh ideas from all areas, including information retrieval, distributed computing, large-scale system design, networking and data storage, security, artificial intelligence, natural language processing, UI design and mobile; the list goes on and is growing every day. As a software engineer, you will work on a specific project critical to Google’s needs with opportunities to switch teams and projects as you and our fast-paced business grow and evolve. We need our engineers to be versatile, display leadership qualities and be enthusiastic to take on new problems across the full-stack as we continue to push technology forward.
We believe Generative AI (GenAI) inference will revolutionize our industry and we aim to make GKE the go-to platform for deploying these workloads. Large GenAI models present scaling and usability challenges with accelerators compared to traditional CPU workloads. As the creators of Kubernetes and with Google's extensive AI experience, we believe in GKE's ability to innovate in this space.
We are looking for ambitious, execution-oriented Software Engineers to help us make GKE the leading cost-effective, simplified, and fastest platform for running GenAI inference workloads.
As a Software Engineer, you will join the Inference Workload team, which is responsible for GenAI inference features, reliability, and operations, as well as simplifying and improving GenAI inference workload onboarding.
The US base salary range for this full-time position is $197,000-$291,000 + bonus + equity + benefits. Our salary ranges are determined by role, level, and location. Within the range, individual pay is determined by work location and additional factors, including job-related skills, experience, and relevant education or training. Your recruiter can share more about the specific salary range for your preferred location during the hiring process.
Please note that the compensation details listed in US role postings reflect the base salary only, and do not include bonus, equity, or benefits. Learn more about benefits at Google.
Responsibilities
- Define and pioneer initiatives to deliver the most efficient and cost-effective AI inference workloads for GKE, translating customer needs and in-depth analysis into multi-quarter technical roadmaps.
- Author, drive consensus, and finalize detailed system designs for large-scale, cross-team Inference projects, and guide/review the designs of other senior engineers for architectural robustness and scalability.
- Uncover, scope, and prioritize significant areas of technical debt across the Inference or core GKE systems. Develop strategies for paying down this debt and delegate their execution.
- Serve as the primary reviewer and technical authority for critical components, establishing and enforcing best practices that set the standard for the entire team and partner teams.
- Design and oversee the development of advanced test, monitoring, and scalable benchmarking infrastructure to anticipate future needs and prevent bottlenecks.