Software Engineer, TPU Inference, AI/ML
Company: Google
Location: Kirkland
Posted on: April 3, 2026
Job Description:
In accordance with Washington state law, we are highlighting our comprehensive benefits package, which is available to all eligible US-based employees. Benefits for this role include:
- Insurance: health, dental, vision, life, and disability
- Retirement benefits: 401(k) with company match
- Paid time off: 20 days of vacation per year, accruing at a rate of 6.15 hours per pay period for the first five years of employment
- Sick time: 40 hours/year (statutory, where applicable); 5 days/event (discretionary)
- Maternity leave (short-term disability and baby bonding): 28-30 weeks
- Baby bonding leave: 18 weeks
- Holidays: 13 paid days per year

Note: By applying to this position you will have an opportunity to share your preferred working location from the following: Kirkland, WA, USA; Sunnyvale, CA, USA.

Minimum qualifications:
- Bachelor's degree or equivalent practical experience.
- 2 years of experience with coding in Python, or 1 year of experience with an advanced degree.
- 2 years of experience with inference.
- 2 years of experience with large language models.
- 2 years of experience with machine learning algorithms.

Preferred qualifications:
- Master's degree or PhD in Computer Science, or a related technical field.
- 2 years of experience with Kubernetes.
- 2 years of experience in GPU programming.
- 2 years of experience with compilers.
- 2 years of experience in cloud.

About the job

Google's software engineers
develop the next-generation technologies that change how billions
of users connect, explore, and interact with information and one
another. Our products need to handle information at massive scale,
and extend well beyond web search. We're looking for engineers who
bring fresh ideas from all areas, including information retrieval,
distributed computing, large-scale system design, networking and
data storage, security, artificial intelligence, natural language
processing, UI design and mobile; the list goes on and is growing
every day. As a software engineer, you will work on a specific
project critical to Google’s needs with opportunities to switch
teams and projects as you and our fast-paced business grow and
evolve. We need our engineers to be versatile, display leadership
qualities and be enthusiastic to take on new problems across the
full-stack as we continue to push technology forward. As a Software Engineer on the TPU Inference at Scale team in Core ML, you will work on everything from Large Language Model (LLM) and non-LLM model bring-up to performance tuning and optimization on Google Cloud TPUs.
The ML, Systems, & Cloud AI (MSCA) organization at Google designs,
implements, and manages the hardware, software, machine learning,
and systems infrastructure for all Google services (Search,
YouTube, etc.) and Google Cloud. Our end users are Googlers, Cloud
customers and the billions of people who use Google services around
the world. We prioritize security, efficiency, and reliability
across everything we do - from developing our latest TPUs to
running a global network, while driving towards shaping the future
of hyperscale computing. Our global impact spans software and
hardware, including Google Cloud’s Vertex AI, the leading AI
platform for bringing Gemini models to enterprise customers. The US base salary range for this full-time position is $147,000-$211,000 + bonus + equity + benefits. Our salary ranges are determined by role,
level, and location. Within the range, individual pay is determined
by work location and additional factors, including job-related
skills, experience, and relevant education or training. Your
recruiter can share more about the specific salary range for your
preferred location during the hiring process. Please note that the
compensation details listed in US role postings reflect the base
salary only, and do not include bonus, equity, or benefits. Learn
more about benefits at Google.

Responsibilities
- Research and implement LLM, recommendation, and diffusion model architectures, ensuring their efficient and accurate execution on generations of TPUs.
- Guide significant performance improvements by leveraging TPU-specific hardware features, such as SparseCore, and conducting detailed analyses to quantify performance differentials between optimized and baseline implementations on GPUs.
- Collaborate closely with key customers to deeply understand their existing recommendation model deployments and facilitate their seamless transition and optimization for execution on TPUs.
- Implement models in JAX/PyTorch, verifying model correctness and ensuring performance across heterogeneous hardware. We are working on OSS projects such as vLLM, MaxDiffusion, and MaxText.