July 2, 2025
Mid • On-site
$129,300 - $223,600/yr
Austin, TX or Cupertino, CA
In-house-designed SoCs (Systems on Chips) are the brains and brawn behind AWS’s Machine Learning Acceleration servers, TRN and INF. Our team builds functional models of these ML accelerator chips to speed up SoC verification and system software development. We’re looking for a Hardware Functional Modeling Engineer to join the team and deliver new C++ models, infrastructure, and tooling for our customers.
As part of the ML acceleration modeling team, you will:
- Develop and own SoC functional models end-to-end, including model architecture, integration with other model or infrastructure components, testing, and debug (an illustrative sketch of this kind of model follows this list)
- Work closely with architecture, RTL design, design verification, emulation, and software teams to build, debug, and deploy your models
- Innovate on the tooling you provide to customers, making it easier for them to use our SoC models
- Drive model and modeling infrastructure performance improvements to help our models scale
- Develop software that can be maintained, improved upon, documented, tested, and reused
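For a concrete sense of what a behavioral SoC functional model can look like, here is a minimal C++ sketch of a hypothetical memory-mapped block; the class name, register layout, and semantics are invented for illustration and are not taken from AWS code.

```cpp
// Minimal, purely illustrative sketch of a behavioral (not cycle-accurate)
// functional model of a memory-mapped block. The block, register layout, and
// side effects below are hypothetical.
#include <cstdint>
#include <unordered_map>

class DmaEngineModel {
public:
    // Hypothetical software-visible register offsets.
    enum Reg : uint32_t {
        REG_CTRL   = 0x00,
        REG_STATUS = 0x04,
        REG_SRC    = 0x08,
        REG_DST    = 0x0C,
        REG_LEN    = 0x10,
    };

    // Registers read back the last value written; unwritten registers read 0.
    uint32_t Read(uint32_t offset) const {
        auto it = regs_.find(offset);
        return it == regs_.end() ? 0u : it->second;
    }

    // Writes may trigger behavioral side effects. Here, setting the start bit
    // in CTRL "completes" the transfer immediately and raises DONE in STATUS.
    void Write(uint32_t offset, uint32_t value) {
        regs_[offset] = value;
        if (offset == REG_CTRL && (value & 0x1u)) {
            regs_[REG_STATUS] |= 0x1u;  // DONE bit
        }
    }

private:
    std::unordered_map<uint32_t, uint32_t> regs_;
};
```

Modeling side effects at the register-write level, rather than cycle by cycle, is what lets software teams develop against the model long before RTL or silicon is available.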
Annapurna Labs, our organization within AWS, designs and deploys some of the largest custom silicon in the world, with many subsystems that must all be modeled and tested to a high standard. Our SoC model is a critical piece of software used both in our SoC development process and by our partner software teams. You’ll collaborate with many internal customers whose own effectiveness depends on your models, and you’ll work closely with these teams to push the boundaries of how we use modeling to build successful products.
You will thrive in this role if you:
- Are an expert in functional modeling for SoCs, ASICs, TPUs, GPUs, or CPUs
- Are comfortable modeling in C++ with OOP principles
- Enjoy learning new technologies, building software at scale, moving fast, and working closely with colleagues as part of a small team within a large organization
- Want to jump into an ML-aligned role, or go deeper into the details of ML at the hardware and system level
Although we are building ML SoC models, no machine learning background is needed for this role. You’ll be able to ramp up on ML as part of the role, and any ML knowledge that’s required can be learned on the job.
This role can be based in either Cupertino, CA or Austin, TX. The broader team is split between the two sites, with a slight preference for CA due to colocation with more customer teams.
We're changing an industry. We're searching for individuals who are ready for this challenge, who want to reach beyond what is possible today. Come join us and build the future of machine learning!
Basic qualifications:
- 3+ years of non-internship professional experience writing functional models of hardware, SoCs, ASICs, etc.
- Experience programming with C++ using OOP
- Familiarity with SoC, CPU, GPU, and/or ASIC architecture and micro-architecture
- 3+ years of experience with the full software development life cycle, including coding standards, code reviews, source control management, build processes, and testing
Preferred qualifications:
- Experience developing for QEMU
- Experience with PyTest and GoogleTest (see the test sketch after this list)
- Familiarity with modern C++ (11, 14, etc.)
- Experience with machine learning accelerator hardware and/or software
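As a purely illustrative companion to the GoogleTest item above, here is how a unit test for the hypothetical model sketched earlier might look; the header name and register fields are assumptions carried over from that sketch.

```cpp
// Illustrative GoogleTest case exercising the hypothetical DmaEngineModel
// sketched earlier; assumes that class lives in the (invented) header below.
#include <gtest/gtest.h>

#include "dma_engine_model.h"  // hypothetical header for the earlier sketch

TEST(DmaEngineModelTest, StartBitRaisesDoneStatus) {
    DmaEngineModel dma;
    dma.Write(DmaEngineModel::REG_SRC, 0x1000u);
    dma.Write(DmaEngineModel::REG_DST, 0x2000u);
    dma.Write(DmaEngineModel::REG_LEN, 64u);

    // Nothing has started yet, so STATUS should still read back as 0.
    EXPECT_EQ(dma.Read(DmaEngineModel::REG_STATUS), 0u);

    // Setting the start bit should raise DONE in this behavioral model.
    dma.Write(DmaEngineModel::REG_CTRL, 0x1u);
    EXPECT_EQ(dma.Read(DmaEngineModel::REG_STATUS) & 0x1u, 1u);
}
```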
Amazon is an equal opportunity employer and does not discriminate on the basis of protected veteran status, disability, or other legally protected status.
Los Angeles County applicants: Job duties for this position include: work safely and cooperatively with other employees, supervisors, and staff; adhere to standards of excellence despite stressful conditions; communicate effectively and respectfully with employees, supervisors, and staff to ensure exceptional customer service; and follow all federal, state, and local laws and Company policies. Criminal history may have a direct, adverse, and negative relationship with some of the material job duties of this position. These include the duties and responsibilities listed above, as well as the abilities to adhere to company policies, exercise sound judgment, effectively manage stress and work safely and respectfully with others, exhibit trustworthiness and professionalism, and safeguard business operations and the Company’s reputation. Pursuant to the Los Angeles County Fair Chance Ordinance, we will consider for employment qualified applicants with arrest and conviction records.
Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.
Our compensation reflects the cost of labor across several US geographic markets. The base pay for this position ranges from $129,300/year in our lowest geographic market up to $223,600/year in our highest geographic market. Pay is based on a number of factors including market location and may vary depending on job-related knowledge, skills, and experience. Amazon is a total compensation company. Dependent on the position offered, equity, sign-on payments, and other forms of compensation may be provided as part of a total compensation package, in addition to a full range of medical, financial, and/or other benefits. For more information, please visit https://www.aboutamazon.com/workplace/employee-benefits. This position will remain posted until filled. Applicants should apply via our internal or external career site.