GPU Kernel Engineer, Onboard Software Platform

Wayve

Mountain View, CA, USA
Posted on Apr 11, 2024

Who are we?

Our team was the first in the world to drive autonomous vehicles on public roads using end-to-end deep learning. With our world-class, multinational technical team, we're building things differently.

We don't think it's scalable to tell an algorithm how to drive through hand-coded rules and expensive HD maps. Instead, we believe that learning from experience and data will make our algorithms more intelligent: capable of easily adapting to new environments. Our aim is to be the future of self-driving cars: the first to deploy in 100 cities across the world, bringing autonomy to everyone, everywhere.

Impact expected

As a GPU Kernel Engineer within our dynamic team, you'll be instrumental in deploying Wayve's autonomous vehicle (AV) AI model across consumer vehicles. Your role is crucial in developing model compilers and crafting high-performance kernels for efficient inference on embedded GPUs. Through close collaboration with machine learning engineers, you'll pinpoint opportunities to improve inference performance by making optimal use of the hardware capabilities of each deployment platform. Your deep understanding of GPU architecture, from memory management to the intricacies of GPU cores, will be pivotal. Using tools like TensorRT and model compilers, you will push the boundaries of what's possible in inference performance in embedded environments.

Challenges you will own

  • Optimization Leadership: Lead efforts to discover the optimal model compilation strategies that harmonize compute intensity, caching, and memory bandwidth to maximize hardware utilization on targeted platforms.
  • Precision Transformation: Innovate in transforming large AI models to low-precision implementations while ensuring minimal accuracy loss.
  • GPU Architecture Mastery: Become the go-to authority on GPU architecture for targeted hardware platforms, such as NVIDIA Orin or Qualcomm Snapdragon.
  • Model Compilation Process Creation: Design and implement the process for converting AI models from PyTorch to native, platform-specific programs, improving model efficiency and performance (a minimal sketch of this kind of flow follows this list).
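
For candidates less familiar with this kind of deployment work, the sketch below illustrates the general shape of a PyTorch-to-embedded-GPU compilation flow: export a network to ONNX, then build a reduced-precision TensorRT engine for the target device. The toy model, file names, opset version, and FP16 flag are illustrative assumptions only, not a description of Wayve's actual toolchain.

    # Illustrative sketch only: a generic PyTorch -> ONNX -> TensorRT flow.
    # The model, paths, and precision settings below are hypothetical.
    import torch
    import tensorrt as trt

    # Hypothetical stand-in for a real perception network.
    model = torch.nn.Sequential(
        torch.nn.Conv2d(3, 16, kernel_size=3, padding=1),
        torch.nn.ReLU(),
    ).eval()

    # Step 1: export the PyTorch graph to ONNX.
    dummy_input = torch.randn(1, 3, 224, 224)
    torch.onnx.export(model, dummy_input, "model.onnx", opset_version=17)

    # Step 2: parse the ONNX graph into a TensorRT network definition.
    logger = trt.Logger(trt.Logger.WARNING)
    builder = trt.Builder(logger)
    network = builder.create_network(
        1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
    )
    parser = trt.OnnxParser(network, logger)
    with open("model.onnx", "rb") as f:
        if not parser.parse(f.read()):
            raise RuntimeError(parser.get_error(0))

    # Step 3: request reduced precision and build a platform-specific engine.
    # INT8 would additionally require a calibration dataset.
    config = builder.create_builder_config()
    config.set_flag(trt.BuilderFlag.FP16)
    engine = builder.build_serialized_network(network, config)
    with open("model.engine", "wb") as f:
        f.write(engine)

Because TensorRT tunes kernels to the specific GPU it builds for, this compilation step is typically run on (or profiled against) the target embedded device, such as an NVIDIA Orin, rather than on a developer workstation.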

What we are looking for in our candidate

  • GPU Development Expertise: A minimum of 5 years of direct experience in developing kernels for GPUs, showcasing an ability to solve complex computational challenges.
  • Advanced C++ Skills: Proficiency in C++ programming, with a demonstrated history of developing efficient, high-quality code.
  • GPU Programming Tools Proficiency: Extensive experience with GPU programming using tools like CUDA and TensorRT, specifically within embedded environments.
  • Deep GPU Design Knowledge: A thorough understanding of GPU design and operations, including familiarity with AI accelerators.
  • Quantization and Compiler Experience: Expertise in model quantization, particularly in implementing low-precision formats, and experience developing and using model compilers.
  • Educational Qualifications: A Master's degree in a relevant field, supplemented by research experience.

Our offer

  • Competitive compensation and significant upside potential as a well-funded Series B startup.
  • Immersion in a team of world-class researchers, engineers and entrepreneurs.
  • Opportunity to shape the future of autonomous driving, a real-world breakthrough technology.
  • Relocation assistance to London or Mountain View, CA, with visa sponsorship.
  • Flexible working hours - we trust you to do your job well, at times that suit you and your team.

Wayve is built by people from all walks of life. We believe that it is our differences that make us stronger, and our unique perspectives and backgrounds that allow us to build something different. We are proud to be an equal opportunities workplace, where we don’t just embrace diversity but nurture it - so that we all thrive and grow.