GPU Programming Expert (San Francisco)

Mistral AI

Palo Alto, CA, USA
Posted on Mar 18, 2024
Mistral AI is hiring an expert in serving and training large language models at high speed on GPUs. The role is based in San Francisco.
The role will involve:
- Writing low-level code to take full advantage of high-end GPUs (H100) and max out their capacity
- Rethinking various parts of the generative model architecture to make them more suitable for efficient inference
- Integrating low-level efficient code into a high-level MLOps framework
The successful candidate will have:
- High technical competence in writing custom CUDA kernels and pushing GPUs to their limits (a minimal illustrative sketch follows this list)
- Deep expertise in the distributed computation infrastructure of current-generation GPU clusters
- An overall understanding of the field of generative AI, and knowledge of or interest in fine-tuning and using language models for applications
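To make the CUDA-kernel requirement concrete, here is a purely illustrative sketch (not part of the posting and not Mistral code): a grid-stride, float4-vectorized elementwise add of the kind used to keep an H100-class GPU's memory bandwidth saturated. All names, sizes, and launch parameters are hypothetical.

```cuda
// Illustrative sketch only: vectorized elementwise add using float4
// loads/stores and a grid-stride loop to saturate memory bandwidth.
#include <cuda_runtime.h>
#include <cstdio>

__global__ void vec_add_f4(const float4* __restrict__ a,
                           const float4* __restrict__ b,
                           float4* __restrict__ out,
                           size_t n4) {
    // Grid-stride loop: one launch covers any problem size.
    for (size_t i = blockIdx.x * blockDim.x + threadIdx.x;
         i < n4;
         i += (size_t)gridDim.x * blockDim.x) {
        float4 x = a[i], y = b[i];
        out[i] = make_float4(x.x + y.x, x.y + y.y, x.z + y.z, x.w + y.w);
    }
}

int main() {
    const size_t n  = 1 << 24;   // 16M floats (divisible by 4)
    const size_t n4 = n / 4;     // number of float4 elements
    float4 *a, *b, *out;
    cudaMalloc(&a,   n4 * sizeof(float4));
    cudaMalloc(&b,   n4 * sizeof(float4));
    cudaMalloc(&out, n4 * sizeof(float4));
    // A real program would initialize a and b (e.g., via cudaMemcpy).

    const int block = 256;   // threads per block
    const int grid  = 1024;  // enough blocks to fill the SMs
    vec_add_f4<<<grid, block>>>(a, b, out, n4);
    cudaDeviceSynchronize();
    printf("done\n");

    cudaFree(a); cudaFree(b); cudaFree(out);
    return 0;
}
```

Compiled with, for example, `nvcc -arch=sm_90 vec_add.cu` to target Hopper-generation (H100) hardware; real production kernels would go much further (asynchronous copies, tensor cores, kernel fusion), which is precisely the expertise the role asks for.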
About Mistral
- At Mistral AI, our mission is to make AI ubiquitous and open. We are passionate about bridging the gap between technology and businesses of all sizes.
- We are a leading innovator in the field of open-source large language models. Our advanced LLM solutions can be seamlessly deployed on any cloud, allowing for optimized integration and robust performance. Developers are using our API via la Plateforme to build incredible AI-first applications powered by our models that can understand and generate natural language text and code.
- We are multilingual at our core. We released le Chat as a demonstrator of our models.
- We are a tight-knit, nimble team dedicated to bringing our cutting-edge AI technology to the world. Our teams are distributed across France, the UK, and the USA.
- We are creative, low-ego, and team-spirited, and we have been passionate about AI for years. We hire people who thrive in competitive environments because they find them more fun to work in. We hire passionate women and men from all over the world.