Company: Mistral AI
Location: London
Company Size: 200–500 employees
Salary: Competitive salary dependent on experience

About the job
Mistral AI is seeking an AI Scientist, Safety to evaluate, enhance, and develop safety mechanisms for large language models (LLMs). This full-time hybrid role, based in Paris or London, focuses on identifying and addressing potential risks, biases, and misuse of LLMs to ensure AI systems remain ethical, fair, and beneficial to society.

The position involves designing adversarial attacks, evaluating fairness, developing monitoring and moderation tools, building robust real-time safety defenses, analyzing user reports, and contributing to responsible AI policies. You’ll also collaborate on safety fine-tuning, implement safety filters, and stay up to date on AI safety research.

Candidates should have a degree in Computer Science, AI, or a related field (advanced degrees preferred), be proficient in Python or a similar language, and have hands-on experience with AI frameworks such as PyTorch, TensorFlow, or JAX. Experience with generative AI, transformer-based models, MLOps, and model fine-tuning is ideal.

Mistral offers a competitive salary and equity, generous benefits, and visa sponsorship in both France and the UK. The company values low-ego, collaborative team players who are passionate about advancing safe and responsible AI.