Keysight is at the forefront of technology innovation, delivering breakthroughs and trusted insights in electronic design, simulation, prototyping, test, manufacturing, and optimization. Our ~15,000 employees create world-class solutions in communications, 5G, automotive, energy, quantum, aerospace, defense, and semiconductor markets for customers in over 100 countries.
Our award-winning culture embraces a bold vision of where technology can take us and a passion for tackling challenging problems with industry-first solutions. We believe that when people feel a sense of belonging, they are more creative and innovative and thrive at every point in their careers.
About Keysight AI Labs
Keysight accelerates innovation to connect and secure the world. Our solutions span wireless communications, semiconductors, aerospace & defense, automotive, and beyond. We combine measurement science, simulation, and advanced AI to help engineers design, simulate, and validate the world’s most advanced systems.
About the AI Team
The Keysight AI Team pioneers scientific and secure AI — including physics-informed learning, test-and-measurement-augmented intelligence, and trustworthy ML for mission-critical industries. We collaborate across research, product, and engineering teams to deliver cutting-edge AI capabilities baked directly into Keysight’s products.
About the Role
We are seeking a Senior ML Security & Robustness Engineer to strengthen the adversarial robustness, privacy, and trustworthiness of AI models deployed across edge, embedded, hybrid, and cloud environments. You will shape frameworks, defenses, and best practices that secure classical, deep learning, and foundation models against real-world attacks. This includes model protection (obfuscation, watermarking), secure ML lifecycle management, and evaluation under adversarial threat models.
Responsibilities
This is a hands-on, high-impact role blending applied research and production engineering:
- Design, test, and deploy adversarial defenses for ML models across varied deployment architectures (edge, hybrid, cloud)
- Own robustness evaluation pipelines, red-teaming, and model penetration testing
- Secure ML artifacts via fingerprinting, obfuscation, and model watermarking
- Implement privacy-preserving learning techniques (e.g., federated learning, DP-SGD)
- Contribute to threat modeling and secure ML lifecycle governance
- Develop and maintain tooling for continuous robustness testing and secure MLOps workflows
- Collaborate with research and product teams to transition prototype defenses into production
- Publish and communicate findings internally and externally when appropriate
Qualifications
Required Qualifications
- Master’s or PhD in Computer Science, Cybersecurity, Applied Mathematics, Electrical Engineering, or a related field
- Strong foundations in deep learning, optimization, statistics, and reliability evaluation
- Expertise in adversarial ML methods and evaluation frameworks
- Hands-on proficiency with PyTorch (preferred) or TensorFlow
- Experience deploying hardened models to embedded or otherwise resource-constrained environments
- Experience with secure ML lifecycle concepts and threat modeling
- Experience with at least one ML security tool (e.g., ART, CleverHans, Foolbox)
- Experience with model IP protection: watermarking, fingerprinting, and secure model storage
- Strong communication and cross-functional collaboration skills in English
Desired Qualifications
- Experience with federated learning frameworks (e.g., Flower, OpenFL)
- Familiarity with cryptographic principles and secure computation techniques
- MLOps tooling experience (MLflow, W&B, CI/CD)
- Publications in top AI and/or security venues (NeurIPS, ICML, AAAI, IEEE S&P, USENIX, ACM CCS, etc.)
- Contributions to open-source ML security projects
Careers Privacy Statement

Keysight is an Equal Opportunity Employer.