Keysight is at the forefront of technology innovation, delivering breakthroughs and trusted insights in electronic design, simulation, prototyping, test, manufacturing, and optimization. Our ~15,000 employees create world-class solutions in communications, 5G, automotive, energy, quantum, aerospace, defense, and semiconductor markets for customers in over 100 countries.
Our award-winning culture embraces a bold vision of where technology can take us and a passion for tackling challenging problems with industry-first solutions. We believe that when people feel a sense of belonging, they can be more creative and innovative, and thrive at all points in their careers.
Responsibilities
About the Role
We are seeking a highly skilled and experienced Product Security Engineer (Engineer 3) to design, develop, and scale an AI-driven security automation engine that will integrate directly into Keysight’s Software Development Life Cycle (SDLC). This role will lead the development of an intelligent system capable of:
- Understanding the full SDLC and shifting automated remediation tasks left, so they occur earlier in the development process
- Consuming and interpreting scan data from our vulnerability assessment platform
- Automatically generating exploit and testing scripts using AI
- Validating vulnerability existence with high confidence and reproducibility earlier in the SDLC
- Feeding verified results back into the UI with reliability, auditability, and traceability
- Providing AI-assisted code patching, secure coding recommendations, and CI/CD automation to accelerate mitigation workflows earlier in the SDLC
About the Team:
You will collaborate with product owners, security researchers, and full-stack engineers to build advanced autonomous security capabilities that enhance Keysight’s SDLC.
Key Responsibilities:
SDLC Integration
- Seamlessly integrate security automation into Keysight’s SDLC
- Enable ingestion, normalization, and correlation across Black Duck SCA, Nessus Vulnerability Management, Burp Suite DAST, SAST tools, and product telemetry
AI-Driven Vulnerability Verification & Automation
- Design and implement AI models that interpret vulnerability scan data and autonomously generate exploit scripts
- Build automated validation pipelines using sandboxed and orchestrated environments
- Develop classification and scoring logic with strong confidence metrics
- Implement guardrails, safety classifiers, and auditing for safe LLM operations
False Positive Reduction & Evidence Quality
- Build multi‑agent reasoning workflows to reduce false positives
- Produce high-fidelity Vulnerability Verification Evidence Packages (VVEP) with logs, traces, and reproducible exploit outcomes
AI Assisted Code Patching & Remediation
- Deliver AI-generated secure code recommendations and automated patching workflows
- Integrate remediation automation into CI/CD pipelines (Azure DevOps, GitHub Actions, Jenkins)
Penetration Test Augmentation
- Support AI-enhanced penetration testing, API fuzzing, and dynamic security validation
- Work with security researchers to improve exploit reliability and detection precision
Leadership & Cross-Functional Collaboration
- Partner with product teams and engineering leadership to ensure validated results drive actionable remediation
- Influence secure development practices and drive adoption of security automation across product teams
Qualifications
Must Haves:
- Bachelor’s degree in Computer Science, Computer Engineering, Cybersecurity, AI/ML, or a related field.
- 3-6 years of R&D experience, with a strong preference for AI and security‑focused work.
- Proficiency in Python, security automation frameworks, and CI/CD integration.
- Experience with secure software development practices and vulnerability management.
- Hands-on experience with SAST, DAST, SCA, container scanning, and other security tools.
- Familiarity with Git, CI/CD pipelines, build automation, and DevSecOps principles.
- Familiarity with LLMs, AI-assisted security workflows, or Agentic AI concepts.
- Ability to operate independently and contribute to technical decision-making.
- Excellent communication and cross‑team collaboration skills.
We Value:
- Master’s degree in Cybersecurity, Computer Science, or a related technical field.
- Hands‑on AI application development experience with exposure to AI/LLM‑based verification or autonomous testing frameworks.
- Practical experience with AI/ML‑based security automation or Agentic AI/LLM integration (OpenAI, Azure OpenAI, HuggingFace).
- Exposure to cloud security or container/Kubernetes security for software development pipelines.
- Experience with penetration‑testing methodologies (OWASP, PTES, MITRE ATT&CK).
- Understanding of vulnerability types (CVE/CWE), exploit vectors, and modern attack techniques.
- Experience with multi‑tool security data environments (SAST + DAST + SCA + VM).
- Experience producing SBOM, VEX, VVEP, or SARIF artifacts.
- Working knowledge of Docker, Kubernetes, and modern CI/CD platforms.
- Background in fuzzing, exploit development, or dynamic testing automation.
- Experience integrating tooling into CI/CD systems.
- A passion for continuous improvement and learning.
- Excellent problem-solving skills and attention to detail.
- Proven ability to work cross-functionally and influence without authority.
Careers Privacy Statement

Keysight is an Equal Opportunity Employer.