2026 PhD Software Engineer Intern (Engineering Security, United States)
We’re looking for PhD candidates to intern on the Engineering Security team during winter 2026 (12 weeks). You will be embedded in our engineering team and work closely with other specialists, software developers, and product managers. As a PhD intern, you will contribute to research and development of AI agents for security and privacy. You’ll explore methods to build, evaluate, and scale agents that can reason about complex systems, detect vulnerabilities, propose and apply patches, and verify security outcomes.
About the Team
Join Uber's Engineering Security (EngSec) organization at our growing center of excellence for AI/ML initiatives. Our mission is to transform security and privacy through AI innovation, focusing on three critical areas: using AI to strengthen security capabilities, building secure-by-design AI systems, and protecting against emerging AI-based threats. We're developing the next generation of security and privacy platforms that both leverage and defend against cutting-edge AI to protect Uber's ecosystem.
What You'll Do
- Research and prototype AI-driven agents that can discover and remediate security vulnerabilities and verify the resulting fixes
- Design and evaluate automated patching and verification workflows using LLMs and agent-based architectures
- Investigate reasoning, planning, and tool-use capabilities of LLM-based agents in security contexts
- Collaborate with researchers and engineers to integrate solutions into production systems
- Document findings and contribute to technical reports, publications, or open-source tools
Basic Qualifications
- Current PhD student in Computer Science, Artificial Intelligence, Security, or a related field
- At least one semester or quarter of your program remaining after the internship
- Strong understanding of the LLM ecosystem and agentic AI frameworks (e.g., tool use, multi-step reasoning, orchestration)
- Proficiency in Python and experience with ML frameworks (e.g., PyTorch, TensorFlow)
- Demonstrated ability to conduct independent research
Preferred Qualifications
- Publications in top ML/AI or security venues (e.g., NeurIPS, ICML, ICLR, USENIX Security, CCS, IEEE S&P)
- Familiarity with automated reasoning, program synthesis, or verification methods
- Experience applying LLMs to security and software engineering tasks
- Strong problem-solving and communication skills