2026 PhD Software Engineer Intern (AI Security), United States
We’re looking for PhD candidates to intern on the AI Security team during winter 2026 (12 weeks). You will be embedded in our engineering team and work closely with security specialists, software developers, and product managers. As a PhD intern, you will contribute to the research and development of AI agents for security and privacy. You’ll explore methods to build, evaluate, and scale agents that can reason about complex systems, detect vulnerabilities, propose and apply patches, and verify security outcomes.
About the Team
Uber’s AI Security team secures how AI agents and tools interact with our systems and data. We build the foundations for agentic identity (who the agent is, what it’s acting on behalf of, and how identity/attestation propagates across tools) and risk-based access (context-aware, real-time authorization that adapts to sensitivity, intent, and behavior). We partner across EngSec, Edge Gateway, Developer Platform, and ML Platform teams to make AI adoption safe, observable, and compliant at Uber scale.
What You'll Do
- Research and prototype identity and attestation for AI agents (e.g., A2A AuthN/AuthZ, context propagation, chain-of-custody verification) and evaluate correctness, robustness, and usability (see the first sketch after this list)
- Design risk-based access policies and scoring that adapt to actor, tool, data sensitivity, and runtime signals, and validate them via offline/online experiments (see the second sketch after this list)
- Build evaluation harnesses for agent workflows (tool use, multi-step planning, self-verification) to measure security outcomes (prevent, detect, contain) and guard changes against regressions (see the third sketch after this list)
- Ship with engineers: integrate prototypes into production gateways/SDKs, add observability (auditing, explanations of allow/deny decisions), and stress-test for scale and latency
- Communicate findings through docs, internal talks, and (where appropriate) publications or open-source contributions
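
To make a few of these areas concrete, the sketches below illustrate the flavor of prototype an intern might build. All names, keys, weights, and thresholds in them are hypothetical and invented for illustration; nothing here describes Uber's production systems.

First, chain-of-custody verification for agent context: each hop appends an HMAC-signed link over the previous signature plus its own metadata, so a receiver can verify the whole chain before acting.

```python
import hashlib
import hmac
import json

# Hypothetical shared key for the demo; a real system would use
# per-service keys or asymmetric attestation, never a hard-coded secret.
KEY = b"demo-key"

def sign_link(prev_sig: str, hop: dict) -> str:
    """Sign this hop's metadata together with the previous signature,
    so links cannot be reordered or dropped without detection."""
    payload = prev_sig.encode() + json.dumps(hop, sort_keys=True).encode()
    return hmac.new(KEY, payload, hashlib.sha256).hexdigest()

def append_hop(chain: list, hop: dict) -> list:
    """Extend the chain with a new signed link."""
    prev_sig = chain[-1]["sig"] if chain else ""
    return chain + [{"hop": hop, "sig": sign_link(prev_sig, hop)}]

def verify_chain(chain: list) -> bool:
    """Recompute every signature; any tampered hop breaks the chain."""
    prev_sig = ""
    for link in chain:
        if not hmac.compare_digest(link["sig"], sign_link(prev_sig, link["hop"])):
            return False
        prev_sig = link["sig"]
    return True

if __name__ == "__main__":
    chain = append_hop([], {"actor": "agent:support", "on_behalf_of": "user:123"})
    chain = append_hop(chain, {"tool": "db.query", "scope": "read"})
    print("valid:", verify_chain(chain))           # True
    chain[0]["hop"]["on_behalf_of"] = "user:999"   # tamper with a hop
    print("valid:", verify_chain(chain))           # False
```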
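
Second, a minimal risk-based access decision point: it combines data sensitivity with a runtime anomaly signal into one score and returns an auditable explanation with every allow/deny. The signal names, weights, and threshold are assumptions, not tuned values.

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    actor: str             # agent identity, e.g. "agent:trip-support"
    tool: str              # tool being invoked, e.g. "db.query"
    data_sensitivity: int  # 0 (public) .. 3 (restricted)
    anomaly_score: float   # runtime behavioral signal in [0, 1]

# Hypothetical weights and threshold -- illustrative only.
SENSITIVITY_WEIGHT = 0.25
ANOMALY_WEIGHT = 0.6
RISK_THRESHOLD = 0.7

def risk_score(req: AccessRequest) -> float:
    """Combine static sensitivity with runtime signals into one score."""
    return min(1.0, SENSITIVITY_WEIGHT * req.data_sensitivity
                    + ANOMALY_WEIGHT * req.anomaly_score)

def decide(req: AccessRequest) -> tuple[bool, str]:
    """Return (allowed, explanation) so every decision is auditable."""
    score = risk_score(req)
    if score >= RISK_THRESHOLD:
        return False, f"deny: risk {score:.2f} >= {RISK_THRESHOLD} for {req.actor} -> {req.tool}"
    return True, f"allow: risk {score:.2f} < {RISK_THRESHOLD} for {req.actor} -> {req.tool}"

if __name__ == "__main__":
    req = AccessRequest("agent:trip-support", "db.query",
                        data_sensitivity=3, anomaly_score=0.4)
    allowed, why = decide(req)
    print(allowed, why)
```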
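
Third, a small regression harness for agent workflows: it replays recorded cases through a policy under test and fails loudly when a previously handled case regresses. The cases and the toy policy are invented for illustration.

```python
from typing import Callable

# Each case pairs an agent-workflow scenario with its expected outcome.
# These scenarios and expectations are made up for the example.
CASES = [
    {"name": "benign_lookup",      "risk": 0.1, "expect_allow": True},
    {"name": "exfil_attempt",      "risk": 0.9, "expect_allow": False},
    {"name": "escalation_attempt", "risk": 0.8, "expect_allow": False},
]

def run_harness(decide: Callable[[float], bool]) -> bool:
    """Replay every case; report any regression against expectations."""
    failures = []
    for case in CASES:
        allowed = decide(case["risk"])
        if allowed != case["expect_allow"]:
            failures.append(case["name"])
    for name in failures:
        print(f"REGRESSION: {name}")
    print(f"{len(CASES) - len(failures)}/{len(CASES)} cases passed")
    return not failures

if __name__ == "__main__":
    # A toy policy under test: deny anything with risk >= 0.7.
    ok = run_harness(lambda risk: risk < 0.7)
    raise SystemExit(0 if ok else 1)
```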
Basic Qualifications
- Current PhD student in Computer Science, AI/ML, Security, or related field
- Candidates must have at least one semester/quarter of their education left following the internship
- Strong grounding in LLMs/agent frameworks (tool use, planning, orchestration) and empirical evaluation
- Proficiency in Python and modern ML tooling (PyTorch or TensorFlow)
- Demonstrated ability to conduct independent research and translate ideas into working prototypes
Preferred Qualifications
- Publications in top ML/AI or Security venues (e.g., NeurIPS, ICML, ICLR, USENIX Security, CCS, IEEE S&P)
- Experience with identity/authz standards (OAuth2/OIDC), policy engines (e.g., OPA/Rego), or program analysis/verification
- Applied security for LLM/agent systems (prompt/tool security, redaction, auditability, explainability)
- Strong problem-solving and communication; comfort navigating ambiguous, cross-functional spaces