
Research Engineer: Interpretability, Theory, & Analysis

The theory/analysis team at Enigma is seeking Research Engineers specializing in mechanistic interpretability to develop and deploy scalable methods & pipelines for interpreting the mechanisms, representations, and circuits within our brain foundation models. The role is a unique opportunity to fuse mechanistic interpretability with neuroscience and uncover principles of natural intelligence.

The Enigma Project (https://enigmaproject.ai) is a Stanford-based non-profit research organization, launched in August 2024 with $30M in funding. Our core mission is to leverage deep learning to crack the neural code. We own the full neuroAI pipeline: from neurotechnology development, to neural data collection, to modeling, theory, & analysis.

Role & Responsibilities:

  • Design and implement scalable pipelines for automated interpretability analyses of brain foundation models
  • Develop infrastructure for running massive-scale in silico experiments on digital twins
  • Build tools for automated circuit discovery and geometric/topological analysis of neural manifolds
  • Create efficient, reproducible analysis workflows for processing high-dimensional neural data
  • Engineer systems for automated hypothesis generation and testing
  • Implement and scale feature visualization and manifold learning techniques (an illustrative sketch follows this list)
  • Maintain distributed computing infrastructure for parallel interpretability analyses
  • Develop interactive visualization tools for exploring neural representations
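
To give a concrete flavor of the work, below is a minimal, purely illustrative sketch of feature visualization via activation maximization in PyTorch. The `model`, `layer`, and `unit` names are hypothetical stand-ins for a generic pretrained network and one of its submodules; nothing here describes Enigma's actual models, data, or internal APIs.

    # Purely illustrative: a minimal activation-maximization loop in PyTorch.
    # `model` is any pretrained nn.Module and `layer` one of its submodules;
    # both are hypothetical stand-ins, not Enigma-specific code.
    import torch

    def maximize_unit_activation(model, layer, unit, input_shape, steps=200, lr=0.05):
        """Gradient-ascend a synthetic input so that `unit` in `layer` fires strongly."""
        model.eval()
        x = torch.randn(1, *input_shape, requires_grad=True)
        optimizer = torch.optim.Adam([x], lr=lr)

        cache = {}

        def hook(_module, _inputs, output):
            cache["activation"] = output

        handle = layer.register_forward_hook(hook)
        try:
            for _ in range(steps):
                optimizer.zero_grad()
                model(x)
                # Minimizing the negative activation maximizes the unit's response.
                loss = -cache["activation"][0, unit].mean()
                loss.backward()
                optimizer.step()
        finally:
            handle.remove()

        return x.detach()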

Key Qualifications:

  • Master's degree in Computer Science or related field with 2+ years of relevant industry experience, OR Bachelor's degree with 4+ years of relevant industry experience
  • Strong understanding of mechanistic interpretability techniques and research literature
  • Expertise in implementing and scaling ML analysis pipelines
  • Experience with distributed computing and high-performance computing clusters
  • Proficiency in Python and deep learning frameworks (e.g., PyTorch)
  • Strong software engineering practices including version control, testing, and documentation
  • Familiarity with visualization tools and techniques for high-dimensional data

Preferred Qualifications:

  • Experience with feature visualization techniques (e.g., activation maximization, attribution methods)
  • Knowledge of geometric methods for analyzing neural population activity
  • Familiarity with circuit discovery techniques in neural networks
  • Experience with large-scale data processing frameworks
  • Background in neuroscience or computational neuroscience
  • Contributions to open-source ML or interpretability tools
  • Experience with ML experiment tracking platforms (e.g., W&B, MLflow)

What We Offer:

  • Opportunity to work on fundamental questions in AI interpretability and neuroscience
  • Collaborative environment bridging academic research and engineering excellence
  • Access to state-of-the-art computing resources and unique neural datasets
  • Competitive salary and benefits
  • Career development and mentoring
  • Location at Stanford University with access to its vibrant research community

Application: Please send your CV and a one-page statement of interest to recruiting@enigmaproject.ai.