Programs

Fellowships

Apply Now!

Spring 2026 applications are now open! Apply by January 25, 2026 (end of day, anywhere on earth).

Our fellowships are 8-week, in-person reading groups that meet for 1.5 hours per week. We offer two ways to engage:

  • A self-contained track, with no outside readings or capstone required.
  • A track with ~1 hour of required pre-reading each week plus a capstone project (e.g., a policy brief or memo, an ML paper implementation, or a research project proposal).

The fellowships are open to undergraduates, graduate students, and postdocs. Catered lunch/dinner from local restaurants (not pizza!) is provided. Please direct questions to contact@tassa.dev.

Our technical fellowship covers the foundations of frontier AI technology. Topics include model interpretability, goal misgeneralization, reinforcement learning, oversight, red-teaming, and more.

Sample curriculum:

  • Week 1: Introduction to modern machine learning and AI
  • Week 2: AI risks and alignment challenges
  • Week 3: Scalable oversight and adversarial training
  • Week 4: Interpretability and model internals
  • Week 5: Red-teaming and constitutional AI
  • Week 6: Technical governance
  • Week 7: Social impacts and civic AI
  • Week 8: Field review and career opportunities

Note: We update and iterate our curricula each semester to reflect the latest developments in AI safety research.

Research Support & Funding

Interested in doing AI Safety research at Tufts? Submit a Research Interest Form! Applications are reviewed on a rolling basis.

We offer capstone and research support (including compute funding) to members and those in our fellowships researching AI Safety issues. Please direct questions to contact@tassa.dev.

In addition, several Tufts labs conduct AI Safety research, including but not limited to: