Introduction to AI Alignment Fellowship
CAIA runs an 8-week introductory fellowship on AI safety, covering both technical and policy topics. Topics include neural network interpretability, learning from human feedback, US AI policy, and potential catastrophic risks from advanced AI systems. The program is open to both undergraduate and graduate students. Students with machine learning experience are especially encouraged to apply, but no prior experience is required.
The fellowship meets weekly in small groups, with dinner provided and no additional work outside of meetings. Our curriculum is adapted from BlueDot Impact's AI Safety Fundamentals course. See the Spring 2025 syllabus here.
Apply here by February 9, 2025, 11:59pm EST.
Technical Paper Reading Group
CAIA runs an open technical ML reading group that meets weekly in small groups. Sessions are led by experienced TAs and cover recent significant papers in AI/ML safety. Dinner is provided, and there is no additional work outside of meetings.
Join the CAIA Slack
Student Research
CAIA supports original student research in AI safety. If you are interested in beginning technical or policy research, reach out to cornellaialignment@gmail.com to be connected with resources and a faculty or upperclassman mentor.