AI will change the world as we know it.
We conduct research and outreach to advance the development of safe AI.
Join our mailing list →
Managing risks from advanced artificial intelligence is one of the most important problems of our time.1 We are a community of student technical and policy researchers at Cornell, working to reduce these risks and steer the trajectory of AI development for the better.
CAIA runs a semester-long Introduction to AI Alignment fellowship, covering topics like neural network interpretability,2 learning from human feedback,3 goal misgeneralization in reinforcement learning agents,4 eliciting latent knowledge,5 and evaluating dangerous capabilities in models.6
We also run an intermediate-level technical reading group, where we discuss relevant contemporary ML papers in AI safety. Finally, we support undergraduate and graduate students in conducting original research relevant to AI safety, and host workshops and socials.
Interested in helping shape the future of AI safety? Express interest in our programs by joining our mailing list.
Below is a list of some of the organizations our members have worked with. Not all organizations listed sponsor or are affiliated with CAIA.