Cornell AI Alignment Club

We conduct research and outreach to advance the development of safe AI.


Managing risks from advanced AI is one of the most important challenges of our time.

CAIA is a community of student technical and policy researchers at Cornell working to reduce these risks and improve the trajectory of AI development.

We run an introduction to AI alignment fellowship covering topics like neural network interpretability, learning from human feedback, goal misgeneralization, eliciting latent knowledge, and evaluating dangerous capabilities in models. Interested students can learn more on the programs page.

We also run an intermediate technical reading group, support undergraduate and graduate students in original research, and host workshops and socials.

Managing risks from advanced artificial intelligence is an urgent global problem. If you want to get involved, start by joining our mailing list.

Upcoming Events

Join talks, reading sessions, and workshops from the CAIA community.

Subscribe to get events directly on your calendar.


News

Most recent updates from Cornell AI Alignment.

Research

EigenBench accepted to ICLR 2026 as an Oral paper!

EigenBench was accepted to ICLR 2026 and selected for an Oral presentation.

Announcement

RAISE Act signed by the NY Governor

Governor Kathy Hochul signed the RAISE Act into law, establishing nation-leading AI safety requirements for frontier model developers. CAIA helped canvass support for the bill.

Event

CAIA ice skating social

Thank you, everyone, for a great semester working on AI safety. We wrapped up with a fun CAIA ice skating social.

Recent Work

Selected papers and projects by CAIA community members.

Our members have worked with:

Google DeepMind, MATS, Pivotal, METR, Center for AI Safety, LISA, CBAI, Berkeley SPAR, KAIROS, Amazon AGI, RAND, SAIF