
Are you ready to help align AI?
This June (tentatively June 2-27), we're running the Research Engineering Camp for Alignment Practitioners (RECAP), based on the global ARENA program: a hands-on, fast-paced accelerator that has launched alumni into OpenAI, Anthropic, and top safety research fellowships like the ML Alignment & Theory Scholars (MATS) Program.
👯 80% of the program will be spent pair programming, so you learn faster by working with a partner.
🤖 The curriculum will be intensely technical; it's based on the ARENA curriculum.
🧑‍🔬 The program aims to upskill people exploring their fit for a research or engineering career in AI safety.
We're building AI systems we don't yet understand — let alone control. As models grow more capable, the gap between what they can do and what we can guarantee keeps widening. The field of AI safety exists because right now, no one knows how to reliably align powerful AI with human intent. And we may not get many tries.
That's why we're running RECAP — a 4-week program that's designed to equip you with the engineering skills to build safe and reliable AI systems.
Program Curriculum
Fundamentals
Mathematics, programming, and basic neural networks
Mechanistic Interpretability
Highlights:
- Build a transformer from scratch (see the sketch after this list for a taste)
- Perform interpretability on a transformer trained to classify bracket strings as balanced or unbalanced
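To give a flavour of the exercises, here is a minimal sketch of our own (in PyTorch, not the official ARENA material) of a single causal attention head, the kind of component you will implement from scratch and then probe during the interpretability module.

```python
import torch
import torch.nn as nn

class SelfAttention(nn.Module):
    """A single causal attention head, the core building block of a transformer."""

    def __init__(self, d_model: int, d_head: int):
        super().__init__()
        self.W_Q = nn.Linear(d_model, d_head, bias=False)  # query projection
        self.W_K = nn.Linear(d_model, d_head, bias=False)  # key projection
        self.W_V = nn.Linear(d_model, d_head, bias=False)  # value projection
        self.W_O = nn.Linear(d_head, d_model, bias=False)  # output projection
        self.d_head = d_head

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x has shape (batch, seq_len, d_model)
        q, k, v = self.W_Q(x), self.W_K(x), self.W_V(x)
        scores = q @ k.transpose(-2, -1) / self.d_head ** 0.5  # (batch, seq, seq)
        # Causal mask: each position may only attend to itself and earlier positions
        mask = torch.triu(torch.ones_like(scores, dtype=torch.bool), diagonal=1)
        scores = scores.masked_fill(mask, float("-inf"))
        pattern = scores.softmax(dim=-1)  # attention pattern you'd inspect when doing interp
        return self.W_O(pattern @ v)

# Quick check on random inputs
attn = SelfAttention(d_model=64, d_head=16)
out = attn(torch.randn(2, 10, 64))
print(out.shape)  # torch.Size([2, 10, 64])
```

In the actual exercises you build the full model (embeddings, multi-head attention, MLPs, layer norm) and then study the attention patterns it learns, for example on the balanced-brackets task above.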
LLM Evaluations
Learn how to rigorously evaluate large language models, including writing model-written evals and evaluating LLM agents
Capstone Project
Apply your skills to a real AI safety problem
Our Team
Jonathan Ng
Director
- Organized AI Safety fellowships at NUS and NTU
- Graduated from the pilot version of ARENA
- MATS Spring '23 alumnus
- Authored the MACHIAVELLI benchmark and cybercapabilities.org
Clement Neo
Technical Lead / Head Teacher
- Lab Advisor at Apart Research, where he mentors people pivoting into AI safety and mechanistic interpretability research
- Research Community Coordinator at Singapore AI Safety Hub
- Teaching assistant at the Cambridge Machine Learning Alignment Bootcamp 2023, whose alumni now work at AI Safety Camp, Apollo Research, the UK AI Safety Institute, the US Congress, and 80,000 Hours
Logistics
The bootcamp will be held in person at the Singapore AI Safety Hub (SASH). Learn more about SASH.
Lunch and snacks will be provided.
What Do Participants Get?
Participants will not only strengthen their machine learning engineering foundations, but also develop broader software engineering skills — like organizing codebases effectively and adopting clean, scalable coding practices. These skills are highly relevant for technical roles in AI safety, whether at dedicated orgs like Apollo, FAR, and METR, or on alignment teams at frontier labs like Anthropic.
By the end of the bootcamp, each participant will have built a personal GitHub portfolio showcasing the projects they've completed. This can be a strong asset when applying to technical AI safety positions or internships.
We also expect that working alongside other alignment-focused teams and researchers will naturally lead to valuable peer exchange, idea-sharing, and potential collaboration.
Contact Us
Get in touch: info at recap.sg