IIT Delhi AI Safety Careers Fellowship

This fellowship is run and managed by the organisers listed below.

This page is hosted by BlueDot Impact, who are advising on and supporting the programme.

“We think that reducing risks from advanced AI systems is one of the most important and interesting problems of our time and that undergraduate and graduate students can quickly start doing valuable research for mitigating these risks.”

AISCF IIT Delhi supports students interested in pursuing careers in AI safety.

Apply here


Contact Us:

Email us at aiscf.iitd@gmail.com.

Or reach out to us via call or WhatsApp:
Aryan: +91-9773639596 | Shivam: +91-7503030314 | Tanish: +91-9907378184

What is AI Safety?

AI Safety is a research field that addresses the questions:

  • How do we ensure the development of advanced artificial intelligence benefits humanity?

  • And how do we avoid catastrophic failures while building advanced AI systems?

What is AISCF?

As machine learning (ML) systems become increasingly capable of performing most human tasks and become an integral part of the global economy, there is a pressing need to verify that such systems are safe and aligned with human values.

Introducing our stimulating eight-week AI Safety Careers Fellowship! Dive into AI safety with us as we explore the curriculum outlined in the AI Safety Fundamentals Programme.

The fellowship will help you understand and reason about current and future AI systems, their potential failure modes, and state-of-the-art methods for ensuring safety.

We’ll conduct a weekly reading group facilitated by researchers working in AI Safety. The reading group aims to inculcate a safety-first mindset among students, challenging you to think critically about AI’s future. In addition, we have lined up an engaging online guest lecture series featuring renowned researchers from top institutions such as UC Berkeley and Stanford.

The fellowship is free for all and will begin with weekly discussion groups starting on August 16. Applications are due by August 10, 2023. The fellowship is open to all IIT Delhi students: undergraduate, master’s, and PhD.

Apply here

Activities

The fellowship will comprise the following activities:

  • Weekly reading groups conducted by experienced facilitators in AI Safety.

  • Live presentations and Q&A sessions with AI Safety researchers.

  • Guest lectures by experienced researchers, and hackathons.

  • Personal career advice and mentorship.

  • Socials, dinners, and hangout sessions.

Syllabus

We will be following a curated version of BlueDot Impact’s AI Safety Fundamentals course:

  • Week 1: Artificial General Intelligence

  • Week 2: Reward misspecification and instrumental convergence

  • Week 3: Goal misgeneralisation

  • Week 4: Task decomposition for scalable oversight

  • Week 5: Adversarial techniques for scalable oversight

  • Week 6: Interpretability

  • Week 7: Agent foundations / AI governance, careers in alignment

Details & Logistics

We’ll be conducting a weekly reading group (with breaks during examinations) with the following details:

  • Weekly commitment: You will receive printed and soft copies of all the readings a week in advance. Completing them will take 3-4 hours.

  • Discussion schedule: The discussion group will start with Week 0 on 16th August.

  • Venue: To be communicated via email & WhatsApp groups.

Career Pathways post AISCF

Graduates of the AISCF fellowship will have access to the following exciting career opportunities:

Starter Resources

Feel free to refer to the following resources to familiarize yourself with AI Safety:

Frequently Asked Questions

  • What are the prerequisites for the fellowship?

    • All students are encouraged to apply, regardless of their Machine Learning (ML) experience.

  • Are there any costs or fees to attend the fellowship if I get selected?

    • No, the fellowship is free to attend, and all readings will be provided by us.

  • Will there be multiple groups for the discussion sessions, and if yes, how will they be divided?

    • Yes, the discussion sessions will be conducted separately for three groups:

      • Group A – Students with significant ML knowledge and research experience. Ideal candidates are BTech/Master’s/PhD students with research experience in ML.

      • Group B – Students with introductory ML knowledge.

      • Group C – Students with little/no exposure to AI/ML.

  • How many students will be selected for the fellowship?

    • We expect to take 8-10 students per group, so around 24-30 students in total.

  • Where will the fellowship be conducted?

    • The fellowship will be held in person on the IIT Delhi campus (the exact venue will be announced later).

  • How much do I need to know about AI Safety to apply?

    • No prior knowledge of AI Safety is expected. This fellowship is designed as an introductory course on AI Safety, welcoming applicants from diverse backgrounds.

  • Is the fellowship open to both undergraduate and graduate students?

    • Yes, the fellowship is open to all IIT Delhi students: undergraduate, master’s, and PhD.

  • What kind of learning can I expect from the AI Safety Fellowship?

    The AI Safety Fellowship offers a structured curriculum covering a wide range of topics related to AI Safety. You will learn about:

    • Foundations of AI Safety and its importance in shaping the future of AI technology.

    • Ethical considerations and societal impacts of AI deployment.

    • Technical knowledge of AI systems, their failure modes, and state-of-the-art algorithms to develop them safely.

    • Practical projects to apply your knowledge and skills in real-world scenarios.

How to apply?

Fill in this application form.

That’s it! We’ll reach out to you soon!

Contact: aiscf.iitd@gmail.com

If you have any questions, feel free to reach out to us via call or WhatsApp: Aryan: +91-9773639596, Shivam: +91-7503030314, or Tanish: +91-9907378184.
