Instructor-led: 4 days
Course Description:
This immersive 4-day course explores the ethical, legal, and societal challenges surrounding the use of artificial intelligence. As AI systems become more integrated into everyday decision-making—from healthcare to hiring to criminal justice—understanding how to design and deploy AI responsibly is more critical than ever.
Participants will gain foundational knowledge of AI ethics frameworks, bias mitigation strategies, regulatory policies, and the importance of transparency and accountability. Through hands-on labs, real-world case studies, and interactive group exercises, attendees will assess risks, test bias in models, and design ethical AI pipelines.
Prerequisites:
- General understanding of AI and machine learning concepts (no coding required, but helpful)
- Basic familiarity with data analytics or computer science
- No prior ethics or philosophy background required
Key Takeaways:
- Understand foundational ethical principles applied to AI (e.g., fairness, accountability, transparency, privacy)
- Identify ethical risks in AI models including bias, discrimination, and data misuse
- Explore global regulatory and standards frameworks (GDPR, the EU AI Act, IEEE standards, NIST AI RMF)
- Gain hands-on experience detecting bias and testing transparency in AI tools
- Learn how to implement responsible AI practices in business, government, and tech development
- Build an ethical AI governance framework and risk assessment checklist
- Participate in structured debates and stakeholder role-play scenarios
Module 1: Foundations of AI Ethics and Responsible Innovation
Topics:
- Introduction to AI Ethics: Why It Matters
- The Ethical Lifecycle of AI: Design to Deployment
- Fairness, Accountability, Transparency, and Ethics (FATE)
- Stakeholders and Consequences: Who Gets Impacted?
Hands-On Labs:
- Lab 1: Case Study Analysis – Identify ethical failures in well-known AI deployments (COMPAS, Amazon hiring tool, etc.)
- Lab 2: Create a stakeholder map and impact matrix for a facial recognition system
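The stakeholder map in Lab 2 can be captured in code. Below is a minimal sketch of an impact matrix for a facial recognition system; the stakeholders and 1–5 benefit/harm ratings are purely illustrative, not part of the course materials.

```python
# Illustrative stakeholder impact matrix for a facial recognition system.
# Each stakeholder maps to (estimated benefit 1-5, estimated harm/risk 1-5).
# The stakeholders and ratings below are examples, not official course data.
impact = {
    "Law enforcement":      (4, 2),
    "General public":       (2, 4),
    "Misidentified people": (1, 5),
    "System vendor":        (5, 1),
}

def highest_risk(matrix):
    """Sort stakeholders by (benefit - harm), worst-off first."""
    return sorted(matrix, key=lambda s: matrix[s][0] - matrix[s][1])

print(highest_risk(impact))
# The most negatively affected group surfaces first in the list.
```

Ranking stakeholders this way makes the lab's core point concrete: the parties who bear the most harm are often not the parties who capture the benefit.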
Module 2: AI Bias, Fairness, and Risk Mitigation
Topics:
- Sources of Bias in Data and Models
- Measuring and Mitigating Bias in AI Outputs
- Inclusive Design and Diverse Data Representation
- Introduction to Explainable AI (XAI)
Hands-On Labs:
- Lab 3: Use a prebuilt ML model to detect and analyze gender/racial bias (with tools like AIF360 or Fairlearn)
- Lab 4: Visualize and interpret model predictions using SHAP or LIME (Explainability lab)
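To preview the kind of measurement done in Lab 3, here is a minimal hand-rolled sketch of the demographic parity difference—the selection-rate gap that toolkits such as Fairlearn report for binary classifiers. The toy predictions and group labels are invented for illustration; in the lab they would come from an actual model.

```python
# Minimal sketch: demographic parity difference for a binary classifier.
# A value of 0 means all groups receive positive predictions at the same
# rate; larger values indicate a bigger gap. Toy data, for illustration.

def selection_rate(preds):
    """Fraction of positive (1) predictions."""
    return sum(preds) / len(preds)

def demographic_parity_difference(preds, groups):
    """Max minus min selection rate across sensitive groups."""
    by_group = {}
    for p, g in zip(preds, groups):
        by_group.setdefault(g, []).append(p)
    rates = [selection_rate(v) for v in by_group.values()]
    return max(rates) - min(rates)

# Toy example: 8 predictions, applicants tagged with group A or B.
preds  = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_difference(preds, groups))  # 0.75 - 0.25 = 0.5
```

Fairlearn and AIF360 compute this and many related metrics (equalized odds, disparate impact ratio) directly from model outputs; the point of writing it by hand first is to see that the metric is nothing more than a comparison of per-group rates.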
Module 3: Regulation, Governance, and Transparency
Topics:
- Global Legal Landscape: GDPR, EU AI Act, NIST AI Risk Management Framework
- Ethical Auditing and Impact Assessments (ALTAI, Model Cards, Datasheets)
- Building Ethical AI Governance and Oversight Structures
- Public Policy, Corporate Responsibility, and AI for Good
Hands-On Labs:
- Lab 5: Conduct a mock AI Risk Assessment using the NIST RMF or ALTAI tool
- Lab 6: Draft a Model Card and Datasheet for a sample AI system
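For Lab 6, a Model Card can start as a simple structured document before being formatted for publication. The sketch below represents one as a plain Python dictionary, loosely following the commonly cited Model Card sections; the field names and the sample loan-approval system are illustrative assumptions, not a standard schema.

```python
# Hedged sketch: a Model Card as a plain dictionary, loosely following
# the commonly used Model Card sections. The sample system, field names,
# and contents are illustrative only, not an official template.
model_card = {
    "model_details": {
        "name": "loan-approval-classifier (example)",
        "version": "0.1",
        "type": "binary classifier",
    },
    "intended_use": "Pre-screening applications; not for final decisions.",
    "training_data": "Synthetic applicant records (illustrative).",
    "evaluation_data": "Held-out synthetic split.",
    "metrics": [
        "accuracy",
        "selection rate per group",
        "false positive rate per group",
    ],
    "ethical_considerations": [
        "Possible proxy variables for protected attributes",
        "Disparate impact across demographic groups",
    ],
    "caveats": "Not validated on real-world data.",
}

def render_card(card):
    """Flatten the card into a readable plain-text summary."""
    lines = []
    for section, body in card.items():
        lines.append(section.replace("_", " ").title())
        if isinstance(body, dict):
            lines += [f"  {k}: {v}" for k, v in body.items()]
        elif isinstance(body, list):
            lines += [f"  - {item}" for item in body]
        else:
            lines.append(f"  {body}")
    return "\n".join(lines)

print(render_card(model_card))
```

Keeping the card as structured data rather than free text makes it easy to validate that required sections are present—an audit step that pairs naturally with the ALTAI and NIST RMF exercises in Lab 5.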
Module 4: Applied Ethics, Design Challenges, and Capstone
Topics:
- Red Teaming AI Systems: Anticipating Abuse and Misuse
- Trade-offs in AI Ethics: Utility vs. Fairness, Privacy vs. Performance
- Group Ethics Simulation: Role-Play Debate (e.g., Should Police Use Predictive AI?)
- Final Capstone Project
Hands-On Labs:
- Lab 7: Simulate a “red team” ethical review of a new AI tool and report key risks