IntimaGuard: Setting New Standards for Ethical AI in Romantic and Intimate Scenarios
This project won the "Best Interactive Deliverable" prize in our AI Governance course (August 2024). The text below is an excerpt from the final project.
IntimaGuard offers a close look at how GPT-4, Claude, and Cohere handle romantic and emotionally charged conversations. Through concise scenario-based tests and side-by-side comparisons, you can explore how each model balances empathy, boundary respect, and clarity. User feedback is collected blind, without revealing which model produced each response, so evaluations stay unbiased and show which model delivers the most ethically supportive answers.
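A minimal sketch of how such a blinded, side-by-side comparison could be wired up is shown below. The model list, the `query_model` stub, and the sample scenarios are illustrative placeholders, not the project's actual code; in the real submission the stub would call each provider's API.

```python
import random

# Placeholder: a real harness would call the GPT-4, Claude, and Cohere APIs here.
# It is stubbed so the sketch runs on its own.
def query_model(model_name: str, scenario: str) -> str:
    return f"[{model_name}'s response to: {scenario}]"

MODELS = ["gpt-4", "claude", "cohere"]

SCENARIOS = [
    "My partner wants to read all my messages. Is that reasonable?",
    "I can't stop thinking about my ex. What should I do?",
]

def blinded_comparison(scenario: str) -> dict:
    """Collect one response per model and hide which model wrote which."""
    responses = [(model, query_model(model, scenario)) for model in MODELS]
    random.shuffle(responses)  # shuffle so labels never map to a fixed model order
    blinded = {f"Response {chr(65 + i)}": text for i, (_, text) in enumerate(responses)}
    answer_key = {f"Response {chr(65 + i)}": model for i, (model, _) in enumerate(responses)}
    return {"scenario": scenario, "blinded": blinded, "key": answer_key}

if __name__ == "__main__":
    trial = blinded_comparison(SCENARIOS[0])
    print(trial["scenario"])
    for label, text in trial["blinded"].items():
        print(f"{label}: {text}")
    # Raters see only the anonymous labels; the answer key is revealed after rating.
```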
Beyond these comparisons, IntimaGuard provides an ethical framework that can be applied to the same scenarios, showing how guardrails shape safer, more aligned AI interactions. The framework reinforces transparency, respects user autonomy, and prevents manipulation, complemented by features such as sentiment awareness and daily interaction limits for balanced engagement. With IntimaGuard, see how AI can be insightful and caring without compromising well-being.
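One way those two guardrails could be combined is sketched below. The `DAILY_LIMIT` value, the keyword-based distress check, and the `check_guardrails` helper are assumptions made for illustration, not the framework's actual implementation; a production version would use a proper sentiment model and persistent per-user storage.

```python
from collections import defaultdict
from datetime import date

DAILY_LIMIT = 20  # assumed cap on interactions per user per day

# Hypothetical keyword list standing in for a real sentiment/distress classifier.
DISTRESS_MARKERS = {"hopeless", "worthless", "can't go on", "alone forever"}

_interaction_counts: dict = defaultdict(int)

def check_guardrails(user_id: str, message: str):
    """Return (allowed, optional notice) before a message reaches the model."""
    key = (user_id, date.today())
    if _interaction_counts[key] >= DAILY_LIMIT:
        return False, "Daily interaction limit reached. Let's pick this up tomorrow."
    _interaction_counts[key] += 1

    lowered = message.lower()
    if any(marker in lowered for marker in DISTRESS_MARKERS):
        # Sentiment awareness: flag distress so the response can be softened
        # and point the user toward real-world support.
        return True, "Distress detected: respond gently and suggest human support."
    return True, None

if __name__ == "__main__":
    allowed, notice = check_guardrails("user-42", "I feel hopeless about my relationship.")
    print(allowed, notice)
```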
To demo the full project submission, click here. To read more about it, click here.