Philosopher. Researcher. Founder. Working to make AI systems more responsible and less epistemically risky. Founder of the Viral Epistemology Lab. Researcher across Harvard, Northeastern, and beyond. AI safety advisor and public speaker.
Laura Haaber Ihle is a philosopher whose work sits at the intersection of AI ethics, epistemology, and AI safety and governance — working across academia and industry to ensure that AI systems are developed and deployed responsibly.
A central strand of her work concerns the epistemic risks that arise from AI. She is the founder of the Viral Epistemology Lab, an initiative dedicated to identifying, categorising, and mitigating epistemic risks in technology — from misinformation and hallucinations to broken information distribution networks. Alongside her research, she consults with private and public sector organisations on responsible AI implementation, governance frameworks, and assessment tools.
Laura has held research positions at Harvard University's Department of Philosophy and Northeastern University's Institute for Experiential AI, and served as VP of Ethics, Governance & Policy at the Responsible AI Future Foundation. She is a long-standing associate researcher at the AI Ethics Lab, an Expert Panelist on the MIT Sloan Management Review International Panel of AI Experts, a member of several IEEE working groups, and an associate editor at Springer Nature. She holds a PhD in philosophy, political science, and economics.
I work with organisations navigating the complexity of responsible AI — from early-stage strategy to implementation, governance, and beyond. If you are trying to understand how to move forward with AI safely and thoughtfully, that is exactly the kind of problem I enjoy working on.
All client cards are anonymised. Names available on request where no NDA applies.
Interested in working together? I would love to hear about your project.
Get in Touch
Working across disciplines with researchers, policymakers, and organizations who share a commitment to ethical AI development.
Open to research collaborations, speaking engagements, advisory roles, and helping to build strong AI governance — and always up for a chat about AI safety, ethics, epistemic risks in AI, or your next exotic project.