Researcher & Consultant on Responsible AI

Laura Haaber Ihle

Philosopher. Researcher. Founder. Working to make AI systems more responsible and less epistemically risky. Founder of the Viral Epistemology Lab. Researcher across Harvard, Northeastern, and beyond. AI safety advisor and public speaker.


About

Laura Haaber Ihle is a philosopher whose work sits at the intersection of AI ethics, epistemology, and AI safety and governance. She works across academia and industry to ensure that AI systems are developed and deployed responsibly.

A central strand of her work concerns the epistemic risks that arise from AI. She is the founder of the Viral Epistemology Lab, an initiative dedicated to identifying, categorising, and mitigating epistemic risks in technology — from misinformation and hallucinations to broken information distribution networks. Alongside her research, she consults with private and public sector organisations on responsible AI implementation, governance frameworks, and assessment tools.

Laura has held research positions at Harvard University's Department of Philosophy and Northeastern University's Institute for Experiential AI, and served as VP of Ethics, Governance & Policy at the Responsible AI Future Foundation. She is a long-standing associate researcher at the AI Ethics Lab, an Expert Panelist on the MIT Sloan Management Review International Panel of AI Experts, a member of several IEEE working groups, and an associate editor at Springer Nature. She holds a PhD in philosophy, political science, and economics.


Selected Projects

2016 — Ongoing
Viral Epistemology Lab
Founder
The Viral Epistemology Lab is a research & practice lab dedicated to understanding and mitigating the epistemic risks that arise from AI and digital technologies. Translating theoretical epistemology into practice, the Lab addresses problems related to misinformation, hallucinations, and broken information distribution networks, turning philosophical inquiry into applied tools for building healthier knowledge environments and protecting the authority of truth in an AI-driven world.
Epistemology AI Risk Misinformation
Visit lab →
2019 — Ongoing
AI & Mental Health
Researcher
This research project examines the use of AI in mental health contexts, with a particular focus on the ethical and practical problems raised by AI-based suicide ideation tools. The research is part of an ongoing collaboration with Dr. Annika Marie Schoene, a researcher specialising in AI safety and responsible deployment in high-risk domains, and has been published and presented extensively.
AI Safety Mental Health Ethics
See research →
2022 — Ongoing
Ansvarlig AI: AI in the Public Sector
Advisor & Consultant
Laura Ihle is the founder of the Danish initiative Ansvarlig AI, created to help minimise the risks of AI integration in public institutions through strong governance, sovereign infrastructure, and the operationalisation of AI regulation in the public realm. She engages extensively with public sector stakeholders as an external advisor, consultant, and expert. The initiative is grounded in the belief that AI implementation in a Danish and Scandinavian public context requires heightened awareness around governance: these states are responsible for many services that are privatised elsewhere, and their societies traditionally carry a high degree of trust between state and citizens that should be preserved as AI integration grows.
Governance EU AI Act Public Sector
Visit Ansvarlig AI →
2024 — Ongoing
Linguistic Datasets & Sovereign AI
This research examines how the availability — or absence — of sovereign linguistic datasets shapes fairness, cultural representation, and national autonomy within the global AI ecosystem. Laura Ihle highlights linguistic data as both a cultural and economic resource, and identifies risks arising from dependency on externally governed AI infrastructures, particularly for low-resource and underrepresented language communities.
Sovereignty Linguistic Data Low Resource Languages UNESCO
2024 — Ongoing
IEEE Working Group: Standards for Generative AI
Laura is an invited member of the IEEE working group focused on establishing standards for generative AI — a project aimed at clarifying how such standards can be designed, established, and integrated in practice across industry and beyond.
Standards Generative AI IEEE
Visit IEEE →
2023 — Ongoing
MIT Sloan International Panel of AI Experts
Since 2023, Laura has been an invited Expert Panelist on the MIT Sloan Management Review and Boston Consulting Group's International Panel of AI Experts, contributing expertise on Responsible AI to one of the most prominent global forums on AI strategy and governance.
Responsible AI MIT Sloan Expert Panel
Read more →


Selected Speaking Engagements

DevFest Silicon Valley 2024
Conference Talk
DevFest Silicon Valley · Google Developer Groups · 2024
Responsible AI — Best Practices for GenAI
Event page →
Konference om AI i Kommunerne
Conference Talk
Komponent · Odense · December 2024
Conference on AI in the Municipalities
KNOW NOW – AI & AI Act
Conference Talk
Danish Life Science Cluster · August 2025
Event page →
Bhutan's Role in AI Global Governance
Panel
AI for Development · GovAsia
Etik før algoritmer
Panel
Brief & Breakfast · Geelmuyden Kiese / DI Digital · 27 May
Etik før algoritmer: Hvem har ansvaret for ansvarlig AI i det offentlige? (Ethics Before Algorithms: Who Is Responsible for Responsible AI in the Public Sector?)
AI Opener for Destinations
Conference Talk
Group NAO · City Destinations Alliance
Dealing with Biases in AI
Webinar
with Cansu Canca & Julia Zacharias · July 30
Dealing with Biases in Artificial Intelligence
Egypt Media Forum 2025
Conference Talk
Egypt Media Forum · 2025
Responsible AI in the Media Industry

Consulting

I work with organisations navigating the complexity of responsible AI — from early-stage strategy to implementation, governance, and beyond. If you are trying to understand how to move forward with AI safely and thoughtfully, that is exactly the kind of problem I enjoy working on.

End-to-end Responsible AI Implementation
Responsible AI Strategy
Policy Design and Development
Audits and Risk Assessments
Evaluation and Monitoring
AI Safety, Security, and Red Teaming
Education, Workshops, and Training
Or simply getting a clearer picture of how to navigate AI implementation safely in practice

Selected Clients

Public Sector · Responsible AI Strategy & Policy · Denmark
Healthcare · AI Risk Assessment & Governance · EU
Media & Publishing · Responsible AI Strategy · Nordic Region
Technology · AI Ethics & Safety Evaluation · USA
Financial Services · AI Auditing & Monitoring · USA
Education · AI Policy & Workshops · International

All client cards are anonymised. Names available on request where no NDA applies.

Interested in working together? I would love to hear about your project.

Get in Touch

Selected Press & Media

Article
MIT Sloan Management Review
Artificial Intelligence Disclosures Are Key to Customer Trust
Read →
English
Podcast
Innovator's Hub
AI hos danske virksomheder: Teknologiske fremskridt og etiske overvejelser (AI in Danish Companies: Technological Advances and Ethical Considerations)
Watch →
Danish
Article
Mandag Morgen · September 2025
Deep fakes underminerer demokratiet (Deep Fakes Undermine Democracy)
Read →
Danish
Article
Ingeniøren · April 2024
Laura tjekker algoritmer i USA's største virksomheder: Danmark har en »grotesk« mangel på forståelse for dataetik (Laura Checks Algorithms at the Largest Companies in the US: Denmark Has a 'Grotesque' Lack of Understanding of Data Ethics)
Read →
Danish
Article
Northeastern Global News · October 2024
As artificial intelligence transforms gaming, researchers urge industry to adopt responsible AI practices
Read →
English
Article
Akademikerbladet · DM · 2023
Tech-filosof: Det er en opskrift på katastrofe, når Danmark udvikler AI uden at have styr på etikken (Tech Philosopher: It Is a Recipe for Disaster When Denmark Develops AI Without Having a Handle on Ethics)
Read →
Danish
Podcast
Mandag Morgen · September 2025
Podcast: Deep fakes underminerer demokratiet (Podcast: Deep Fakes Undermine Democracy)
Listen →
Danish
Interview
ReWork
Spotlight Interview
Watch →
English


Selected Publications

Springer · 2024
Ethics Guidelines for AI-based Suicide Prevention Tools
Co-authored with Annika Marie Schoene. Establishes ethical guidelines for the design and deployment of AI-based tools in suicide prevention contexts.
AI Safety Mental Health Read paper →
ACM Games · 2024
Why the Gaming Industry Needs Responsible AI
Co-authored with Cansu Canca and Annika Marie Schoene. Examines the ethical implications of AI in gaming and proposes responsible AI frameworks for the industry.
Responsible AI Gaming Read paper →
PhD Thesis · Scuola Superiore Sant'Anna
The Rise of Viral Epistemology
Doctoral thesis examining the epistemic risks arising from AI and digital technologies, and the structural differences between online and offline knowledge environments.
Epistemology AI Ethics Read thesis →
ACL Anthology · COLING 2025
Lexicography Saves Lives (LSL): Automatically Translating Suicide-Related Language
Co-authored with Annika Marie Schoene, John E. Ortega, and Rodolfo Joel Zevallos. Presents an automated approach to translating suicide-related language across linguistic contexts.
NLP Mental Health AI Safety Read paper →
EMNLP 2023
Are We Biased on Bias? Characterizing Social Bias Research in the ACL Community
Co-authored with Annika Marie Schoene, Ricardo A. Baeza-Yates, Kenneth Church, and Cansu Canca. A critical examination of how social bias is studied and characterised within the NLP research community.
Bias NLP AI Ethics Read paper →
Blackwell · Black Mirror & Philosophy · 2019
White Christmas and Technological Restraining Orders
Co-authored with Cansu Canca. A philosophical examination of digital blocking and technological restraining orders, using the Black Mirror episode "White Christmas" as a lens for exploring the ethics of digital exclusion.
Applied Ethics Philosophy of Technology Read chapter →


People & Projects I Like

Working across disciplines with researchers, policymakers, and organisations who share a commitment to ethical AI development.

AI Ethics Lab
Research & Consulting in AI Ethics
Visit →
Meronym Labs
AI Research & Development
Visit →
Cansu Canca
Philosopher · Founder, AI Ethics Lab
Visit →
Annika Marie Schoene
AI Safety & Responsible Deployment Researcher
Visit →
Deeply Human
Innovation Labs
Visit →
The Most Important Doorbell
The World's First Fish Doorbell · Utrecht
Ring it →


Let's collaborate

Open to research collaborations, speaking engagements, advisory roles, and work on building strong AI governance. Always up for a chat about AI safety, ethics, epistemic risks in AI, or your next exotic project.

Send a message