AI Ethics Strategy Lessons From H&M Group



Carolyn Geason-Beissel/MIT SMR | Getty Images

Artificial intelligence changes how organizations work, and that’s one of the reasons why it challenges our ethics: Who should take responsibility for automated decisions and actions? How much agency should algorithms have? How should we organize interactions between minds and machines? How does technology affect the workforce? Where are biases built into the system?

Companies, regulators, and policy makers search for steadfast ethical principles to help them navigate these moral mazes. They follow what feels like a logical strategy: First, identify universal values (such as transparency, fairness, human autonomy, or explainability), then define applications of the values (such as decision-making or AI-supported recruitment processes), and, finally, formalize them in codes of conduct. The idea is that codes of conduct for AI should override the computational code of AI.

But perhaps this linear approach is too simplistic. German philosopher of technology Günther Anders warned of the “Promethean gap” that opens up between our power to imagine and invent new technologies and our ethical ability to understand and manage those technologies.1 With AI, the Promethean gap widens into a chasm. The rapid developments and disruptions of AI outpace the principled deliberations about rules to govern AI’s application.

Static, rule-based ethics cannot keep up with rapidly changing AI technologies because such technologies question the very basis of our values and of humanity itself, argue the authors of a report from Harvard’s Edmond & Lily Safra Center for Ethics.2 They suggest that an “ethics of experimentation,” rather than a rules-based approach, is the “only kind of framework that can gain traction on the realities of the moment.” For organizations using AI, this raises the question: How does an “AI ethics of experimentation” work in practice? And how can companies bridge the Promethean gap between moral imagination and AI’s technological power?

To get pragmatic, real-world answers, we teamed up with global fashion retailer H&M Group to research and learn from its approach to AI ethics. Linda Leopold, H&M Group’s head of AI strategy (which encompasses AI ethics), has spent the past six years embedding the responsible use of AI across the organization.

“Our strategy is built on a combination of governance and culture,” Leopold explained. “You can’t approach AI ethics only with formal procedures. That has to do not only with the nature of the topic of ethics and the fast development of the AI technology but also with how people function: You need to reach both brains and hearts.”

Creating an AI Ethics Culture at H&M Group: Three Key Elements

Our research into H&M Group’s AI ethics strategy yields actionable lessons for other practitioners. We studied the company’s work on building an ethical culture, including literacy in, awareness of, and engagement with AI-related moral dilemmas, and identified three distinct elements that help employees across H&M Group put AI ethics into practice. Supplementing more traditional governance methods and tools, these elements add up to provide a moral compass for making good decisions in rapidly changing, turbulent environments.3

1. Debate concrete examples, and don’t seek perfection.

“The genie can’t be put back in the bottle,” said Google DeepMind CEO and Nobel Prize winner Demis Hassabis at the recent World Economic Forum in Davos, Switzerland, adding a stark warning that AI might threaten “the future of humanity, the human condition, and where we want to go as a society.” Related ethical questions raised by Hassabis and others, such as historian and science writer Yuval Harari or Geoffrey Hinton, who has been dubbed a “godfather of AI,” are no doubt important. However, rather than discussing whether AI is ushering in the dusk or dawn of humanity, the AI ethics conversation at H&M Group focuses on settings in which AI is used right now.

To increase awareness and understanding, H&M Group encourages AI ethics discussions about concrete business use cases set in contexts relevant to its employees. This work started as an experiment in 2019, with the creation of the Ethical AI Debate Club, where colleagues could meet to discuss fictional yet realistic scenarios of the moral dilemmas AI can present. Since then, the company has continued to develop the debate concept and has held many sessions, often with an audience watching a group of employees debate. Such discussions have become a key method in the effort to create an AI ethics culture and AI literacy.

One of the earliest debate scenarios involved the company’s AI-powered chatbot, named Mauricio, which provides H&M Group’s predominantly young customers with fashion advice. Mauricio is patient, understanding, and polite, and he has a good sense of humor. In short, he’s a really nice guy. While chatting with his young audience about fashion, the chatbot picks up a lot of customer data — including data about people’s personal well-being, psychological problems, intimate fears, desires, and disorders. Mauricio sometimes knows more about his customers than their parents or best friends do.

This scenario requires an organizational ethics practice: What is the company’s moral responsibility? Should it use the data? (The users have consented, after all.) Should H&M Group operate a helpline for struggling teenagers? Or should it shut down Mauricio altogether?

Ethics is often discussed in abstract terms, which makes it hard to pin down required behaviors. However, everybody can relate to Mauricio, imagine ethical challenges, and discuss arguments for or against a certain course of action, Leopold said. The discussion group’s work is about “doing ethics” as part of everyday organizational practice — not about deliberating universal values — and using storytelling to create engagement.

“Cases are written as short stories that can entertain, provoke, and spark your imagination,” Leopold noted. “All this is about building culture, strengthening the [individual’s] ethical awareness and compass.” The company currently uses these cases in awareness sessions that are adjusted to be relevant to people in the context of their own business function. “It will make it easier for them to identify and analyze new situations that require ethical deliberation in their specific context,” Leopold said.

Such an “ethics in the wild” approach has an interesting philosophical lineage. Aristotle argued that we learn to act morally one step at a time. Consider how we learn to play the piano, play golf, or speak a new language: We learn by doing, experimenting, failing, and doing better next time. We’re good at solving moral dilemmas in situ; we’re fully capable of knowing what is better. But — as humankind’s futile 2,500-year quest for global ethical principles suggests — we are ill-equipped to define what’s good and bad in absolutist terms.

A moral learning approach pushes us and our organizations to become better versions of ourselves — without our knowing what is unequivocally best. In fact, a pragmatic approach suggests that knowing what is better is enough for someone to make the right decision. We might not agree on what the best use of AI is, but in a concrete situation (like Mauricio’s data collection), we can weigh the pros and cons of a specific challenge and come to a (temporary) conclusion. It’s not perfect, but it’s good enough.

2. “Rules are tools”: Pursue evaluation, not judgment.

A second key element is H&M Group’s use of digital ethics principles, which are based on elements such as transparency, fairness, and environmental consciousness. The principles are backed up by examples of their application and by key considerations phrased as questions. Each set of questions invites staff members to reflect, probe moral mazes, and make conscious decisions.

Importantly, H&M Group’s digital ethics principles do not tell people what to do but rather what questions to ask and what aspects to consider — so people are equipped to develop their own answers in the various situations they experience. The principles function as a briefing document to prepare people for encounters with moral dilemmas, not as a guidebook to specific responses to them. As such, they function as keystones in the organizationwide strategy to build people’s capacity for moral action.

From an ethics researcher’s perspective, this approach has important parallels to pragmatist ethics. John Dewey, one of that movement’s key figures, argued that “rules are tools.” Dewey believed that rules are not abstract principles that tell us what is right and wrong in a specific situation. (Ethical AI challenges in the real world are much too messy, complex, and fluid for that.) Rather, he posited, rules help us discover the moral aspects of a concrete situation and offer new perspectives and alternatives for moral action. To paraphrase Dewey, moral rules are poor tools for deduction but useful tools for discovery.

At H&M Group, digital ethics principles are used as such discovery tools: They invite people to explore a variety of moral standpoints and perspectives that organizational members might take. They highlight the conditions for and consequences of taking action and express the company’s ethical stance. And they serve as a foundation and give direction, helping people to make conscious decisions that are in line with the company’s values.

This focus on rules as tools shifts the emphasis from judgment toward evaluation. For instance, when facing a specific case, an employee’s task is not to judge a priori whether a solution is, per se, morally good or bad; rather, the employee’s task is to evaluate the many consequences that related decisions might have for the business and a variety of stakeholders. Employees are usually pretty good at evaluating what is good and what is better — without necessarily knowing what is best.

To extend the argument, take the example of evaluating a scientific paper, a proposal for a new product, or a movie. In all three cases, we might not be able to define, in the abstract, what the best paper, idea, or film is, but we are pretty good at figuring out which paper is more interesting, which idea is more promising, and which movie is more enjoyable. The evaluation also leaves its trace for the next action to come, instigating a learning loop.

3. Make space for ethical infrastructures.

Concrete examples and evaluation skills work together to help shape people’s actions. However, for moral learning to persist and scale, it cannot rely solely on individual reflection — it needs structural support from the organization.4

H&M Group’s early debate club sessions and the debates that happen in several settings today are examples of institutionalized and psychologically safe environments where diverse audiences from different parts of the company — whether human rights experts, sustainability managers, data scientists, or engineers — can collectively reflect on and evaluate ethical issues.

Today, H&M Group does not hold as many debate club sessions as it used to, but it takes a similar approach to discussing moral dilemmas in almost all of its AI ethics awareness-raising presentations for teams and departments across the company, Leopold said. The goal is to shape an ethical culture of listening to others, speaking up, reflecting, and asking questions.

The design of such discussions matters. Who’s in the room, what’s included on and excluded from the agenda, the roles of participants and facilitators, the patterns of conversations, the rhythms of pro-versus-con debates — all of these add up to an infrastructure. H&M Group designs its debates specifically to enable moral learning: Participants study scenarios, take up possible moral viewpoints, argue with others who hold the opposing view, and then reflect on collective solutions to ethical dilemmas. One design choice is that participants are assigned a standpoint rather than getting to pick one. “The purpose here is practice and learning — twisting and turning arguments, no matter what your personal opinion on the topic might be,” Leopold said.

The process of debating, taking others’ perspectives, and thinking through the moral implications of business decisions is what matters most.

The idea behind building ethical infrastructures is unorthodox: We typically locate morality in the mind of the individual (consciousness), their compassion (heart), or their instinct (gut feelings). At H&M Group, by contrast, the focus is on developing collective moral reasoning — knowledge about what the organization, as a moral agent, should and should not do. Ultimately, ethical infrastructures ensure that moral learning is not left to chance but instead intentionally cultivated as an integral part of organizational life.

Build Your Organization’s Moral Compass

As the H&M Group example illustrates, organizations can practice AI ethics and become better, even if they haven’t completely figured out (and perhaps never will) what the best AI ethics practices are.

Given that rules-based ethics cannot keep up with the unruly, fast-changing world of AI, we suggest adopting an ethics model based on experimentation and learning. For executives, the takeaway is to think of ethics as a journey of moral learning, one that moves the organization from a top-down compliance culture to an evolving ethical culture.

The journey toward such an ethical culture starts with focusing on concrete organizational dilemmas, as H&M Group did. Instead of discussing abstract principles, have your people engage with cases and scenarios drawn from everyday organizational practice. To guide these conversations, use a rules-as-tools focus to help people discover the moral aspects of specific situations and craft possible resolutions. Finally, provide the space, time, and institutionalized arrangements to ensure that moral learning can take place (literally) in the organization.

Leaders need to give space to dissenting voices and foster a collective moral memory that allows the organization to become a better version of itself, one step at a time, without ever requiring it to become the “best” version. Ultimately, this strategy forgoes the search for the Holy Grail (or the Ten Commandments) of morality. Rather, as H&M Group’s experience illustrates, the goal for leaders is to equip collectives with a moral compass for making better decisions in ethically charged situations. As the AI landscape around us becomes more rugged, a moral compass is our best hope for keeping teams on track.