Designing Content Moderation & Appeals at Scale at Spotify

Redesigned Spotify’s moderation system to cut handling time by 50% across 7+ content types, and built a 0→1 appeals service to make enforcement decisions transparent, scalable, and DSA-compliant.

Role

Design Lead

Project type

B2B2C

Duration

18-20 months

Why this project matters

Moderation decisions shape how safe, fair, and trustworthy a platform feels, especially for creators.

At Spotify, millions of creators and listeners are affected by content decisions every day. At the same time, Spotify must comply with regulations such as the EU Digital Services Act (DSA), which requires transparency, appeal rights, and auditable decision-making.

This project matters because it sits at the intersection of:

  • Consumer trust and transparency

  • Human judgment under policy constraints

  • Scalable decision systems

  • Regulatory and business risk

Poorly designed systems here lead to slow decisions, higher operational cost, legal exposure, and frustrated users.


Project scope

This work covered 2 tightly connected systems in Spotify’s enforcement ecosystem:

  • Monocle - an internal platform where operators review reported content and take enforcement actions

  • Appeals - a 0→1 service that allows creators and listeners to challenge moderation decisions

While Monocle had a clear operational goal (reduce handling time and improve accuracy), Appeals was highly ambiguous at the start.

There was no agreed model for:

  • Who could appeal

  • Which cases were in scope (IP vs non-IP)

  • How decisions should flow between operators, policy, and legal

  • How much transparency was appropriate for users

Part of my role was to frame the problem space, define system boundaries, and help teams align on what needed to be built before deciding how to build it.

Rather than treating these as separate tools, the scope became designing one end-to-end enforcement journey:

report → review → decision → notification → appeal → resolution → audit
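The journey above can be read as a simple state machine. As a minimal sketch, the stage names come directly from the flow, while the transition rules (e.g. that a notification may end in audit without an appeal) are illustrative assumptions, not Spotify's actual case logic:

```python
from enum import Enum

class Stage(Enum):
    REPORT = "report"
    REVIEW = "review"
    DECISION = "decision"
    NOTIFICATION = "notification"
    APPEAL = "appeal"
    RESOLUTION = "resolution"
    AUDIT = "audit"

# Which stages each stage may hand off to (illustrative assumption:
# not every decision is appealed, so notification can go straight to audit).
TRANSITIONS = {
    Stage.REPORT: {Stage.REVIEW},
    Stage.REVIEW: {Stage.DECISION},
    Stage.DECISION: {Stage.NOTIFICATION},
    Stage.NOTIFICATION: {Stage.APPEAL, Stage.AUDIT},
    Stage.APPEAL: {Stage.RESOLUTION},
    Stage.RESOLUTION: {Stage.AUDIT},
    Stage.AUDIT: set(),  # terminal: the case record is archived
}

def advance(current: Stage, target: Stage) -> Stage:
    """Move a case to the next stage, rejecting out-of-order jumps."""
    if target not in TRANSITIONS[current]:
        raise ValueError(f"cannot move from {current.value} to {target.value}")
    return target
```

Making the allowed transitions explicit is what lets every later decision be reconstructed and audited: a case can never silently skip a stage.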


Role & responsibilities

Product Designer (lead on both systems)

  • Led end-to-end design across moderation tooling and appeals

  • Partnered closely with Product, Engineering, Trust & Safety, Policy, and Legal

  • Designed for multiple user groups: operators, policy experts, creators, and listeners

  • Spent significant time shaping ambiguous problem spaces and aligning teams before execution

Scale

  • Used daily by 200+ operators

  • Serving users across 180+ regions


Research and Insights

What wasn’t working

Moderation (Monocle)

  • Operators used 20+ disconnected tools to review a single case

  • Context, history, and actions lived across systems

  • Tool switching increased cognitive load and error risk

Appeals

  • Appeals existed mainly as email threads, and only for IP legal cases

  • No clear process for non-IP cases (e.g. hate speech, harassment)

  • Decisions were hard to reconstruct or audit

  • With DSA coming, appeal volume and transparency expectations were expected to increase

For Appeals in particular, there was no clear blueprint upfront — we had to learn by doing and refine the model along the way.

Key insights

  • Operators need all relevant context in one place to make confident decisions

  • Appeals cannot be treated as a single form - IP and non-IP cases follow different legal paths

  • Decisions must be traceable and explainable, not just fast

  • Gaps between internal decisions and external communication directly undermine user trust

These insights made it clear that moderation and appeals could not be solved independently.


Real world testing

I developed 5 concepts and ran 9 usability sessions with moderation operators and policy experts across the US, UK, EU, and India, covering multiple content types.

Testing directly shaped the design:

  • Kept key decision metadata (content type, policy applied, prior actions) visible at the top of the review page, reducing tool switching and supporting faster, more accurate decisions

  • Added decision history logs so reviewers could understand case context at a glance

  • Introduced team-specific terminology to reduce interpretation errors across regions and teams

  • Validated the new combined review + action flow with 100% task completion across all test sessions

  • Post-launch, iterated with Product, Engineering, and Policy through regular feedback sessions, refining workflows and extending Monocle to 7+ content types

  • For Appeals, validated flows with Policy and Legal and ran a limited rollout (~300 notifications, 10 appeals) to confirm end-to-end reliability before wider rollout

Appeals — designing the system as we learned

Appeals started without a fixed model. The system evolved as we learned more about regulatory requirements, operator constraints, and user expectations.

Key decisions:

  • Mapped end-to-end journeys for creators, listeners, operators, and policy experts

  • Designed separate flows for IP and non-IP cases

  • Aligned service design with backend case infrastructure

  • Made status, outcomes, and next steps explicit for users

  • Ensured every appeal decision could be reviewed internally

The service was shaped iteratively, with continuous refinement rather than a one-off delivery.
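The split between IP and non-IP flows can be sketched as a small routing rule. The category names and track labels here are illustrative assumptions, not Spotify's actual taxonomy:

```python
# Hypothetical category set: IP claims follow a legal track,
# everything else follows a policy-review track.
IP_CATEGORIES = {"copyright", "trademark"}

def route_appeal(category: str) -> str:
    """Pick the review track for an incoming appeal."""
    if category in IP_CATEGORIES:
        return "legal_review"   # IP cases have a distinct legal path
    return "policy_review"      # non-IP cases (e.g. hate speech, harassment)
```

Keeping the routing rule explicit, rather than a single shared form, is what allows each track to set its own evidence requirements and response timelines.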

From Insights to Design

The core challenge wasn’t redesigning screens - it was defining how a fragmented, policy-driven process should work as a coherent system.

Framing ambiguity with service design - to move forward in an undefined space, I used service design diagrams to:

  • Map how decisions actually moved across operators, policy, legal, and users

  • Surface unclear ownership, handoffs, and decision points

  • Create a shared mental model before committing to UI design

I ran cross-functional workshops and JAM sessions to stress-test scenarios, align on decision logic, and converge on a shared operating model across teams.

These artefacts were used as alignment tools, not documentation for its own sake.

Monocle - a single decision workspace

Monocle replaced fragmented tooling with one clear review environment. The goal was to reduce handling time without sacrificing accuracy or fairness.

Key decisions:

  • Consolidated context, history, and actions into one interface - operators review and tag content from a single platform

  • Defined a shared tagging structure with policy teams

  • Designed a clear information hierarchy to support fast, accurate decisions

  • Embedded decision history and audit logs directly into the workflow
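Embedding audit history into the workflow amounts to recording an immutable entry for every enforcement action. As a minimal sketch with invented field names (not Spotify's actual schema):

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)  # frozen: an entry can never be edited after the fact
class DecisionLogEntry:
    case_id: str
    operator: str
    content_type: str    # e.g. "podcast_episode" (hypothetical label)
    policy_applied: str  # the policy tag the operator selected
    action: str          # e.g. "remove", "restrict", "no_action"
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

class AuditLog:
    """Append-only log: entries can be added and read, never mutated."""

    def __init__(self) -> None:
        self._entries: list[DecisionLogEntry] = []

    def record(self, entry: DecisionLogEntry) -> None:
        self._entries.append(entry)

    def history(self, case_id: str) -> list[DecisionLogEntry]:
        return [e for e in self._entries if e.case_id == case_id]
```

An append-only structure like this is what makes decisions reconstructable later: reviewers, policy, and legal all read the same ordered record of who did what, under which policy, and when.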

Impact

Together, Monocle and Appeals replaced fragmented moderation tools and email-based processes with a single, coherent enforcement system spanning internal operations and consumer-facing transparency.

  1. 50% reduction in moderation handling time (~32 → 12 minutes per case)

  2. ~50,000 minutes saved per day across moderation teams

  3. 200+ global operators using Monocle as their primary workflow, replacing 20+ legacy tools

  4. 7+ content types supported with consistent enforcement logic

  5. Appeals moved from email to a structured, auditable service, enabling traceability and reducing operational risk

  6. DSA compliance achieved ahead of enforcement, with appeal rights and decision logs in place

As the system became coherent, operators focused more on judgement than navigation, and policy and legal teams gained a clear, auditable view of every decision.




Tiffany Szu-Chia Chen

Copyright 2025 by Tiffany Szu-Chia Chen
