[ PROJECT ]

Arstelio

2025 · product design · web app
[ SCOPE ]
conversation management · knowledge base · AI workflows
[ SUMMARY ]

Turning 40,000 Closed Tickets Into a Self-Service Knowledge System. I designed a multi-tenant support platform that converts closed tickets into AI-assisted help articles, reducing repeat tickets by 60–70% while giving admins system-wide visibility across customer workspaces.

[ PROBLEM ]

Support teams were drowning in repetition while customers increasingly expected instant self-service.

  • 40,000+ resolved tickets existed, but their knowledge was locked in closed conversations

  • Agents rewrote identical answers across thousands of tickets, creating burnout and inefficiency

  • Knowledge bases (when present) were manually written, outdated, or disconnected from real customer issues

  • 67% of customers preferred self-service (Zendesk), yet they were forced into agent-only workflows for issues with known solutions

The Core Tension:

It wasn't a lack of answers; it was the lack of a system that turned answers into reusable knowledge.

[ INSIGHTS ]

Early exploration and stakeholder interviews surfaced patterns that shaped both the UX and underlying data model:

  • Insight: A small set of questions drives a disproportionate portion of ticket volume.
    Decision: Build repeat detection into the core workflow and surface frequency counts automatically when questions cross meaningful thresholds.
  • Insight: Admins told us, "We don't know which articles are actually being used"; existing KBs became graveyards of stale content.
    Decision: Build health indicators into every article based on agent usage, customer feedback, and ticket deflection rates.
  • Insight: Support agents rejected AI-generated suggestions during active conversations; they felt interrupted and pressured to choose efficiency over empathy.
    Decision: Move article creation prompts to after ticket closure, when the work is validated and the agent can choose to create knowledge as a bonus, not a burden.

The solution is a knowledge flywheel powered by real conversations.

The core loop

Ticket closes → System detects repetition → AI drafts article → Agent refines and publishes → Article surfaces in future tickets → Repeat volume drops

[ SOLUTION ]

1. Multi-Tenant Workspace Model

Each customer operates in an isolated workspace with their own conversations, knowledge base, channels, and teams. A global organization view allows admins to monitor support health across all customers, identify repeat issues at scale, and measure knowledge impact without data leakage or cognitive overload.
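The isolation model described above can be sketched in a few lines. This is an illustrative sketch only, assuming hypothetical names (`Workspace`, `Organization`, `org_overview`), not the actual Arstelio schema: every read is scoped to a single workspace, while the admin view aggregates counts without ever mixing tenant content.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the multi-tenant model; names and fields are assumed,
# not taken from the real product.

@dataclass
class Article:
    workspace_id: str
    title: str
    status: str = "draft"

@dataclass
class Workspace:
    workspace_id: str
    articles: list = field(default_factory=list)

class Organization:
    """Holds isolated workspaces; the org view aggregates without mixing data."""

    def __init__(self):
        self.workspaces = {}

    def add_workspace(self, ws):
        self.workspaces[ws.workspace_id] = ws

    def workspace_articles(self, workspace_id):
        # Tenant-scoped read: only ever touches one workspace's data.
        return self.workspaces[workspace_id].articles

    def org_overview(self):
        # Admin rollup: aggregate counts only, no cross-tenant content leakage.
        return {wid: len(ws.articles) for wid, ws in self.workspaces.items()}
```

The key design property is that no code path returns article content across workspace boundaries; the org-level view only ever sees aggregates.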

[ solution-1 ]


2. AI-Assisted Knowledge Creation

Knowledge creation happens after value is delivered to the customer.

  • The trigger: A ticket is closed and the system detects the question has appeared multiple times.
  • The flow: A contextual prompt appears: "This question has appeared 27 times. You can send an email to this customer to continue the conversation, or create a reusable help article for future reference."

Critical design choice:

Nothing is auto-published. Agents can edit, refine, or discard drafts entirely. AI acts as a drafting assistant, not an author or publisher.
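The trigger logic above can be sketched as a small decision function. This is a hedged sketch under assumptions: the threshold value and the `question_counts` structure are invented for illustration, and the real prompt copy is the one quoted above.

```python
from collections import Counter

REPEAT_THRESHOLD = 10  # assumed value; the product's actual threshold isn't stated

def closure_prompt(closed_question_key, question_counts):
    """After a ticket closes, decide whether to prompt the agent to draft an article.

    Returns the prompt text when the question has crossed the repetition
    threshold, or None so knowledge creation stays optional (a bonus, not a burden).
    """
    count = question_counts[closed_question_key]
    if count >= REPEAT_THRESHOLD:
        return (f"This question has appeared {count} times. "
                "Create a reusable help article for future reference?")
    return None  # below threshold: no interruption at all

# Usage: counts come from the batch repetition analysis described later.
counts = Counter({"password-reset": 27, "invoice-download": 3})
prompt = closure_prompt("password-reset", counts)
```

Note the prompt only ever fires after closure, matching the design choice that agents are never interrupted mid-conversation.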

[ solution-2 ]


3. Knowledge Base Management

The knowledge base mirrors how support actually works.

  • Structure: Collections aligned to real support categories (Login, Billing, Account Setup); articles tagged to complaint types for accurate suggestion and reuse.
  • Governance: Draft vs. Published states, role-based publishing permissions, full activity history for each article.
  • Health indicators: Each article displays a status (Healthy, Needs Review, Critical) driven by:
    • Agent attachment frequency (are agents still using it?)
    • Customer feedback (thumbs up/down after resolution)
    • Impact on repeat ticket volume (is it deflecting future tickets?)

This turned the knowledge base into a living dashboard, not a write-once-forget archive.
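The three-signal health status can be sketched as a simple classifier. This is an assumption-laden illustration: the threshold values and signal ranges (all normalized to 0–1 here) are hypothetical, not the product's actual tuning.

```python
def article_health(attach_rate, feedback_score, deflection_rate):
    """Classify an article as Healthy / Needs Review / Critical from three signals.

    attach_rate:     fraction of relevant tickets where agents attached it (0-1)
    feedback_score:  share of thumbs-up among customer ratings (0-1)
    deflection_rate: estimated repeat-ticket reduction attributed to it (0-1)

    Thresholds below are illustrative placeholders.
    """
    signals_ok = [
        attach_rate >= 0.2,      # agents still reach for it
        feedback_score >= 0.6,   # customers find it helpful
        deflection_rate >= 0.1,  # it measurably prevents repeat tickets
    ]
    if all(signals_ok):
        return "Healthy"
    if any(signals_ok):
        return "Needs Review"
    return "Critical"
```

Because every status traces back to observed usage rather than authoring date, stale articles surface automatically instead of silently rotting.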

[ solution-3 ]

[ IMPACT ]

Qualitative outcomes

  • Support teams reported spending less time on repetitive answers and more time solving novel, high-value problems. One support lead said: "For the first time, we're getting smarter as a team instead of just working harder."

Operational gains

  • 60–70% reduction in repeat tickets for common issues
  • 40% increase in customer self-service resolution rate
  • 30% reduction in average agent response time
  • Majority of high-frequency questions converted into reusable articles within the first month
  • Clear attribution between article usage and ticket deflection
[ REFLECTION ]

AI works best when it scales human judgment, not replaces it. By grounding automation in real, validated conversations and giving agents full control over what gets published, the system earned trust from both customers and support teams.

  • What surprised me

    I assumed agents would want article suggestions during conversations to speed up their responses. Testing proved the opposite: in-conversation suggestions felt intrusive and pressuring. Moving prompts to after closure made adoption instant because it didn't interrupt agents' focus on the customer in front of them.

  • Technical tradeoff: Real-time vs. batch processing

    • We debated whether to analyze tickets in real time (detect patterns as conversations happen) or in batch (nightly jobs identifying repeat questions).
    • Real-time would have been flashier, but it added UI complexity (changing frequency counts mid-shift), introduced latency (pattern detection had to be fast), and risked false positives (a single-day spike might not be meaningful).
    • We chose batch processing. Every morning, the system updated which questions had crossed the repetition threshold. Agents saw stable, confident signals, not noisy real-time guesses.
  • What I'd change with more time

    • Stronger onboarding flow for first-time admins navigating the org-level view
    • More predictive signals for article decay (e.g., "This article hasn't been used in 30 days despite high search volume")
    • Cross-workspace benchmarking so admins could see how their knowledge health compares to similar organizations
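The batch-processing choice described in the tradeoff above can be sketched as a nightly aggregation job. This is a hypothetical sketch (the `question_key` clustering field and the default threshold are assumed): once per morning, counts are recomputed over closed tickets, so agents see one stable daily signal instead of frequency counts shifting mid-shift.

```python
from collections import Counter

# Hypothetical sketch of the nightly batch job; field names are assumed.

def nightly_repeat_detection(closed_tickets, threshold=10):
    """Run once per morning over closed tickets.

    Counts tickets per question cluster and returns only the clusters that
    have crossed the repetition threshold - a stable daily signal rather
    than a noisy real-time one.
    """
    counts = Counter(t["question_key"] for t in closed_tickets)
    return {key: n for key, n in counts.items() if n >= threshold}

# Usage: 12 password-reset closures cross the threshold, 4 billing ones don't.
tickets = ([{"question_key": "password-reset"}] * 12
           + [{"question_key": "billing"}] * 4)
flagged = nightly_repeat_detection(tickets)
```

Running this as a batch also makes single-day spikes easy to smooth out later (e.g., by counting over a trailing window) without touching any real-time code path.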
[ NEXT ]

Adsly

A platform that takes campaigns from brief creation through content submission, review, and payment