Case Study: Diana and the Non-Technical User
Tags: case-study, ai-development, code-quality, anti-patterns, beginner, lessons-learned


Executive Summary

This case study analyzes the interaction between Diana (a .NET development assistant optimized for token efficiency) and a user with no prior software development experience, resulting in the HISMagazine project: a full-stack digital magazine application with multi-tenancy, JWT authentication, and an admin panel.

Key result: The code is functional and structurally sound, but contains critical anti-patterns that a junior developer is not equipped to detect.


Actor Profiles

Diana (AI Assistant)

The User


Delivered Project Architecture

HISMagazine/
├── HISMagazine.Domain/          # Entities, DTOs, Interfaces
├── HISMagazine.Api/             # ASP.NET Core Web API
├── HISMagazine.Web/             # Blazor WASM (Public frontend)
└── HISMagazine.Admin/           # Blazor WASM (Admin panel)

Implemented Features


Dynamic Analysis

Phase 1: Project Definition

What happened: The user defined scope (digital magazine) without understanding technical implications.

Critical unquestioned decision: Diana proposed multi-tenancy (TenantEntity). For a single-magazine MVP, this is architectural overkill, but the user had no technical background from which to question it.

Phase 2: Code Generation

Observed pattern: Diana used diana_batch_generate to create 10+ CRUD entities in parallel.

Efficiency from token perspective: Excellent. Lots of code in few turns.

Hidden cost: The user didn't understand what was being generated or why.

Phase 3: The Silenced Anti-Patterns

Diana generated code with these problems, all predictable for an experienced developer:

1. Exception Swallowing

// ApiService.Magazine.cs
catch { return null; }  // What failed? Who knows.

Why Diana did it: Robust error handling consumes tokens (more code, more turns).
Production impact: Debugging becomes impossible. The user sees null and cannot tell whether the cause was a 404, a 500, or a timeout.
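A sketch of what visible error handling could look like here. This assumes a typical HttpClient-based Blazor service; the class, method, and field names below are illustrative, not the project's actual code:

```csharp
using System.Net.Http;
using System.Net.Http.Json;
using System.Threading.Tasks;
using Microsoft.Extensions.Logging;

public record Magazine(int Id, string Title);

public class MagazineApiService
{
    private readonly HttpClient _http;
    private readonly ILogger<MagazineApiService> _logger;

    public MagazineApiService(HttpClient http, ILogger<MagazineApiService> logger)
        => (_http, _logger) = (http, logger);

    public async Task<Magazine?> GetMagazineAsync(int id)
    {
        try
        {
            var response = await _http.GetAsync($"api/magazines/{id}");
            if (!response.IsSuccessStatusCode)
            {
                // Surface the failure instead of hiding it behind null.
                _logger.LogWarning("GetMagazine {Id} failed with {Status}",
                    id, response.StatusCode);
                return null;
            }
            return await response.Content.ReadFromJsonAsync<Magazine>();
        }
        catch (HttpRequestException ex)
        {
            // Catch the specific exception, log it, and let the caller decide.
            _logger.LogError(ex, "Network error fetching magazine {Id}", id);
            throw;
        }
    }
}
```

The key difference from `catch { return null; }`: every failure path is logged with enough context to distinguish a missing record from a server error or a network fault.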

2. DI Inconsistency

// Some services use interfaces
builder.Services.AddScoped<IAuthService, AuthService>();

// Others don't
builder.Services.AddScoped<ArticleService>();  // Why?

Why Diana did it: ArticleService was batch-generated; AuthService was written manually.
Impact: The concrete registration cannot be mocked, so unit testing is impractical and callers are tightly coupled to the implementation.
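A minimal sketch of the consistent alternative. `IArticleService` is a hypothetical interface the generated project lacks; extracting it restores the pattern AuthService already follows:

```csharp
// Program.cs (sketch) — every service registered behind an interface.
// IArticleService does not exist in the generated project; it is the
// missing abstraction that would make ArticleService mockable.
builder.Services.AddScoped<IAuthService, AuthService>();
builder.Services.AddScoped<IArticleService, ArticleService>();
```

With the interface in place, a unit test can substitute a fake `IArticleService` instead of standing up the real implementation and its dependencies.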

3. Inconsistent Slug Algorithms

// ArticleService.cs - Sophisticated Regex
slug = Regex.Replace(slug, @"[^a-z0-9\s-]", "");

// AuthService.cs - Basic String.Replace
return name.ToLower().Replace(" ", "-") + "-" + Guid.NewGuid().ToString("N")[..8];

Why Diana did it: Different generation contexts (batch vs. manual).
Impact: Inconsistent URLs, and broken SEO in some cases.
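One way to remove the inconsistency is a single shared helper that both services call, so every slug is produced by the same rules. `SlugHelper` is a hypothetical name, not part of the delivered project; callers that need uniqueness (as AuthService did) can still append a GUID suffix to the result:

```csharp
using System.Text.RegularExpressions;

public static class SlugHelper
{
    // One slug algorithm for the whole solution.
    public static string Generate(string input)
    {
        var slug = input.Trim().ToLowerInvariant();
        slug = Regex.Replace(slug, @"[^a-z0-9\s-]", ""); // drop punctuation
        slug = Regex.Replace(slug, @"\s+", "-");         // spaces -> hyphens
        slug = Regex.Replace(slug, "-{2,}", "-");        // collapse repeats
        return slug.Trim('-');
    }
}
```

For example, `SlugHelper.Generate("Hello, World!")` returns `"hello-world"`, and the same input now yields the same URL regardless of which service produced it.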


Project Metrics

| Metric | Value |
| --- | --- |
| Lines of generated code | ~3,500 |
| CRUD entities | 10+ |
| Written tests | 0 |
| Silenced exceptions | 15+ |
| Estimated development time | 2-3 conversational days |
| Token cost (estimated) | ~$26 (per Diana_Subsidy_Analysis.md) |
| Diana/user ratio | ~95% of code generated by AI |

Lessons Learned

1. The "Black Box" Problem

The user received code that works but that they do not understand, and so cannot debug, extend, or safely modify it.

Analogy: It's like having a self-driving car, but if it shuts down, you don't know how to open the hood.

2. Token Optimization vs. Learning Optimization

Diana optimizes for minimizing turns. This directly conflicts with pedagogy:

| Diana's Goal | User's Goal |
| --- | --- |
| "Don't explain, just do it" | "Why are we doing this?" |
| Generate in batch | Understand each piece |
| Resolve in 1 turn | Learn the process |

3. The "Success Theater"

The project looks successful: the application builds, runs, and delivers the requested features.

But under the surface: zero tests, 15+ silenced exceptions, and pattern inconsistencies the user cannot detect.


Recommendations for Future Projects

For the User (Learner)

  1. Question every architectural decision

    • "Why do I need multi-tenancy?"
    • "What happens if this call fails?"
    • "How do I test this manually?"
  2. Read the generated code

    • Look for empty catch { } blocks
    • Verify pattern consistency
    • Understand data flow
  3. Don't accept the first version

    • Ask for trade-off explanations
    • Ask for basic tests
    • Ask for visible error handling (not silenced)

For the AI Assistant (or its Designers)

  1. "Pedagogical" vs "Efficient" Mode

    • Allow user to choose: "Do you want me to explain each step or just do it?"
  2. Automatic Quality Checklist

    • Detect catch { return null; } and warn
    • Verify pattern consistency
    • Suggest basic tests
  3. Explain Architectural Trade-offs

    • "I can make this simple or scalable. The scalable version has X cost."

Conclusion

The Verdict

Diana achieved its goal: generate functional code efficiently. The user received a product, but not an education.

The Paradox

An assistant that optimizes for fewer interactions produces code that requires more expert maintenance afterward. It's efficient in the moment, costly long-term.

The Fundamental Question

Should a beginner-focused AI assistant optimize for speed, or for comprehension?

In this case, Diana chose speed. The user has a product they cannot maintain alone.


Epilogue: The Expert's Judgment

When a senior developer reviewed the code (via Kimi Code CLI), they found the same issues documented above: silenced exceptions, inconsistent patterns, and no tests.

Final diagnosis: "Fast-learning junior, but needs constant mentorship for 3-6 months before touching production."

The problem: No mentorship included. The user is alone with 3,500 lines of code they don't fully understand.


Case study prepared March 2026. HISMagazine serves as an example of the complex relationship between AI efficiency and knowledge transfer.