Case Study: Diana and the Non-Technical User
Executive Summary
This case study analyzes the interaction between Diana (a .NET development assistant optimized for token efficiency) and a user with no prior software development experience, resulting in the HISMagazine project: a full-stack digital magazine application with multi-tenancy, JWT authentication, and an admin panel.
Key result: The code is functional and structurally sound, but contains critical anti-patterns that a junior developer is not equipped to detect.
Actor Profiles
Diana (AI Assistant)
- Designed for: Token efficiency and execution speed
- Philosophy: "Fewer turns = better"
- Self-imposed constraints:
- Maximum 1-3 turns for simple bug fixes
- Prohibited from explaining before acting
- "Don't narrate what you're going to do. Just do it."
- Mandatory use of batch code generation (`diana_batch_generate`)
The User
- Background: No software development experience
- Role in project: De facto Product Owner + Accidental Architect
- Limitations: Cannot evaluate code quality, doesn't understand technical trade-offs, cannot detect technical debt
Delivered Project Architecture
```
HISMagazine/
├── HISMagazine.Domain/   # Entities, DTOs, Interfaces
├── HISMagazine.Api/      # ASP.NET Core Web API
├── HISMagazine.Web/      # Blazor WASM (public frontend)
└── HISMagazine.Admin/    # Blazor WASM (admin panel)
```
Implemented Features
- Multi-tenancy (TenantEntity base)
- JWT Authentication with refresh tokens
- Full CRUD: Articles, Categories, Authors, Podcasts, Sponsors
- Media upload
- Slug system for friendly URLs
- Admin panel with MudBlazor
Dynamic Analysis
Phase 1: Project Definition
What happened: The user defined scope (digital magazine) without understanding technical implications.
Critical unquestioned decision:
Diana proposed multi-tenancy (TenantEntity). For a single-magazine MVP, this is architectural overkill. But:
- Diana didn't explain the trade-off
- The user didn't ask "why do I need that?"
- Result: Additional complexity with no immediate benefit
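The case study doesn't reproduce the entity itself, but a `TenantEntity` base in this style typically looks something like the sketch below (the property names are assumed, not taken from the project):

```csharp
// Hypothetical sketch of the TenantEntity base described above.
// Every entity inheriting from it carries a TenantId, so every query
// must filter by tenant -- complexity a single-magazine MVP never uses.
public abstract class TenantEntity
{
    public int Id { get; set; }
    public Guid TenantId { get; set; } // discriminates rows per tenant
}

public class Article : TenantEntity
{
    public string Title { get; set; } = string.Empty;
    public string Slug { get; set; } = string.Empty;
}
```

With a single tenant, every `TenantId` column and query filter is dead weight the user now has to carry without knowing why it exists.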
Phase 2: Code Generation
Observed pattern: Diana used `diana_batch_generate` to create 10+ CRUD entities in parallel.
Efficiency from token perspective: Excellent. Lots of code in few turns.
Hidden cost: The user didn't understand what was being generated or why.
Phase 3: The Silenced Anti-Patterns
Diana generated code with these problems, all predictable for an experienced developer:
1. Exception Swallowing
```csharp
// ApiService.Magazine.cs
catch { return null; } // What failed? Who knows.
```
Why Diana did it: Robust error handling consumes tokens (more code, more turns).
Production impact: Debugging becomes impossible. The user will see `null` and won't know whether the cause was a 404, a 500, or a timeout.
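A minimal sketch of what visible error handling could look like here, assuming a generic `GetAsync` helper on the project's `ApiService` (the method name and logger are assumptions, not from the original code):

```csharp
using System.Net.Http;
using System.Net.Http.Json;
using Microsoft.Extensions.Logging;

// Hypothetical ApiService method: same caller-facing behavior as the
// original on failure (returns default), but the failure reason is
// logged instead of swallowed.
public class ApiService
{
    private readonly HttpClient _http;
    private readonly ILogger<ApiService> _logger;

    public ApiService(HttpClient http, ILogger<ApiService> logger)
        => (_http, _logger) = (http, logger);

    public async Task<T?> GetAsync<T>(string url)
    {
        try
        {
            return await _http.GetFromJsonAsync<T>(url);
        }
        catch (HttpRequestException ex)
        {
            // 404 vs 500 is now visible in the logs (StatusCode is
            // available on HttpRequestException in .NET 5+).
            _logger.LogError(ex, "GET {Url} failed (status: {Status})", url, ex.StatusCode);
            return default;
        }
        catch (TaskCanceledException ex)
        {
            // Timeouts get their own, distinguishable log entry.
            _logger.LogError(ex, "GET {Url} timed out", url);
            return default;
        }
    }
}
```

The caller still receives `null` on failure, but the operator can now tell the three failure modes apart.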
2. DI Inconsistency
```csharp
// Some services use interfaces
builder.Services.AddScoped<IAuthService, AuthService>();

// Others don't
builder.Services.AddScoped<ArticleService>(); // Why?
```
Why Diana did it: ArticleService was batch-generated, AuthService was manual.
Impact: The concrete class can't be mocked, so unit testing is impractical, and callers are tightly coupled to the implementation.
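The fix is mechanical: extract an interface and register both services the same way. A sketch, assuming an `IArticleService` interface (hypothetical; the project only registers the concrete class):

```csharp
// Extracting an interface restores testability and pattern consistency.
public interface IArticleService
{
    Task<Article?> GetBySlugAsync(string slug);
}

// Program.cs -- both services now follow the same registration pattern,
// and either can be replaced with a mock in tests.
builder.Services.AddScoped<IAuthService, AuthService>();
builder.Services.AddScoped<IArticleService, ArticleService>();
```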
3. Inconsistent Slug Algorithms
```csharp
// ArticleService.cs - sophisticated Regex
slug = Regex.Replace(slug, @"[^a-z0-9\s-]", "");

// AuthService.cs - basic String.Replace
return name.ToLower().Replace(" ", "-") + "-" + Guid.NewGuid().ToString()[..8];
```
Why Diana did it: Different generation contexts (batch vs. manual).
Impact: Inconsistent URLs; broken SEO in some cases.
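One way to remove the inconsistency is a single shared helper that both services call. A sketch (`SlugGenerator` is a hypothetical name, not part of the project):

```csharp
using System.Text.RegularExpressions;

// Hypothetical shared helper: one slug algorithm for the whole solution,
// so ArticleService and AuthService can't drift apart again.
public static class SlugGenerator
{
    public static string ToSlug(string input)
    {
        var slug = input.Trim().ToLowerInvariant();
        slug = Regex.Replace(slug, @"[^a-z0-9\s-]", ""); // strip punctuation
        slug = Regex.Replace(slug, @"[\s-]+", "-");      // collapse whitespace/dashes
        return slug.Trim('-');
    }
}

// Usage in both services:
// var slug = SlugGenerator.ToSlug(article.Title); // "Hello, World!" -> "hello-world"
```

Moving the logic to `HISMagazine.Domain` would make it visible to every service that needs it.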
Project Metrics
| Metric | Value |
|---|---|
| Lines of generated code | ~3,500 |
| CRUD entities | 10+ |
| Written tests | 0 |
| Silenced exceptions | 15+ |
| Estimated development time | 2-3 conversational days |
| Token cost (estimated) | ~$26 (per Diana_Subsidy_Analysis.md) |
| Diana/user ratio | ~95% code generated by AI |
Lessons Learned
1. The "Black Box" Problem
The user received working code they do not understand. They cannot:
- Debug authentication issues
- Extend functionality without Diana
- Evaluate if a solution is appropriate
Analogy: It's like having a self-driving car, but if it shuts down, you don't know how to open the hood.
2. Token Optimization vs. Learning Optimization
Diana optimizes for minimizing turns. This directly conflicts with pedagogy:
| Diana's Goal | User's Goal |
|---|---|
| "Don't explain, just do it" | "Why are we doing this?" |
| Generate in batch | Understand each piece |
| Resolve in 1 turn | Learn the process |
3. The "Success Theater"
The project looks successful:
- Compiles
- Has many features
- Looks professional
But under the surface:
- Not maintainable by humans
- Has critical technical debt
- No security tests
Recommendations for Future Projects
For the User (Learner)
Question every architectural decision
- "Why do I need multi-tenancy?"
- "What happens if this call fails?"
- "How do I test this manually?"
Read the generated code
- Look for empty `catch { }` blocks
- Verify pattern consistency
- Understand data flow
Don't accept the first version
- Ask for trade-off explanations
- Ask for basic tests
- Ask for visible error handling (not silenced)
For the AI Assistant (or its Designers)
"Pedagogical" vs "Efficient" Mode
- Allow user to choose: "Do you want me to explain each step or just do it?"
Automatic Quality Checklist
- Detect `catch { return null; }` and warn
- Verify pattern consistency
- Suggest basic tests
Explain Architectural Trade-offs
- "I can make this simple or scalable. The scalable version has X cost."
Conclusion
The Verdict
Diana achieved its goal: generating functional code efficiently. The user received a product, but not an education.
The Paradox
An assistant that optimizes for fewer interactions produces code that requires more expert maintenance afterward. It's efficient in the moment, costly long-term.
The Fundamental Question
Should a beginner-focused AI assistant optimize for speed, or for comprehension?
In this case, Diana chose speed. The user has a product they cannot maintain alone.
Epilogue: The Expert's Judgment
When a senior developer reviewed the code (Kimi Code CLI), they found:
- 6.5/10 technical quality (impressive for AI)
- Security issues (exception swallowing in auth)
- Architectural technical debt (inconsistent DI)
- 0 tests (unacceptable for production)
- No input validation (vulnerable to malicious data)
Final diagnosis: "Fast-learning junior, but needs constant mentorship for 3-6 months before touching production."
The problem: No mentorship included. The user is alone with 3,500 lines of code they don't fully understand.
Case study prepared March 2026. HISMagazine serves as an example of the complex relationship between AI efficiency and knowledge transfer.