10 Questions an Intermediate Developer Might Have About the French Writing Playground Version 2.0
Published: November 16, 2025 • 10 min read
As I promised in my previous blog post, I'm continuing to document this building process during my SDR Era with a Q&A series. This time, I'm diving deeper into the technical architecture and design decisions that intermediate developers might be curious about.
One thing you should note: I'm asking these questions from the perspective of what I believe an intermediate developer would want to know, and I'm providing the answers based on my implementation. To keep this post digestible, I won't be deep-diving into every technical detail, but you can always reach out to me if you have questions.
So here goes: 10 questions I think an intermediate developer might have about Version 2.0 of the French Writing Playground application.
Question 1: Explain how the data flows from the user input in the writing area to the database
For this application, there are multiple layers between the writing area and the database. Each layer plays a different role, but they all serve one purpose: ensuring that only properly formatted data reaches the database. Different types of issues get caught at different stages.
The 6-Layer Data Flow Architecture
Layer 1: Client-Side Input & Validation
This is where the user types their French text in the textarea. While they're typing, react-hook-form is working behind the scenes, keeping track of the word count in real-time and checking if everything meets the requirements using Zod validation. The submit button only becomes enabled once the text passes all the client-side checks.
Layer 2: API Request Formation
Once you hit that submit button, the form takes all your data (the text you wrote, which emotion you selected, and whether you want to add it to the public collage) and packages it up nicely into JSON format. Then it sends this package via a fetch request to the appropriate API route.
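As a rough sketch of that packaging step (the field names and the /api/entries route here are assumptions for illustration, not the app's actual schema):

```typescript
// Hypothetical shape of the submission payload (field names are assumptions).
interface SubmissionPayload {
  text: string
  emotion: string
  addToCollage: boolean
}

// Package the form data as JSON, ready to send via fetch.
function buildSubmissionBody(
  text: string,
  emotion: string,
  addToCollage: boolean
): string {
  const payload: SubmissionPayload = { text, emotion, addToCollage }
  return JSON.stringify(payload)
}

// The fetch call itself would then look roughly like:
// await fetch('/api/entries', {
//   method: 'POST',
//   headers: { 'Content-Type': 'application/json' },
//   body: buildSubmissionBody(text, emotion, true),
// })
```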
Layer 3: API Route Authentication
Now we're on the server side. Before doing anything with your text, NextAuth jumps in to verify: "Hey, who are you? Are you actually logged in?" It checks your session, extracts your email, queries the database to get your user ID, and if you're not authenticated? Boom, 401 error. You're not getting in. This layer ensures that random people can't just send data to the API.
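The decision logic of that layer can be sketched as a pure function (the session shape here is simplified; in the real app it comes from NextAuth's getServerSession()):

```typescript
// Minimal sketch of the authorization check. The Session shape is an
// assumption; NextAuth's actual session object carries more fields.
interface Session {
  user?: { email?: string }
}

type AuthResult =
  | { ok: true; email: string }
  | { ok: false; status: 401 }

function checkSession(session: Session | null): AuthResult {
  // No session, or a session with no email? Boom, 401.
  if (!session?.user?.email) return { ok: false, status: 401 }
  // Otherwise hand the email on, so the route can look up the user ID.
  return { ok: true, email: session.user.email }
}
```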
Layer 4: Server-Side Validation
Even though we validated on the client side, we do it again here because you can never trust the client (sorry, but it's true). Zod re-validates all the data to make sure nothing fishy happened during transit. It also runs custom text validation to check word count and French percentage. If something's off, it returns a 400 error. This is the "trust but verify" layer.
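A hand-rolled sketch of those custom text checks might look like the following. The thresholds and the French-detection heuristic are assumptions for illustration; the app's real rules aren't shown in this post.

```typescript
// Count words by splitting on whitespace.
function countWords(text: string): number {
  return text.trim().split(/\s+/).filter(Boolean).length
}

// Crude heuristic: what fraction of words contain French accented characters
// or appear in a tiny stop-word list? A real implementation would be smarter.
const FRENCH_HINTS = new Set([
  'je', 'le', 'la', 'et', 'suis', 'bonjour', 'est', 'un', 'une',
])

function frenchPercentage(text: string): number {
  const words = text.toLowerCase().trim().split(/\s+/).filter(Boolean)
  if (words.length === 0) return 0
  const hits = words.filter(
    (w) =>
      FRENCH_HINTS.has(w.replace(/[.,!?]/g, '')) || /[àâçéèêëîïôùûü]/.test(w)
  ).length
  return hits / words.length
}

function validateText(text: string): { valid: boolean; error?: string } {
  if (countWords(text) < 3) return { valid: false, error: 'Too few words' }
  if (frenchPercentage(text) < 0.3) {
    return { valid: false, error: 'Not enough French' }
  }
  return { valid: true }
}
```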
Layer 5: AI Processing
This is where the magic happens. The text gets sent to the OpenAI API for CEFR evaluation (that's the standardized system for rating language proficiency from A1 to C2). OpenAI analyzes the text, identifies errors, and sends back corrections. The app then parses this response and extracts the corrections with their character positions so we know exactly where to highlight errors in the original text.
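The position-extraction step could look something like this sketch. The correction shape is an assumption; the post doesn't show the actual response format.

```typescript
// Assumed shape of a correction once its position has been located.
interface Correction {
  original: string
  corrected: string
  start: number // character offset in the user's text
  end: number
}

// Given the user's text and the model's corrections, locate each one
// so the UI knows exactly where to highlight.
function locateCorrections(
  text: string,
  items: { original: string; corrected: string }[]
): Correction[] {
  const results: Correction[] = []
  for (const item of items) {
    const start = text.indexOf(item.original)
    if (start === -1) continue // skip corrections we can't anchor
    results.push({ ...item, start, end: start + item.original.length })
  }
  return results
}
```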
Layer 6: Database Transaction
Finally, we're ready to save everything. The app saves your writing entry to the database, then stores each grammar correction separately (so they can be highlighted in your text later), and updates your overall statistics like total entries written and average CEFR level. Each piece gets saved one after another, and if something goes wrong at any step, you'll see an error message rather than mysterious half-saved data.
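That "each piece one after another, stop on failure" behavior can be sketched generically (the step names are illustrative, not the app's actual save steps):

```typescript
// Run save steps in order; if any throws, surface which one failed
// instead of leaving mysterious half-saved state behind silently.
type Step = { name: string; run: () => Promise<void> }

async function saveSequentially(steps: Step[]): Promise<string[]> {
  const completed: string[] = []
  for (const step of steps) {
    try {
      await step.run()
      completed.push(step.name)
    } catch (err) {
      throw new Error(`Save failed at step "${step.name}": ${String(err)}`)
    }
  }
  return completed
}
```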
Example Flow
For example, when you type "Bonjour, je suis content!" and click Submit, the text first gets validated client-side to ensure it meets word count requirements. Then it's sent to the server where it's checked again for spam patterns or URLs before finally being sent to OpenAI for CEFR evaluation and then eventually saved to the database.
Question 2: How did you build the architecture that enables dynamic styling of the application based on chosen theme?
Amazing question! You see, the theme architecture for this application is designed to support multiple requirements simultaneously:
- Having 16 different themes with unique color palettes
- Dynamic switching of themes without reloading the page
- Type safety for theme properties
- High performance by ensuring no CSS-in-JS runtime cost
- SSR compatibility to ensure that it works with Server Components
- Ensuring that the theme survives refresh through localStorage persistence
The Solution
The implemented theme architecture achieves all the above by using a centralized Zustand store to manage all 16 emotion-based themes, then applying styles through CSS custom properties that are dynamically set on the document root. This enables instant theme switching without the performance cost of generating or injecting styles with JavaScript at runtime while also maintaining type safety and developer experience.
This means that when you select 'Hungry' from the theme dropdown, the app instantly updates all colors across the interface, from the glassmorphic cards to the button hover states, all without any page reload. The selected theme persists even if you refresh the page.
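Here is a minimal sketch of the CSS-custom-property half of that design. The variable names and palette values are made up for illustration; in the browser, setProperty would be document.documentElement.style.setProperty, and the Zustand store would call something like applyTheme whenever the selection changes, persisting the theme name to localStorage.

```typescript
// Illustrative palettes keyed by theme name (values are assumptions).
type ThemePalette = Record<string, string>

const themes: Record<string, ThemePalette> = {
  hungry: { '--color-primary': '#e07a2f', '--color-bg': '#fff4e6' },
  calm: { '--color-primary': '#4a7ba6', '--color-bg': '#eef4fa' },
}

// Apply a theme by writing its CSS custom properties to the document root.
// The setter is injected so this sketch stays runnable outside a browser.
function applyTheme(
  name: string,
  setProperty: (key: string, value: string) => void
): boolean {
  const palette = themes[name]
  if (!palette) return false
  for (const [key, value] of Object.entries(palette)) {
    setProperty(key, value)
  }
  return true
}
```

Because the components reference var(--color-primary) and friends, switching themes is just rewriting a handful of variables, with no runtime style generation.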
Question 3: How do you handle authorization for this application?
I use session-based authorization at the API layer with NextAuth. Every API route verifies the user's session before processing requests and filters database queries by the authenticated user's ID. This ensures users can only access their own data while maintaining compatibility with NextAuth's JWT strategy.
Question 4: Did you implement any strategies to ensure optimal query performance for when multiple users use the application?
Yes, I did. I achieved this by implementing strategic indexing strategies (don't bite your tongue there) that allow for fast retrieval of data from the database without scanning every single row.
Index Types Implemented
Index types like B-tree, Composite, and Unique indexing were implemented to ensure that queries remain fast even with millions of entries. Indexes are created on:
- Foreign keys
- Frequently filtered columns like created_at for user entries
- Composite indexes for common query patterns (e.g., retrieving a user's entries created on a specific date)
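As an illustration, the index definitions might look like the migrations below. The table and column names are assumptions, not the app's actual schema.

```typescript
// Hypothetical index-creation SQL, kept as strings for illustration.
export const indexMigrations: string[] = [
  // B-tree index on a foreign key for fast per-user lookups
  `CREATE INDEX idx_entries_user_id ON entries (user_id);`,
  // Index on a frequently filtered column
  `CREATE INDEX idx_entries_created_at ON entries (created_at);`,
  // Composite index matching the common "this user's entries by date" query
  `CREATE INDEX idx_entries_user_created ON entries (user_id, created_at);`,
  // Unique index to prevent duplicate accounts for the same email
  `CREATE UNIQUE INDEX idx_users_email ON users (email);`,
]
```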
Question 5: How does OpenAI integration work for CEFR evaluation?
OpenAI integration is the core feature of Version 2.0 that evaluates French writing proficiency. Depending on the evaluation type, the app calls OpenAI's GPT-4.1 or GPT-5-mini via the official Software Development Kit (SDK).
Two Evaluation Types
- Fast Evaluations (GPT-4.1): The first evaluation every user gets when they submit an entry
- Detailed Evaluations (GPT-5-mini): Every user can request this after viewing the corrections returned by GPT-4.1
Both API calls send French text with a structured system prompt to get consistent, parseable responses with CEFR levels (A1-C2) and detailed corrections. The responses are then parsed, validated, and saved to the database.
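The model-selection step can be sketched as follows. The model IDs come from the post itself, but the function shape and the commented SDK call are illustrative assumptions:

```typescript
// Pick a model per evaluation type, as described above.
type EvaluationType = 'fast' | 'detailed'

function pickModel(evalType: EvaluationType): string {
  return evalType === 'fast' ? 'gpt-4.1' : 'gpt-5-mini'
}

// The call itself (via the official SDK) would look roughly like:
// const response = await openai.chat.completions.create({
//   model: pickModel('fast'),
//   messages: [
//     { role: 'system', content: CEFR_SYSTEM_PROMPT },
//     { role: 'user', content: frenchText },
//   ],
// })
```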
Question 6: Does this app implement rate-limiting and quotas for OpenAI API calls?
By only adding $5 to my OpenAI account, I am able to implement rate-limiting to manage costs for OpenAI API calls. Alright, I'm just kidding. The correct answer is no, it doesn't.
However, it is something I have thought about. You see, this application was built to solve a personal problem of mine and to act as a portfolio piece. I do intend to share the application with others, and if other people start actively using it, I will implement rate-limiting at multiple layers:
- IP-based rate limiting in API routes (using an in-memory Map)
- User quotas tracked in database (max entries per tier)
- OpenAI request throttling to prevent API overuse
Cost Management Strategy
I did, however, implement several layers of validation before the text ever reaches the OpenAI API to prevent abuse from users who try to send non-French related text, gibberish, or spam URLs. After all, managing API costs carefully is essential given that I'm a jobless recent graduate.
Question 7: How does authentication work with NextAuth and Supabase?
Authentication in the app uses a hybrid approach: NextAuth for session management and OAuth providers (in this case, Google), combined with Supabase for data storage. This combination provides flexibility with multiple authentication methods while leveraging Supabase's PostgreSQL database for storing user data.
How It Works
NextAuth handles both OAuth (Google) and credentials-based (email/password) authentication, creating JWT sessions that are stored in HTTP-only cookies for security. When authentication succeeds, user data is automatically synced to Supabase's PostgreSQL database through the NextAuth adapter.
Authorization Layer
For authorization, the app uses session-based checks at the API layer. Every protected API route verifies the user's session via NextAuth's getServerSession() function before processing any request. Once authenticated, the user's ID from the session is used to filter all database queries, ensuring users can only access their own data. This approach maintains compatibility with NextAuth's JWT strategy while providing robust access control at the application level.
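The per-user filtering step can be sketched in miniature (the Entry shape is an assumption, and in the real app this filter lives in the SQL WHERE clause rather than in application memory):

```typescript
// After getServerSession() succeeds, every query is scoped to the
// session's user ID so one user can never read another's entries.
interface Entry {
  id: number
  userId: string
  text: string
}

function entriesForUser(all: Entry[], sessionUserId: string): Entry[] {
  return all.filter((e) => e.userId === sessionUserId)
}
```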
Question 8: How does deployment and CI/CD (Continuous Integration / Continuous Deployment) work?
The application is deployed using Vercel with GitHub integration, enabling a seamless automated deployment pipeline.
The Deployment Flow
Every time I push changes to the main branch of my GitHub repository, Vercel automatically detects the update and triggers its CI/CD pipeline. The process starts with Continuous Integration, where Vercel installs all dependencies, runs the Next.js build process, and checks for any compilation errors or type issues. If the build succeeds without errors, the Continuous Deployment phase begins, where Vercel deploys the new version to production and makes it live instantly.
Preview Deployments
One powerful feature is that Vercel also creates preview deployments for pull requests. When I create a PR to test a new feature, Vercel generates a unique preview URL (like french-writing-pr-123.vercel.app) that I can share with others for testing before merging to production. This lets me catch issues in a production-like environment without affecting the live site.
Environment Management
The only manual step involves environment variables. When I need to add or update secrets (like API keys), I configure them in the Vercel dashboard and then trigger a manual redeploy. Otherwise, the entire pipeline is hands-off: I simply push code and Vercel handles the rest, making updates live in under a minute.
Rollback Safety
If something goes wrong, Vercel keeps a history of all deployments, allowing me to instantly roll back to any previous version with a single click. This safety net means I can deploy confidently knowing I can quickly revert if needed.
Question 9: The application uses a lot of animations and transitions. How are they implemented?
Great question with an easy response too! The application uses:
- Framer Motion for complex animations (modal animations and page transitions)
- CSS transitions for simple effects (hover effects and loading states)
- Tailwind utility classes for transitions
- AnimatePresence for enter/exit animations
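For a taste of the Framer Motion piece, a modal's variants are just a plain object; the specific values below are made up for illustration. In a component, they would be wired up as motion.div with variants={modalVariants}, initial="hidden", animate="visible", and exit="hidden", wrapped in AnimatePresence so the exit animation plays on unmount.

```typescript
// Illustrative Framer Motion variants for a modal (values are assumptions).
export const modalVariants = {
  hidden: { opacity: 0, scale: 0.95, y: 10 },
  visible: {
    opacity: 1,
    scale: 1,
    y: 0,
    transition: { duration: 0.2, ease: 'easeOut' },
  },
}
```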
Question 10: I see users are able to send messages to each other on the application. How does that work?
The messaging system combines PostgreSQL database storage with real-time notifications to create an instant messaging experience. This is definitely one of the most interesting features I implemented.
Database Storage
Messages are stored in a PostgreSQL messages table with sender/receiver relationships, message content, timestamps, and read/unread status. Before users can message each other, they must be connected, so the system always verifies that an accepted connection exists to prevent spam from strangers.
Real-Time Notifications with pg_notify
Here's where it gets interesting. PostgreSQL has a built-in publish-subscribe (pub/sub) feature called pg_notify. Think of it like a notification bell in a messaging app. When you send a message, a PostgreSQL trigger automatically fires and broadcasts a notification through pg_notify saying "Hey, new message for User B!"
Your browser is listening for these notifications (subscribed to the channel), so the moment the database trigger fires, your UI instantly receives the notification and displays the new message - no page refresh needed. It's like having a live connection to the database that taps you on the shoulder when something happens.
The Technical Flow
- Nikka sends you a message → API route saves it to the database
- A database trigger automatically calls pg_notify('new_message', { sender: 'Nikka', ... })
- Your browser (subscribed to this channel) receives the notification instantly
- Your UI shows the new message with a notification badge
- When you open the conversation, the app marks messages as read via an API call
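To make the trigger step concrete, here is roughly what such a trigger could look like, kept as a SQL string for illustration. The table, function, and channel names are assumptions, not the app's actual schema.

```typescript
// Hypothetical pg_notify trigger definition (names are assumptions).
export const newMessageTrigger = `
CREATE OR REPLACE FUNCTION notify_new_message() RETURNS trigger AS $$
BEGIN
  PERFORM pg_notify(
    'new_message',
    json_build_object('sender', NEW.sender_id, 'receiver', NEW.receiver_id)::text
  );
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER on_new_message
AFTER INSERT ON messages
FOR EACH ROW EXECUTE FUNCTION notify_new_message();
`
```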
Why This Approach?
Using PostgreSQL triggers with pg_notify is efficient for this application because the database handles the notification logic automatically without additional code in the API routes. Combined with Supabase Realtime (which listens to pg_notify channels), users get a WhatsApp-like instant messaging experience without needing a separate WebSocket server.
Final Thoughts
Of all the features implemented, the messaging feature is obviously the one that makes me feel really smart. Not because implementing it was hard, but because it was new to me; in the past, I always thought building an app with a fully functional messaging feature was a big deal. Now I know it isn't.
Try out the French Writing Playground V2.0 and experience these features firsthand!
As always, thanks for reading!