<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Shubham Jha | Dev Blog]]></title><description><![CDATA[Deep dives into system design, enterprise development, and modern web technologies — written from 6+ years of real engineering experience. No toy projects. Code that actually ships.]]></description><link>https://blog.shubhamjha.com</link><image><url>https://cdn.hashnode.com/uploads/logos/69d1293c6792e486f6810f5c/4178f40b-ca98-4bfe-84be-1aca4a2a56d2.webp</url><title>Shubham Jha | Dev Blog</title><link>https://blog.shubhamjha.com</link></image><generator>RSS for Node</generator><lastBuildDate>Wed, 22 Apr 2026 18:01:29 GMT</lastBuildDate><atom:link href="https://blog.shubhamjha.com/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[SaaS UI/UX Design 2026: Six Patterns That Drive Activation]]></title><description><![CDATA[The onboarding flow had 11 steps. The product, a B2B SaaS workspace, required users to complete all 11 before they could do anything meaningful. 
Week-one activation (the share of trial users who reach]]></description><link>https://blog.shubhamjha.com/saas-ui-ux-design-2026-six-patterns-that-drive-activation</link><guid isPermaLink="true">https://blog.shubhamjha.com/saas-ui-ux-design-2026-six-patterns-that-drive-activation</guid><category><![CDATA[SaaS]]></category><category><![CDATA[UIUX]]></category><category><![CDATA[Design Systems]]></category><category><![CDATA[Accessibility]]></category><category><![CDATA[React]]></category><dc:creator><![CDATA[Shubham Jha]]></dc:creator><pubDate>Tue, 21 Apr 2026 12:30:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/uploads/covers/69d1293c6792e486f6810f5c/6554dec5-b7d4-471f-a67d-0f49481b050a.jpg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>The onboarding flow had 11 steps. The product, a B2B SaaS workspace, required users to complete all 11 before they could do anything meaningful. Week-one activation (the share of trial users who reached the product's core value) was 23%. Support tickets arrived in batches every Monday morning: "I can't figure out how to set up my account." "The button doesn't do anything." "I gave up after step four."</p>
<p>Six months of engineering work had gone into the product. The design had received exactly one pass from a developer following a rough mockup. Nobody had watched a real user try to use it.</p>
<p>We spent four weeks on UI/UX: cutting onboarding from 11 steps to 5, making the primary action on every screen unmissable, fixing components that showed a blank screen when data was loading, and making the core flows work from a keyboard. Week-one activation went from 23% to 38%. Support tickets dropped 46%. Average time-to-value dropped from 14 minutes to 6.</p>
<p>None of those changes added a single feature. They made the existing features comprehensible.</p>
<p>UI/UX in a SaaS product isn't polish. It's the difference between a product that converts and one that churns. Every design decision maps to a measurable outcome: activation, time-to-value, support volume, retention. This post is about the six patterns that moved those numbers.</p>
<hr />
<h2>1. Clarity: Labels, Hierarchy, and One Primary Action</h2>
<p>Clarity failures are invisible to builders and obvious to users. When you've been staring at a UI for two months, "Submit" means "create the campaign." When a user sees it for the first time, "Submit" means nothing.</p>
<p>The product we inherited had buttons labelled "Submit", "Continue", and "Next" used interchangeably across different flows. Two buttons on the same screen both styled as primary. A form that asked for information users wouldn't have until later in the setup process. Every screen was asking the user to guess.</p>
<p>The fix starts with labels. A button's text should describe what happens when you click it, not the action of clicking. "Submit" becomes "Create Campaign". "Continue" becomes "Save and invite team". "Next" becomes "Set up billing →". One word change per button, repeated across 40 screens, reduced first-session drop-off by 18%.</p>
<p>Hierarchy is the harder problem. Every screen should have exactly one primary action. Three equally-styled buttons and users freeze — they don't know which one matters. A primary calls for action. A secondary provides an escape. A ghost handles edge cases. When everything is primary, nothing is.</p>
<pre><code class="language-tsx">// Before: three competing primary buttons — users freeze
&lt;Button onClick={handleSave}&gt;Save&lt;/Button&gt;
&lt;Button onClick={handlePreview}&gt;Preview&lt;/Button&gt;
&lt;Button onClick={handlePublish}&gt;Publish&lt;/Button&gt;

// After: clear hierarchy — one call to action, one escape, one edge case
&lt;div className="flex items-center gap-3"&gt;
  &lt;Button variant="ghost" onClick={handleSave}&gt;Save draft&lt;/Button&gt;
  &lt;Button variant="secondary" onClick={handlePreview}&gt;Preview&lt;/Button&gt;
  &lt;Button variant="primary" onClick={handlePublish}&gt;
    Publish campaign →
  &lt;/Button&gt;
&lt;/div&gt;
</code></pre>
<p>The most underrated fix is microcopy. It's the short text at decision points: below a form field, next to a checkbox, inside an empty state. It doesn't need to be clever. It needs to answer the question the user is about to ask. A password field that says "Minimum 8 characters, one uppercase" prevents the error before it happens. A consent checkbox with the reassurance "Your data is never sold or shared with third parties" next to it converts better than a bare "Receive marketing emails" label. Microcopy lives at the exact moment of hesitation. It's the cheapest conversion optimization available.</p>
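<p>That password hint works best when the hint text and the validation logic share one source of truth, so they can never drift apart. A minimal sketch (the rules and wording here are illustrative, not from the product described):</p>

```typescript
// Hypothetical password rules: each pairs a check with the actionable
// microcopy shown when that check fails.
type PasswordRule = { test: (pw: string) => boolean; message: string }

const passwordRules: PasswordRule[] = [
  { test: (pw) => pw.length >= 8, message: 'Use at least 8 characters.' },
  { test: (pw) => /[A-Z]/.test(pw), message: 'Add at least one uppercase letter.' },
  { test: (pw) => /[0-9]/.test(pw), message: 'Add at least one number.' },
]

// Hint shown below the field before the user types anything.
export const passwordHint = passwordRules.map((r) => r.message).join(' ')

// Returns the message for the first failing rule, or null when the password passes.
export function passwordError(pw: string): string | null {
  const failed = passwordRules.find((rule) => !rule.test(pw))
  return failed ? failed.message : null
}
```

<p>Reporting one rule at a time keeps the error actionable; listing every failure at once reads as a wall of red.</p>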
<p>Clarity also means reducing what you ask for. The 11-step onboarding asked for company size, industry, team structure, integration preferences, and notification settings, all before the user had done anything. We cut it to: name the workspace, invite one teammate, connect one data source. Everything else defaulted to sensible values and could be changed later. Activation went up because users reached the product before they ran out of patience.</p>
<hr />
<h2>2. Consistency: Design Systems as a Business Asset</h2>
<p>Inconsistency has a tax. A team without a design system makes the same spacing decision 300 times across a codebase. A user who encounters a button that behaves differently on page 3 than on page 1 hesitates. A QA engineer who doesn't know which of four slightly-different modal implementations is canonical tests all four. None of this is dramatic. It compounds into a product that feels unreliable and a team that moves slower than it should.</p>
<p>A design system is the set of constraints that makes the right decision the default. In practice, it comes down to three layers: a token file, a typed component API, and documented state patterns. The token file propagates changes. The typed API prevents the wrong variant from compiling. The state patterns enforce that every component handles loading, empty, and error before it ships. Each layer closes a different failure mode.</p>
<h3>Design tokens</h3>
<p>A design token is a named value that propagates through every component. Change the token, every component updates. Without tokens, a brand color change means touching hundreds of files. With tokens, it means one change in one place.</p>
<pre><code class="language-css">/* globals.css — semantic tokens that describe purpose, not appearance */
:root {
  /* Color — OKLCH for perceptual uniformity */
  --color-primary: oklch(45% 0.2 264);
  --color-primary-hover: oklch(40% 0.22 264);
  --color-destructive: oklch(50% 0.22 30);
  --color-surface: oklch(99% 0.005 264);
  --color-surface-raised: oklch(97% 0.008 264);
  --color-border: oklch(88% 0.01 264);
  --color-text: oklch(15% 0.01 264);
  --color-muted: oklch(50% 0.01 264);

  /* Spacing scale (4px base) */
  --space-1: 0.25rem;
  --space-2: 0.5rem;
  --space-3: 0.75rem;
  --space-4: 1rem;
  --space-6: 1.5rem;
  --space-8: 2rem;

  /* Typography */
  --text-sm: 0.875rem;
  --text-base: 1rem;
  --text-lg: 1.125rem;
  --leading-tight: 1.25;
  --leading-normal: 1.5;
}
</code></pre>
<h3>Typed component APIs</h3>
<p>A typed component API makes the right decision the only option. When <code>variant</code> is <code>'primary' | 'secondary' | 'ghost' | 'destructive'</code>, the wrong variant doesn't compile. When <code>size</code> is <code>'sm' | 'md' | 'lg'</code>, there's no improvised "large-ish" variant that appears when a developer needs something slightly bigger.</p>
<pre><code class="language-tsx">type ButtonVariant = 'primary' | 'secondary' | 'ghost' | 'destructive'
type ButtonSize = 'sm' | 'md' | 'lg'

interface ButtonProps {
  variant?: ButtonVariant
  size?: ButtonSize
  loading?: boolean
  children: React.ReactNode
  onClick?: () =&gt; void
  type?: 'button' | 'submit' | 'reset'
}

export function Button({
  variant = 'primary',
  size = 'md',
  loading = false,
  children,
  onClick,
  type = 'button',
}: ButtonProps) {
  return (
    &lt;button
      type={type}
      onClick={onClick}
      disabled={loading}
      aria-busy={loading}
      className={cn(buttonVariants({ variant, size }), loading &amp;&amp; 'cursor-not-allowed opacity-70')}
    &gt;
      {loading ? &lt;Spinner size="sm" aria-hidden /&gt; : children}
    &lt;/button&gt;
  )
}
</code></pre>
<p>The component handles <code>loading</code> state explicitly. No relying on callers to disable the button, add a spinner, and set <code>aria-busy</code> correctly in three separate places. The right behavior is baked in once.</p>
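<p>The <code>buttonVariants</code> helper the component leans on isn't shown above. shadcn/ui builds it with <code>cva</code> from class-variance-authority; a dependency-free sketch of the same idea, with illustrative Tailwind-style class names:</p>

```typescript
// Variant and size lookup tables — the class names here are illustrative.
const variantClasses = {
  primary: 'bg-primary text-white hover:bg-primary-hover',
  secondary: 'border border-border bg-surface hover:bg-surface-raised',
  ghost: 'bg-transparent hover:bg-surface-raised',
  destructive: 'bg-destructive text-white',
} as const

const sizeClasses = {
  sm: 'h-8 px-3 text-sm',
  md: 'h-10 px-4 text-sm',
  lg: 'h-12 px-6 text-base',
} as const

export function buttonVariants({
  variant = 'primary',
  size = 'md',
}: {
  variant?: keyof typeof variantClasses
  size?: keyof typeof sizeClasses
} = {}): string {
  // Base classes every button shares, then the variant- and size-specific ones
  return [
    'inline-flex items-center justify-center rounded-md font-medium',
    variantClasses[variant],
    sizeClasses[size],
  ].join(' ')
}
```

<p>Because the accepted values are derived from the lookup tables, adding a variant means adding one entry, and the type updates with it.</p>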
<p>For most teams, shadcn/ui plus a token file gets you here without building from scratch. The components are yours to own and modify, with no black-box dependency that upgrades and silently breaks your UI. For how a typed component system fits into a larger production architecture, <a href="https://shubhamjha.com/blog/scalable-web-apps-nextjs">this guide on scalable Next.js apps</a> covers design tokens, component APIs, and the architecture patterns they plug into.</p>
<p>Consistency doesn't mean uniformity. It means users can form accurate predictions. If modals always close on Escape and always return focus to the trigger, users stop thinking about how the UI works and start thinking about their task. Making those predictions reliable for every user, including those navigating by keyboard or assistive technology, is the next constraint.</p>
<hr />
<h2>3. Accessibility: Keyboard Navigation and Focus Management</h2>
<p>Accessibility is not a compliance feature. It's a product quality signal with SEO side effects. Every accessibility improvement (semantic HTML, keyboard navigation, readable contrast) makes the product better for every user, not just users with disabilities.</p>
<p>The SaaS products I audit fail in the same places. Keyboard navigation that dead-ends at an interactive component and leaves the user stuck. Focus that disappears when a modal closes, dropping the user back at the top of the document. Form errors that say "invalid input" instead of identifying what's wrong and how to fix it. None of these are edge cases. They're the first things a keyboard or screen reader user encounters.</p>
<h3>Keyboard navigation</h3>
<p>Every interactive element should be reachable with Tab, operable with Enter or Space, and dismissible with Escape. WCAG 2.2's new Focus Appearance criterion expects focus indicators to meet at least a 3:1 contrast ratio against their unfocused state. If you've disabled outline styles globally, your keyboard users are navigating blind.</p>
<pre><code class="language-css">/* Style focus indicators — don't remove them */
:focus-visible {
  outline: 2px solid var(--color-primary);
  outline-offset: 2px;
  border-radius: 2px;
}

/* Visible for keyboard navigation, hidden for mouse — best of both */
:focus:not(:focus-visible) {
  outline: none;
}
</code></pre>
<h3>Focus management in dialogs</h3>
<p>When a dialog opens, focus must move inside it. When it closes, focus must return to the element that triggered it. If a user opens a "Delete workspace" dialog from a button, completes the action, and focus disappears to the top of the document, they've lost their place. For screen reader users that's not subtle. It breaks the flow completely.</p>
<pre><code class="language-tsx">import * as Dialog from '@radix-ui/react-dialog'

interface ConfirmDialogProps {
  open: boolean
  onOpenChange: (open: boolean) =&gt; void
  title: string
  description: string
  confirmLabel: string
  onConfirm: () =&gt; void
  loading?: boolean
  destructive?: boolean
}

export function ConfirmDialog({
  open,
  onOpenChange,
  title,
  description,
  confirmLabel,
  onConfirm,
  loading,
  destructive,
}: ConfirmDialogProps) {
  return (
    &lt;Dialog.Root open={open} onOpenChange={onOpenChange}&gt;
      &lt;Dialog.Portal&gt;
        &lt;Dialog.Overlay className="fixed inset-0 bg-black/40 backdrop-blur-sm" /&gt;
        &lt;Dialog.Content className="fixed top-1/2 left-1/2 -translate-x-1/2 -translate-y-1/2 bg-surface rounded-xl shadow-xl p-6 w-full max-w-md"&gt;
          &lt;Dialog.Title className="text-lg font-semibold text-foreground"&gt;
            {title}
          &lt;/Dialog.Title&gt;
          &lt;Dialog.Description className="mt-2 text-sm text-muted"&gt;
            {description}
          &lt;/Dialog.Description&gt;
          &lt;div className="mt-6 flex justify-end gap-3"&gt;
            &lt;Dialog.Close asChild&gt;
              &lt;Button variant="ghost"&gt;Cancel&lt;/Button&gt;
            &lt;/Dialog.Close&gt;
            &lt;Button
              variant={destructive ? 'destructive' : 'primary'}
              onClick={onConfirm}
              loading={loading}
            &gt;
              {confirmLabel}
            &lt;/Button&gt;
          &lt;/div&gt;
        &lt;/Dialog.Content&gt;
      &lt;/Dialog.Portal&gt;
    &lt;/Dialog.Root&gt;
  )
}
</code></pre>
<p>Radix UI's <code>Dialog.Content</code> traps focus within the dialog and returns it to the trigger on close. The component handles correct <code>aria-modal</code>, <code>aria-labelledby</code>, and <code>aria-describedby</code> attributes automatically. You get WCAG-compliant focus management without writing a single line of focus trap code.</p>
<h3>Accessible form errors</h3>
<p>Form errors should identify the field by name and describe the fix. "Invalid input" is useless. "Email address must be in the format <code>name@company.com</code>" is actionable. Screen readers announce form errors through <code>aria-live</code> regions; if your error messages render without one, screen reader users never hear them.</p>
<pre><code class="language-tsx">&lt;div&gt;
  &lt;label htmlFor="email" className="block text-sm font-medium"&gt;
    Work email
  &lt;/label&gt;
  &lt;input
    id="email"
    type="email"
    aria-describedby={error ? 'email-error' : undefined}
    aria-invalid={!!error}
    className={cn('mt-1 w-full rounded-md border px-3 py-2', error &amp;&amp; 'border-destructive')}
  /&gt;
  {error &amp;&amp; (
    &lt;p id="email-error" role="alert" className="mt-1 text-sm text-destructive"&gt;
      {error}
    &lt;/p&gt;
  )}
&lt;/div&gt;
</code></pre>
<p><code>role="alert"</code> causes screen readers to announce the error as soon as it appears. <code>aria-invalid</code> communicates that the field has a problem. <code>aria-describedby</code> links the field to the error message so the relationship is explicit to assistive technology. These three attributes together cover the most common screen reader failure mode in SaaS forms.</p>
<hr />
<h2>4. Component States: Beyond the Happy Path</h2>
<p>Every data-driven component in a SaaS product has five states: loading, empty, error, success, and disabled. Most get built in the success state and shipped. The other four are added reactively when a user files a ticket.</p>
<p>This is the demo happy path problem. During development, data loads in milliseconds, there's always data to display, and errors don't happen. In production, users are on slow connections, their accounts are empty on day one, and backend errors occur. The screens they see in those moments are the first impression of your product's reliability.</p>
<p>The pattern that prevents this is modelling async state as a discriminated union rather than three boolean flags. <code>isLoading</code>, <code>isError</code>, and <code>data</code> give you eight possible combinations, of which only three are valid. TypeScript won't stop you from setting <code>isLoading: true</code> and <code>data: someUser</code> simultaneously. A discriminated union makes the invalid combinations unrepresentable at the type level.</p>
<pre><code class="language-tsx">type AsyncState&lt;T&gt; =
  | { status: 'idle' }
  | { status: 'loading' }
  | { status: 'success'; data: T }
  | { status: 'error'; message: string }

function UserTable({ state, onRetry, openInviteModal }: { state: AsyncState&lt;User[]&gt;; onRetry: () =&gt; void; openInviteModal: () =&gt; void }) {
  if (state.status === 'loading') {
    return &lt;TableSkeleton rows={5} /&gt;
  }

  if (state.status === 'error') {
    return (
      &lt;ErrorState
        title="Couldn't load team members"
        message={state.message}
        action={{ label: 'Try again', onClick: onRetry }}
      /&gt;
    )
  }

  if (state.status === 'success' &amp;&amp; state.data.length === 0) {
    return (
      &lt;EmptyState
        icon={&lt;UsersIcon className="h-8 w-8" /&gt;}
        title="No team members yet"
        description="Invite your team to start collaborating."
        action={{ label: 'Invite team members', onClick: openInviteModal }}
      /&gt;
    )
  }

  if (state.status === 'success') {
    return &lt;DataTable data={state.data} columns={userColumns} /&gt;
  }

  return null
}
</code></pre>
<p>TypeScript now enforces that <code>state.data</code> only exists when <code>status === 'success'</code>. Accessing it in the loading branch is a compile error, not a runtime crash. Every state is handled explicitly before the component ships. For the broader hook and TypeScript patterns that make this approach consistent across a codebase, <a href="https://shubhamjha.com/blog/react-hooks-typescript">this guide on React hooks and TypeScript</a> covers how to enforce state discipline at scale.</p>
<h3>Skeleton loaders vs spinners</h3>
<p>Skeleton loaders beat spinners for most SaaS data-fetching contexts. A spinner says "wait." A skeleton says "this is where your content will appear." It reduces perceived wait time and prevents layout shift when content loads into an unprepared space.</p>
<p>The key: match the skeleton's dimensions to the loaded content. A skeleton that's 80px tall with content that loads at 160px still causes layout shift. It just delays it by 300ms. Measure the loaded state first, then build the skeleton to match it.</p>
<p>Use a spinner for short, indeterminate operations: saving, uploading, submitting. Use skeletons for data fetches that take more than 200ms and have a predictable shape.</p>
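<p>The rule is mechanical enough to encode as a team convention. A sketch (the function and its inputs are my own; the 200ms threshold is the one above):</p>

```typescript
// Encodes the guideline: skeletons for slower fetches with a predictable
// shape, spinners for short or indeterminate operations.
export function loadingIndicator(opts: {
  expectedDurationMs: number
  hasPredictableShape: boolean
}): 'skeleton' | 'spinner' {
  return opts.expectedDurationMs > 200 && opts.hasPredictableShape
    ? 'skeleton'
    : 'spinner'
}
```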
<hr />
<h2>5. Performance-Aware Design: Motion and Layout Stability</h2>
<p>Performance is a design decision as much as an engineering one. Animations that run at 60fps on a developer's MacBook render at 20fps on a mid-range Android. Images without <code>width</code> and <code>height</code> attributes cause the page to jump when they appear. Fonts that load after the initial paint cause text to reflow in ways that feel broken even when they're technically working. These are design choices. Every unnecessary animation, unoptimized image, or font-triggered reflow shows up in both Lighthouse scores and conversion rates.</p>
<h3>Motion as affordance</h3>
<p>Every animation in a SaaS product should communicate something. The test I use: could you remove this animation and lose information? If yes, keep it. If no, cut it.</p>
<p>State transitions earn their keep. A button that shows a spinner while loading tells the user "I received your click" — without it, users click twice and submit duplicate forms. A panel that slides in from the right tells them where it lives relative to the current screen. A modal that scales up from its trigger communicates "this is layered above, not replacing." A form field that shakes on invalid submission draws attention to the problem in a way color change alone can't — especially for users who don't perceive red as distinct.</p>
<p>List items that load in staggered sequence reduce perceived wait time on slow connections by making the page feel active rather than frozen.</p>
<p>Everything else — hover animations on static cards, decorative entrance effects, parallax — costs runtime on low-end devices without communicating anything. An animation that only runs smoothly on a developer's MacBook isn't a product feature.</p>
<pre><code class="language-tsx">// Purposeful: communicates loading state without blocking interaction
function SubmitButton({ loading, children }: { loading: boolean; children: React.ReactNode }) {
  return (
    &lt;button
      disabled={loading}
      aria-busy={loading}
      className={cn(
        'relative flex items-center justify-center gap-2 rounded-md px-4 py-2 text-sm font-medium transition-opacity',
        loading &amp;&amp; 'cursor-not-allowed opacity-70'
      )}
    &gt;
      {loading &amp;&amp; (
        &lt;span
          className="h-4 w-4 animate-spin rounded-full border-2 border-current border-t-transparent"
          aria-hidden
        /&gt;
      )}
      &lt;span aria-live="polite"&gt;{loading ? 'Saving...' : children}&lt;/span&gt;
    &lt;/button&gt;
  )
}
</code></pre>
<p><code>aria-live="polite"</code> on the button text means screen readers announce the state change from "Save changes" to "Saving..." without interrupting other announcements.</p>
<h3>Optimistic UI</h3>
<p>Optimistic UI removes the 200–400ms lag on actions users repeat dozens of times per session: toggling a status, starring a record, archiving an item. Instead of waiting for the server, you update the UI immediately and revert if the server rejects.</p>
<pre><code class="language-tsx">import { useState, useTransition } from 'react'
import { toast } from 'sonner' // assumption: any toast library with an error() method works

function useOptimisticToggle(
  id: string,
  initialValue: boolean,
  serverAction: (id: string, value: boolean) =&gt; Promise&lt;void&gt;
) {
  const [optimisticValue, setOptimisticValue] = useState(initialValue)
  const [isPending, startTransition] = useTransition()

  const toggle = () =&gt; {
    const next = !optimisticValue
    setOptimisticValue(next) // update immediately — no waiting

    startTransition(async () =&gt; {
      try {
        await serverAction(id, next)
      } catch {
        setOptimisticValue(!next) // revert on failure
        toast.error("Change didn't save. Try again.")
      }
    })
  }

  return { value: optimisticValue, toggle, isPending }
}
</code></pre>
<p>Use it for low-risk, reversible, high-frequency actions. Avoid it for deletes, payments, or anything where the server might reject the request, because the user needs to know the outcome before moving on. For the connection between UI design decisions and Core Web Vitals scores (LCP, CLS, INP), <a href="https://shubhamjha.com/blog/core-web-vitals-nextjs-optimization">this post on Next.js Core Web Vitals</a> covers how performance-aware design choices propagate into measurable ranking signals.</p>
<hr />
<h2>6. SaaS-Specific UX Patterns That Scale</h2>
<p>These four patterns come up in every SaaS product I've worked on that retains users past 90 days. They don't show up much in general UX writing because they're specific to products people use as part of their job.</p>
<h3>Onboarding that matches intent</h3>
<p>A generic product tour wastes the one moment in a user's experience where they're most motivated to learn. A user who signed up for a project management tool doesn't need a tour of every feature. They need to create their first project, add a task, and understand what the product does. Everything else can wait.</p>
<p>Intent-based onboarding asks one question up front: "What are you here to do?" Then it routes users to the setup path that fits. A developer evaluating an API monitoring tool during a production incident has nothing in common with a team lead doing quarterly planning research. Same product, two different first-run experiences, very different activation rates.</p>
<p>The implementation is simpler than most teams expect: a short branching flow at signup that sets a <code>user.onboardingIntent</code> field, which your onboarding component reads to determine which steps to show. A switch statement on <code>intent</code> is the right abstraction. Don't build a generic flow engine for this.</p>
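<p>A sketch of that shape, with placeholder intents and step names (not from the product described):</p>

```typescript
// Hypothetical intents: whatever answers your signup question offers.
type OnboardingIntent = 'evaluate-api' | 'team-planning' | 'solo-trial'

// Route each intent to the shortest path that reaches its aha moment.
export function onboardingSteps(intent: OnboardingIntent): string[] {
  switch (intent) {
    case 'evaluate-api':
      return ['connect-data-source', 'view-first-dashboard']
    case 'team-planning':
      return ['name-workspace', 'invite-teammate', 'create-first-project']
    case 'solo-trial':
      return ['name-workspace', 'create-first-project']
  }
}
```

<p>When a new intent is added to the union, TypeScript (with <code>noImplicitReturns</code> enabled) flags any switch that doesn't handle it before the code compiles.</p>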
<h3>Progressive disclosure</h3>
<p>Show users the minimum they need for the current task. Surface advanced options only when they're relevant. Sensible defaults for everything, expandable sections for configuration, contextual options inline rather than buried in settings.</p>
<p>The rule I keep coming back to: a user in the middle of a task should never have to leave it to find a setting.</p>
<pre><code class="language-tsx">function CreateProjectForm() {
  const [showAdvanced, setShowAdvanced] = useState(false)

  return (
    &lt;form&gt;
      &lt;Field label="Project name" required /&gt;
      &lt;Field label="Team" type="select" defaultValue="current-team" /&gt;

      &lt;button
        type="button"
        className="mt-4 flex items-center gap-1 text-sm text-muted"
        onClick={() =&gt; setShowAdvanced((v) =&gt; !v)}
        aria-expanded={showAdvanced}
        aria-controls="advanced-options"
      &gt;
        &lt;ChevronIcon className={cn('h-4 w-4 transition-transform', showAdvanced &amp;&amp; 'rotate-90')} /&gt;
        Advanced settings
      &lt;/button&gt;

      {showAdvanced &amp;&amp; (
        &lt;div id="advanced-options" className="mt-3 space-y-4 border-t pt-4"&gt;
          &lt;Field label="Project key" hint="Used in task IDs (e.g. PROJ-123)" /&gt;
          &lt;Field label="Visibility" type="select" defaultValue="team" /&gt;
          &lt;Field label="Start date" type="date" /&gt;
        &lt;/div&gt;
      )}

      &lt;Button type="submit" className="mt-6"&gt;Create project&lt;/Button&gt;
    &lt;/form&gt;
  )
}
</code></pre>
<p><code>aria-expanded</code> and <code>aria-controls</code> communicate the toggle state to screen readers. Advanced fields default to sensible values, so a new user who never opens the section still gets a correctly configured project.</p>
<h3>Actionable empty states</h3>
<p>Empty states are the most under-designed component in most SaaS products. "No data" helps no one. An empty state should tell users what the section is for, why it's empty, and what to do about it.</p>
<pre><code class="language-tsx">interface EmptyStateProps {
  icon: React.ReactNode
  title: string
  description: string
  action?: { label: string; onClick: () =&gt; void }
}

export function EmptyState({ icon, title, description, action }: EmptyStateProps) {
  return (
    &lt;div
      className="flex flex-col items-center justify-center py-16 px-8 text-center"
      role="status"
    &gt;
      &lt;div className="mb-4 opacity-30 text-muted-foreground" aria-hidden&gt;
        {icon}
      &lt;/div&gt;
      &lt;h3 className="text-base font-semibold text-foreground"&gt;{title}&lt;/h3&gt;
      &lt;p className="mt-1 max-w-xs text-sm text-muted-foreground"&gt;{description}&lt;/p&gt;
      {action &amp;&amp; (
        &lt;Button className="mt-6" onClick={action.onClick}&gt;
          {action.label}
        &lt;/Button&gt;
      )}
    &lt;/div&gt;
  )
}
</code></pre>
<p>The empty state on day one, when the account has no data, is your second onboarding moment. Use it. "No integrations yet — connect your first tool to start syncing data" with a primary CTA converts far better than a grey "No integrations found" message.</p>
<h3>Fast feedback</h3>
<p>Users form their opinion of a product's responsiveness in the first few interactions. If saving a record takes 800ms and shows nothing, the product feels slow regardless of what the server is actually doing. Show a loading state immediately. Update the UI before the server confirms where risk is low. Confirm completion with a brief success state rather than silence.</p>
<p>Perceived speed and actual speed are different numbers. You can move one without touching the other.</p>
<hr />
<h2>7. Measuring UX With Real Signals</h2>
<p>UX improvements that survive stakeholder debate are the ones attached to numbers. "The modal flow was confusing" loses to "the error state on the invite flow caused 34% of users to abandon the page without completing their action." Both describe the same problem. One gets prioritised in the next sprint.</p>
<p>Four signals track UX quality over time. I'd instrument all four before making any significant changes — the delta is what makes the argument.</p>
<p>Activation rate is the percentage of new trial users who reach the product's core value within week one. This is the number the onboarding redesign moves directly. Anything under 40% in B2B SaaS warrants a focused investigation into where drop-off is happening and why.</p>
<p>Time-to-value is the median time from account creation to the user's first meaningful action: sending a report, creating a project, connecting an integration, whatever the product's "aha moment" is. If it's above 10 minutes, the path to value is too long, too confusing, or asking for too much upfront.</p>
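<p>Use the median rather than the mean: a handful of users who only return days later would drag an average far from what typical users experience. A sketch of the computation, assuming you record a signup timestamp and a first-meaningful-action timestamp per user (the <code>UserTimestamps</code> shape is my own):</p>

```typescript
// Median time-to-value in minutes from per-user timestamps (ms since epoch).
interface UserTimestamps {
  signupAt: number
  firstValueAt: number | null // null: never reached the aha moment
}

export function medianTimeToValueMinutes(users: UserTimestamps[]): number | null {
  const durations = users
    .filter((u): u is UserTimestamps & { firstValueAt: number } => u.firstValueAt !== null)
    .map((u) => (u.firstValueAt - u.signupAt) / 60_000)
    .sort((a, b) => a - b)

  if (durations.length === 0) return null
  const mid = Math.floor(durations.length / 2)
  // Even count: average the two middle values
  return durations.length % 2 === 0
    ? (durations[mid - 1] + durations[mid]) / 2
    : durations[mid]
}
```

<p>Users who never activate are excluded here rather than treated as infinite; track them separately through the activation rate.</p>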
<p>Error rate by flow measures how often users hit validation errors, error states, or dead ends in specific flows. A checkout flow with a 30% error encounter rate doesn't have a payment problem — it has a form design problem. Track this per flow, not globally, so you know exactly where to look.</p>
<p>Support ticket signal is underused. UX issues generate tickets with two shared properties: they're common, and they're confusing enough that the user couldn't resolve them independently. The top ticket category every month is a UX audit waiting to happen. If you're seeing the same three questions every Monday, that's the backlog item.</p>
<pre><code class="language-tsx">// Type-safe UX event tracking — schema drift kills analytics data
type UXEvent =
  | { name: 'onboarding_step_viewed'; step: number; totalSteps: number }
  | { name: 'onboarding_step_completed'; step: number; timeOnStepMs: number }
  | { name: 'onboarding_abandoned'; step: number; totalSteps: number }
  | { name: 'empty_state_cta_clicked'; section: string }
  | { name: 'error_state_retry_clicked'; page: string; errorCode: string }
  | { name: 'form_validation_error'; form: string; field: string }

export function trackUXEvent(event: UXEvent) {
  analytics.track(event.name, {
    ...event,
    timestamp: Date.now(),
    url: window.location.pathname,
  })
}

// Instrument every step of high-value flows
function OnboardingStep({ step, total }: { step: number; total: number }) {
  const enteredAt = useRef(Date.now())
  const completed = useRef(false)

  useEffect(() =&gt; {
    enteredAt.current = Date.now() // restart the timer whenever the step changes
    completed.current = false
    trackUXEvent({ name: 'onboarding_step_viewed', step, totalSteps: total })
    return () =&gt; {
      // Fire the abandonment event only if the user left without completing the step
      if (!completed.current) {
        trackUXEvent({ name: 'onboarding_abandoned', step, totalSteps: total })
      }
    }
  }, [step, total])

  const handleComplete = () =&gt; {
    completed.current = true
    trackUXEvent({
      name: 'onboarding_step_completed',
      step,
      timeOnStepMs: Date.now() - enteredAt.current,
    })
  }

  // ... render
}
</code></pre>
<p>The discriminated union on <code>UXEvent</code> enforces correct properties at the type level. <code>form_validation_error</code> requires <code>field</code>. <code>onboarding_step_completed</code> requires <code>timeOnStepMs</code>. Analytics events drift when different developers send the same event under different property names. Typed events prevent the schema inconsistency that makes your data useless six months later.</p>
<p>The data this instrumentation produces is what turns a UX argument into a product priority. You're not saying "the onboarding is too long." You're saying "step 7 has a 58% abandonment rate and users spend an average of 4 minutes on it before leaving." Those are two different conversations.</p>
<hr />
<p>The redesigned product shipped three weeks after the audit. Week-one activation: 38%. Support tickets: down 46%. Time-to-value: 6 minutes, from 14. The product hadn't changed. The path through it had.</p>
<p>That's the thing about UX in SaaS: you don't always need more features. Sometimes you need the ones you have to be comprehensible. None of those improvements required new engineering. All of them required treating the interface as a product in its own right — not a pass at the end of a sprint.</p>
<p>If your activation rate is sitting in a backlog labelled <em>known issues</em>, the data to make the case is already there. Find your version of the 23% number.</p>
<p>If you're working on a SaaS product and want to connect design decisions to outcomes, browse my <a href="https://shubhamjha.com/projects">project work</a> or <a href="https://shubhamjha.com/contact">reach out directly</a>.</p>
]]></content:encoded></item><item><title><![CDATA[Scalable Next.js Web Apps in 2026: 380KB to 94KB]]></title><description><![CDATA[A 94 Lighthouse score and an 11-second mobile load time are not contradictory. They happen together constantly — because Lighthouse runs against a simulated fast connection, and your real traffic does]]></description><link>https://blog.shubhamjha.com/scalable-next-js-web-apps-in-2026-380kb-to-94kb</link><guid isPermaLink="true">https://blog.shubhamjha.com/scalable-next-js-web-apps-in-2026-380kb-to-94kb</guid><category><![CDATA[Next.js]]></category><category><![CDATA[React]]></category><category><![CDATA[architecture]]></category><category><![CDATA[performance]]></category><category><![CDATA[TypeScript]]></category><category><![CDATA[app router]]></category><dc:creator><![CDATA[Shubham Jha]]></dc:creator><pubDate>Sat, 18 Apr 2026 05:00:00 GMT</pubDate><content:encoded><![CDATA[<p>A 94 Lighthouse score and an 11-second mobile load time are not contradictory. They happen together constantly — because Lighthouse runs against a simulated fast connection, and your real traffic doesn't.</p>
<p>A client's B2B dashboard had exactly this problem. Desktop audit: clean. Mobile users — about 60% of their traffic — were bouncing at nearly double the desktop rate. On a mid-range Android on 4G, Time to Interactive was 11 seconds.</p>
<p>The culprit was the architecture. Every component was a Client Component. The entire route tree was <code>'use client'</code>. They'd built a scalable web app on paper — and shipped it like a glorified React SPA: 380KB of JavaScript to every first-time visitor before they could read a single word.</p>
<p>We spent the next two weeks re-architecting. Server Components for data and layout, Client Components only where interactivity was required, caching at the right layers. The bundle dropped from 380KB to 94KB. Mobile TTI went from 11 seconds to 2.3. Bounce rate on mobile dropped 41%.</p>
<p>The architecture hadn't changed what the app did. It changed what it cost users to use it.</p>
<hr />
<h2>1. The 2026 Mental Model: Server First</h2>
<p>The single most expensive architectural mistake in Next.js apps is treating the App Router like it's still the Pages Router with a new folder structure.</p>
<p>The App Router's default is a Server Component. That means: no JavaScript sent to the browser, direct access to databases and file systems, zero hydration cost. You opt into client-side behaviour with <code>'use client'</code>; when you do, you're making a deliberate choice to ship JavaScript to the browser and accept the complexity that comes with it.</p>
<p>Most teams invert this. They <code>'use client'</code> everything at the top of the tree, then wonder why their bundle is large and their INP score is poor.</p>
<p>The correct mental model is <strong>layers</strong>:</p>
<pre><code>Server Layer (zero client JS)
├── Layout components
├── Data fetching (direct DB calls, fetch with cache)
├── Static content and SEO metadata
└── Server Actions for mutations

Client Layer (deliberate JS)
├── Interactive islands (forms, modals, dropdowns)
├── Browser API integrations (geolocation, clipboard)
├── Real-time subscriptions (WebSockets, SSE)
└── Client-only state (animations, local UI)
</code></pre>
<p>A product page ends up looking like this:</p>
<pre><code class="language-tsx">// app/product/[id]/page.tsx — Server Component. Zero client JS.
export default async function ProductPage({ params }: { params: Promise&lt;{ id: string }&gt; }) {
  const { id } = await params; // Next.js 15: params is a Promise
  const product = await getProduct(id); // direct DB call, no API round-trip

  return (
    &lt;main&gt;
      &lt;ProductDetails product={product} /&gt;   {/* Server Component */}
      &lt;ProductImages images={product.images} /&gt; {/* Server Component */}
      &lt;AddToCartButton productId={product.id} /&gt; {/* 'use client' — isolated */}
    &lt;/main&gt;
  );
}
</code></pre>
<p><code>ProductDetails</code> and <code>ProductImages</code> never touch the browser. <code>AddToCartButton</code> opts into client-side JavaScript because it needs interactivity. The <code>'use client'</code> boundary is surgical: component-level, not page-level.</p>
<p>Your initial JavaScript payload stays small. Server-rendered HTML arrives fast. Interactive pieces hydrate on top of already-visible content. In the client scenario above, this was the difference between an 11-second TTI and a 2.3-second one.</p>
<p>Fetch data in Server Components, not in <code>useEffect</code> hooks. Keep <code>'use client'</code> at the leaf of the component tree, not the root. Use Server Actions for mutations where the logic is self-contained — a separate API route just adds a round-trip you don't need.</p>
<h3>Partial Prerendering: the ceiling for Next.js performance</h3>
<p>Partial Prerendering (PPR) is still experimental in Next.js 15, but it's the most significant performance primitive in the App Router since Server Components themselves. PPR lets you prerender the static shell of a route at build time and stream dynamic content into it at request time, without splitting into separate routes.</p>
<pre><code class="language-tsx">// next.config.ts — opt in to PPR
export default {
  experimental: {
    ppr: 'incremental', // enable per-route with export const experimental_ppr = true
  },
}

// app/product/[id]/page.tsx
export const experimental_ppr = true

export default async function ProductPage({ params }: { params: Promise&lt;{ id: string }&gt; }) {
  const { id } = await params
  const product = await getProduct(id) // static product data — prerendered at build

  return (
    &lt;main&gt;
      &lt;ProductDetails product={product} /&gt; {/* prerendered — arrives from edge cache */}
      &lt;Suspense fallback={&lt;PriceSkeleton /&gt;}&gt;
        &lt;LivePrice productId={id} /&gt;       {/* dynamic — streamed per-request */}
      &lt;/Suspense&gt;
      &lt;Suspense fallback={&lt;ReviewsSkeleton /&gt;}&gt;
        &lt;ReviewFeed productId={id} /&gt;      {/* dynamic — streamed per-request */}
      &lt;/Suspense&gt;
    &lt;/main&gt;
  )
}
</code></pre>
<p>The static shell — layout, product details, images — arrives from the edge cache in under 10ms. Dynamic content streams in from the origin as its queries complete. Users see a fully-rendered skeleton instantly, with live data filling in. This is the architecture ceiling for perceived performance in a Next.js app in 2026.</p>
<p>Server Components handle data delivery. What happens once that data reaches the client — how it's stored, shared, and updated — is a separate problem with different tools.</p>
<hr />
<h2>2. State Architecture: Where Data Actually Lives</h2>
<p>Most React bugs I've debugged trace to the same root cause: the wrong kind of state in the wrong place.</p>
<p>Before writing a single hook, categorize the state you need. Most bugs trace back to this: server state copied into local state, global state used where component state would've been fine. The category determines the tool.</p>
<p><strong>Server state</strong> is data that lives on a server and is temporarily cached on the client. User profiles, product lists, feed items. It has a lifecycle (loading, stale, revalidating) that's fundamentally different from local state. This belongs in TanStack Query, not in <code>useState</code>. Copying server state into local state creates a second source of truth that will drift.</p>
<p><strong>UI state</strong> is local to a component or subtree: modal open/closed, active tab, hover state. This lives in <code>useState</code> or <code>useReducer</code>. It doesn't cross component boundaries and it doesn't need to be persisted.</p>
<p><strong>Shared app state</strong> crosses component boundaries without a parent-child relationship: active user session, current theme, notification count. This lives in Zustand or Context. Keep it as lean as possible. Global state is the hardest kind to trace.</p>
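<p>A lean store for the session case might look like this. The shape is illustrative, not prescriptive:</p>
<pre><code class="language-tsx">// stores/session.ts: shared app state, kept deliberately small
import { create } from 'zustand';

interface SessionState {
  user: { id: string; email: string } | null;
  setUser: (user: { id: string; email: string } | null) =&gt; void;
}

export const useSession = create&lt;SessionState&gt;()((set) =&gt; ({
  user: null,
  setUser: (user) =&gt; set({ user }),
}));

// Components subscribe to exactly the slice they need:
// const user = useSession((s) =&gt; s.user);
</code></pre>
<p>Selector-based subscription is the point: a component reading only <code>user</code> doesn't re-render when some other slice of the store changes.</p>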
<pre><code class="language-tsx">// Wrong: server state duplicated into local state — will drift
const [user, setUser] = useState&lt;User | null&gt;(null);

useEffect(() =&gt; {
  fetchUser(userId).then(setUser);
}, [userId]);

// Correct: TanStack Query owns the lifecycle
const { data: user, isLoading, isError } = useQuery({
  queryKey: ['user', userId],
  queryFn: () =&gt; fetchUser(userId),
  staleTime: 60_000, // treat as fresh for 60 seconds: no refetches in that window
});
</code></pre>
<p>The <code>useState</code> version gives you none of the things you'll eventually need: no caching, no deduplication across components, no background revalidation, no cancellation on unmount. The <code>useQuery</code> version gives you all four, and every component that queries the same key shares the same cache.</p>
<p>The full state stack for a 2026 Next.js app:</p>
<table>
<thead>
<tr>
<th>State Type</th>
<th>Home</th>
<th>Why</th>
</tr>
</thead>
<tbody><tr>
<td>Data from the server</td>
<td>Server Components or TanStack Query</td>
<td>Single source of truth</td>
</tr>
<tr>
<td>Forms and validation</td>
<td>React Hook Form + Zod</td>
<td>Uncontrolled inputs, schema validation</td>
</tr>
<tr>
<td>Global client state</td>
<td>Zustand (lean)</td>
<td>Simple API, no boilerplate</td>
</tr>
<tr>
<td>Local UI state</td>
<td><code>useState</code> / <code>useReducer</code></td>
<td>No overhead for ephemeral state</td>
</tr>
<tr>
<td>Memoization</td>
<td>React Compiler (React 19)</td>
<td>Automatic; manual only for edge cases</td>
</tr>
</tbody></table>
<p>For a deep dive into how hooks fit into this model, <a href="https://shubhamjha.com/blog/react-hooks-typescript">this guide on React hooks and TypeScript patterns</a> covers each layer in detail.</p>
<hr />
<h2>3. Performance Engineering: Core Web Vitals as Architecture</h2>
<p>Performance isn't a build step — it's an architectural constraint. By the time you're running Lighthouse audits before launch, most performance problems are already baked in.</p>
<p>The metrics that matter in 2026 are the three Core Web Vitals:</p>
<ul>
<li><strong>LCP</strong> (Largest Contentful Paint): measures how fast the main content loads. Good: under 2.5s.</li>
<li><strong>INP</strong> (Interaction to Next Paint): measures how fast the page responds to user input. Good: under 200ms. This replaced FID as a Core Web Vital in March 2024.</li>
<li><strong>CLS</strong> (Cumulative Layout Shift): measures visual stability. Good: under 0.1.</li>
</ul>
<p>Most Next.js apps with unoptimized architectures sit in the "needs improvement" band on mobile: technically functional, but sluggish on mid-range hardware. The usual cause isn't a missing <code>useMemo</code>. It's structural: too much JavaScript in the initial bundle, layouts that shift as images load, or click handlers that do expensive work synchronously.</p>
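<p>The synchronous-click-handler case has a direct fix in React 18+: mark the expensive update as non-urgent so the paint that acknowledges the input isn't blocked behind it. A sketch, where <code>filterRows</code>, <code>setQuery</code>, and <code>setFilteredRows</code> are stand-ins for your own expensive filter and state setters:</p>
<pre><code class="language-tsx">const [isPending, startTransition] = useTransition();

function handleFilterChange(value: string) {
  setQuery(value); // urgent: the input echoes the keystroke on the next paint
  startTransition(() =&gt; {
    // non-urgent: React can interrupt this work to keep the page responsive
    setFilteredRows(filterRows(rows, value));
  });
}
</code></pre>
<p><code>isPending</code> gives you a flag for showing a subtle loading state while the deferred update runs.</p>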
<p>For a deep dive into fixing each metric in a real Next.js app — including what happens when your LCP element isn't an image at all — <a href="https://shubhamjha.com/blog/core-web-vitals-nextjs-optimization">this post on Core Web Vitals optimization</a> covers the specific changes that moved numbers.</p>
<h3>The image problem</h3>
<p>Image optimization is the single fastest way to improve LCP and CLS. Always use <code>next/image</code>:</p>
<pre><code class="language-tsx">import Image from 'next/image';

// Bad: raw &lt;img&gt; — no lazy loading, no size optimization, causes CLS
&lt;img src="/hero.png" alt="Hero" /&gt;

// Good: next/image — optimized formats, lazy loading, prevents CLS via reserved space
&lt;Image
  src="/hero.png"
  alt="Product hero shot"
  width={1200}
  height={630}
  priority // only for above-the-fold images
  placeholder="blur"
  blurDataURL={product.blurHash} // generate with plaiceholder or @unpic/placeholder
/&gt;
</code></pre>
<p>The <code>priority</code> prop tells Next.js to preload the image; use it only for the largest above-the-fold element. The explicit <code>width</code> and <code>height</code> reserve layout space before the image loads, which is what prevents CLS; <code>placeholder="blur"</code> fills that reserved space with a low-resolution preview so the wait doesn't look broken.</p>
<h3>The JavaScript bundle problem</h3>
<p>Every <code>'use client'</code> directive adds JavaScript to the bundle. The architectural pattern that solves this is component splitting: isolating interactive behaviour into the smallest possible Client Component, leaving the rest of the tree server-rendered.</p>
<p>For heavy components that aren't needed on initial load, lazy loading is zero-cost:</p>
<pre><code class="language-tsx">'use client'; // React.lazy hydrates in the client runtime
import { lazy, Suspense } from 'react';

const HeavyChartDashboard = lazy(() =&gt; import('./HeavyChartDashboard'));

function AnalyticsPage() {
  return (
    &lt;Suspense fallback={&lt;ChartSkeleton /&gt;}&gt;
      &lt;HeavyChartDashboard /&gt;
    &lt;/Suspense&gt;
  );
}
</code></pre>
<p><code>lazy()</code> splits <code>HeavyChartDashboard</code> into a separate chunk. The initial bundle never includes it. Users who don't navigate to the analytics view never download it at all.</p>
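<p>In a Next.js app, <code>next/dynamic</code> does the same split with one extra option worth knowing about:</p>
<pre><code class="language-tsx">import dynamic from 'next/dynamic';

// ssr: false skips server rendering entirely; the chunk loads only in the browser.
// In the App Router, this option is only allowed inside Client Components.
const HeavyChartDashboard = dynamic(() =&gt; import('./HeavyChartDashboard'), {
  ssr: false,
  loading: () =&gt; &lt;ChartSkeleton /&gt;,
});
</code></pre>
<p><code>ssr: false</code> is the right call for components that touch browser-only APIs (canvas, <code>window</code>, charting libraries) and would otherwise crash during server rendering.</p>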
<h3>Caching at the right layers</h3>
<p>Next.js provides four caching layers: Request Memoization, the Data Cache, the Full Route Cache, and the Router Cache. Most teams use none of them deliberately.</p>
<p>The practical defaults:</p>
<pre><code class="language-tsx">// Cached indefinitely — static data (revalidate manually on update)
const data = await fetch('/api/products', { cache: 'force-cache' });

// Cached for 60 seconds — semi-static data
const data = await fetch('/api/trending', { next: { revalidate: 60 } });

// Never cached — personalized or always-fresh data
const data = await fetch('/api/user/cart', { cache: 'no-store' });
</code></pre>
<p>Getting these right means your server isn't doing redundant work on every request. It also means your Time to First Byte stays low, which directly affects LCP.</p>
<p>When a mutation happens — a user creates a record, submits a form, deletes an item — you need to invalidate the right cache without blowing away everything. Server Actions integrate with <code>revalidatePath</code> and <code>revalidateTag</code> for surgical invalidation:</p>
<pre><code class="language-tsx">// app/actions/createPost.ts
'use server';
import { revalidatePath, revalidateTag } from 'next/cache';

export async function createPost(data: PostInput) {
  await db.post.create({ data });
  revalidatePath('/blog');           // re-render the blog listing page
  revalidateTag('posts');            // invalidate all fetches tagged 'posts'
}
</code></pre>
<p>The caching layer stays consistent without a full cache flush. Users on the blog page see fresh data on their next visit without everyone else's cached responses being thrown away.</p>
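<p>For <code>revalidateTag('posts')</code> to have anything to invalidate, the reads need to carry that tag at fetch time. A sketch of the read side; the endpoint URL is illustrative:</p>
<pre><code class="language-tsx">// Any Server Component that reads posts tags the fetch
const res = await fetch('https://api.example.com/posts', {
  next: { revalidate: 3600, tags: ['posts'] },
});
const posts = await res.json();
</code></pre>
<p>The tag is the contract between the read and the write: any mutation that touches posts calls <code>revalidateTag('posts')</code>, and every tagged fetch goes stale together, regardless of which route it lives on.</p>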
<h3>Streaming: progressive rendering for data with different latencies</h3>
<p>Caching controls what the server fetches. Suspense streaming controls what the browser sees while those fetches resolve.</p>
<p>A common dashboard problem: the user header loads in 20ms, but the revenue stats query takes 400ms and the activity feed takes 600ms. Without streaming, the entire page waits for 600ms before painting anything. With Suspense streaming, the header renders immediately, the revenue stats arrive at 400ms, the activity feed at 600ms. The user sees content in 20ms — instead of watching a blank screen for 600ms.</p>
<pre><code class="language-tsx">// Without streaming: everyone waits for the slowest query
export default async function DashboardPage() {
  const [user, revenue, activity] = await Promise.all([
    getUser(),      // ~20ms
    getRevenue(),   // ~400ms
    getActivity(),  // ~600ms — the whole page waits for this
  ])
  return &lt;Dashboard user={user} revenue={revenue} activity={activity} /&gt;
}

// With streaming: header paints at 20ms, components stream in as queries complete
export default async function DashboardPage() {
  const user = await getUser() // fast — render immediately

  return (
    &lt;main&gt;
      &lt;UserHeader user={user} /&gt;

      &lt;Suspense fallback={&lt;RevenueSkeleton /&gt;}&gt;
        &lt;RevenueStats /&gt; {/* async Server Component — streams in at ~400ms */}
      &lt;/Suspense&gt;

      &lt;Suspense fallback={&lt;ActivitySkeleton /&gt;}&gt;
        &lt;ActivityFeed /&gt; {/* async Server Component — streams in at ~600ms */}
      &lt;/Suspense&gt;
    &lt;/main&gt;
  )
}

// Each component fetches its own data — no prop drilling, no coordination needed
async function RevenueStats() {
  const revenue = await getRevenue()
  return &lt;RevenueCard data={revenue} /&gt;
}
</code></pre>
<p>Three things to get right when adding Suspense boundaries:</p>
<p><strong>Skeleton accuracy matters for CLS.</strong> A skeleton that's 80px tall and loaded content that's 160px tall still causes layout shift when the real component mounts. Measure the loaded height and match it in the skeleton.</p>
<p><strong>Coarse boundaries beat fine-grained ones.</strong> A Suspense boundary per data item creates a visible "popcorn" loading effect. Wrap logical sections instead — a stats row, a full feed, a sidebar panel.</p>
<p><strong>Order determines streaming priority.</strong> HTTP/2 streams content in document order. Wrap your most important above-the-fold data in the first Suspense boundary so it arrives first.</p>
<p>This pattern changed one dashboard from a 2.8s LCP to a 0.9s LCP. The database queries didn't get faster. The page just stopped making users wait for the slowest one before showing them anything.</p>
<p>Getting caching right keeps TTFB low as the app scales. What it can't do is catch a correctly-cached response landing in a type that allows impossible states — that's a different problem, and it needs to be solved before a user hits it at runtime.</p>
<hr />
<h2>4. TypeScript as Your Architecture Enforcer</h2>
<p>TypeScript in most production codebases is a typed façade over untyped logic. Interfaces on props, a typed <code>useState</code>, and then <code>any</code> everywhere things get complex. That's not type safety. It's documentation that lies.</p>
<p>TypeScript's job is making impossible states unrepresentable — not annotating what can go wrong, but making wrong combinations inexpressible before they get written.</p>
<p><strong>Discriminated unions over boolean flags</strong></p>
<p>Three boolean flags give you eight possible states. Only three are valid. You're shipping the other five as potential bugs. The fix:</p>
<pre><code class="language-tsx">// The problem: three flags, eight possible states, only three are valid
interface UserState {
  data: User | null;
  isLoading: boolean;
  error: string | null;
}
// isLoading: true, data: User, error: "..."  ← valid TypeScript, impossible state

// The fix: make each valid state a separate variant
type UserState =
  | { status: 'idle' }
  | { status: 'loading' }
  | { status: 'success'; data: User }
  | { status: 'error'; error: string };

function UserProfile({ state }: { state: UserState }) {
  if (state.status === 'loading') return &lt;Spinner /&gt;;
  if (state.status === 'error') return &lt;ErrorBanner message={state.error} /&gt;;
  if (state.status === 'success') return &lt;Profile user={state.data} /&gt;;
  return null;
}
</code></pre>
<p>TypeScript now knows <code>state.data</code> only exists when <code>status === 'success'</code>. Accessing it in the loading branch is a compile error, not a runtime crash. TypeScript eliminates the bug before it's written.</p>
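<p>The same union also buys exhaustiveness checking. A <code>switch</code> with a <code>never</code>-typed default, using the <code>UserState</code> type defined above, fails to compile the moment someone adds a fifth variant without handling it:</p>
<pre><code class="language-tsx">function renderLabel(state: UserState): string {
  switch (state.status) {
    case 'idle': return 'Waiting';
    case 'loading': return 'Loading';
    case 'success': return state.data.email;
    case 'error': return state.error;
    default: {
      // Adding a fifth variant makes `state` non-never here: compile error
      const unreachable: never = state;
      return unreachable;
    }
  }
}
</code></pre>
<p>Every new variant forces every consumer to handle it before the code ships, which is the compiler doing code review for you.</p>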
<p><strong>Type your API boundaries strictly</strong></p>
<p>The most dangerous <code>any</code> in a codebase is the one at an API boundary:</p>
<pre><code class="language-tsx">// Dangerous: any from fetch is silent about shape changes
const res = await fetch('/api/user');
const user: any = await res.json(); // TypeScript has left the room

// Safe: validate at the boundary, get typed output
import { z } from 'zod';

const UserSchema = z.object({
  id: z.string(),
  email: z.string().email(),
  plan: z.enum(['free', 'pro', 'enterprise']),
});

async function getUser(id: string): Promise&lt;z.infer&lt;typeof UserSchema&gt;&gt; {
  const res = await fetch(`/api/user/${id}`);
  if (!res.ok) throw new Error(`HTTP ${res.status}`);
  return UserSchema.parse(await res.json()); // throws if shape is wrong
}
</code></pre>
<p>Zod validates at runtime and infers the TypeScript type statically. The type is derived from the validator. They can't drift apart. When the API changes shape, your TypeScript errors tell you where to fix it.</p>
<p><strong>Typed environment variables</strong></p>
<p>Environment variables are untyped strings by default. A missing <code>NEXT_PUBLIC_API_URL</code> fails silently at runtime, not at build time:</p>
<pre><code class="language-tsx">// src/lib/env.ts — validate all env vars at startup
import { z } from 'zod';

const envSchema = z.object({
  NEXT_PUBLIC_API_URL: z.string().url(),
  DATABASE_URL: z.string().min(1),
  NEXTAUTH_SECRET: z.string().min(32),
});

export const env = envSchema.parse(process.env);
// If any variable is missing or wrong type, the app fails at startup — not in prod at 2am
</code></pre>
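<p>One wrinkle: this file can only run on the server, because client bundles receive only <code>NEXT_PUBLIC_*</code> variables, and Next.js inlines those only when they're referenced as literal property accesses. A client-safe schema has to list them out explicitly; a sketch following the same pattern:</p>
<pre><code class="language-tsx">// src/lib/env.client.ts: safe to import from Client Components
import { z } from 'zod';

const clientEnvSchema = z.object({
  NEXT_PUBLIC_API_URL: z.string().url(),
});

export const clientEnv = clientEnvSchema.parse({
  // Must be a literal property access so the bundler can inline the value
  NEXT_PUBLIC_API_URL: process.env.NEXT_PUBLIC_API_URL,
});
</code></pre>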
<p>TypeScript enforces the shape of data within your codebase. What it can't enforce is visual consistency across your UI. That's where a design system comes in. The two work together: TypeScript makes the wrong component API unwritable; the design system makes the wrong visual decision unreachable.</p>
<hr />
<h2>5. Design Systems and Long-Term Scale</h2>
<p>A design system isn't just a component library — it's the constraints that make the right decision the default one. The teams I've seen ship fast without accumulating UI debt have the same ingredients: typed component APIs, documented state patterns, and a token file for spacing, typography, and color. Without those, you're making the same decisions independently across the codebase, and the inconsistencies accumulate in ways users notice before you do.</p>
<p>Here's what that looks like for a button:</p>
<pre><code class="language-tsx">// components/ui/Button.tsx
type ButtonVariant = 'primary' | 'secondary' | 'ghost' | 'destructive';
type ButtonSize = 'sm' | 'md' | 'lg';

interface ButtonProps {
  variant?: ButtonVariant;
  size?: ButtonSize;
  loading?: boolean;
  disabled?: boolean;
  children: React.ReactNode;
  onClick?: () =&gt; void;
  type?: 'button' | 'submit' | 'reset';
}

export function Button({
  variant = 'primary',
  size = 'md',
  loading = false,
  disabled = false,
  children,
  onClick,
  type = 'button',
}: ButtonProps) {
  return (
    &lt;button
      type={type}
      onClick={onClick}
      disabled={disabled || loading}
      aria-busy={loading}
      className={cn(buttonVariants({ variant, size }))}
    &gt;
      {loading ? &lt;Spinner size="sm" /&gt; : children}
    &lt;/button&gt;
  );
}
</code></pre>
<p>Every variant is typed. The component handles loading state explicitly (no relying on the caller to disable the button manually). Accessibility is baked in (<code>aria-busy</code>). The API is stable; adding a new variant doesn't require touching every call site.</p>
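<p>The <code>buttonVariants</code> helper referenced above is typically generated with <code>class-variance-authority</code>. That library choice and the class names here are assumptions, not the only way to do it:</p>
<pre><code class="language-tsx">import { cva } from 'class-variance-authority';

export const buttonVariants = cva(
  // base classes shared by every variant
  'inline-flex items-center justify-center rounded-md font-medium transition-colors',
  {
    variants: {
      variant: {
        primary: 'bg-primary text-primary-foreground hover:bg-primary/90',
        secondary: 'bg-secondary text-secondary-foreground hover:bg-secondary/80',
        ghost: 'hover:bg-accent hover:text-accent-foreground',
        destructive: 'bg-destructive text-destructive-foreground hover:bg-destructive/90',
      },
      size: {
        sm: 'h-8 px-3 text-sm',
        md: 'h-10 px-4',
        lg: 'h-12 px-6 text-lg',
      },
    },
    defaultVariants: { variant: 'primary', size: 'md' },
  }
);
</code></pre>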
<p><code>buttonVariants({ variant: 'destructive' })</code> resolves to Tailwind classes like <code>bg-destructive text-destructive-foreground</code>, which resolve to CSS custom properties defined once:</p>
<pre><code class="language-css">/* globals.css — one variable, every component */
:root {
  --primary: 222 47% 11%;
  --destructive: 0 84% 60%;
  --secondary: 210 40% 96%;
  --radius: 0.5rem;
}
</code></pre>
<p>When the brand color changes, the token changes — and every component that references it updates without touching a single component file. That's the leverage a design system adds over a component library: the component API stays stable, only the token changes, and it changes everywhere at once.</p>
<p>What separates a real design system from a component folder:</p>
<ul>
<li>Typed variants, so invalid values don't compile</li>
<li>Documented state patterns for every data-driven component: loading, empty, error, success</li>
<li>A token file for spacing, typography, and color</li>
<li>Accessibility baked into primitives: focus rings, aria attributes, keyboard navigation</li>
</ul>
<p>Teams that skip these tend to add them back as fire drills when the first accessibility audit lands.</p>
<p>For most teams, shadcn/ui plus a token file gets you 80% of the way there without the overhead of building from scratch. The components are yours to own: copy, modify, extend. No black-box dependency that upgrades and breaks your UI. For the UX side of what makes a SaaS product feel trustworthy at scale — component states, accessibility patterns, motion design — <a href="https://shubhamjha.com/blog/saas-ui-ux-design-principles">this SaaS UI/UX design guide</a> covers the decisions that matter most.</p>
<p>A consistent UI builds trust. Security defaults are what keep it from being undermined: hijacked links, clickjacked pages, injected content. The interface and the infrastructure have to hold together.</p>
<hr />
<h2>6. Security Defaults That Ship</h2>
<p>Security in a web app is mostly about defaults. The teams that get breached aren't doing exotic things wrong. They're missing boring defaults.</p>
<p><strong>HTTP security headers</strong></p>
<p>Add these to <code>next.config.ts</code>. They cost nothing and prevent a class of attacks:</p>
<pre><code class="language-ts">// next.config.ts
const securityHeaders = [
  { key: 'X-DNS-Prefetch-Control', value: 'on' },
  { key: 'X-Frame-Options', value: 'SAMEORIGIN' },
  { key: 'X-Content-Type-Options', value: 'nosniff' },
  { key: 'Referrer-Policy', value: 'strict-origin-when-cross-origin' },
  {
    key: 'Permissions-Policy',
    value: 'camera=(), microphone=(), geolocation=()',
  },
  {
    key: 'Content-Security-Policy',
    value: [
      "default-src 'self'",
      "script-src 'self' 'unsafe-inline'", // tighten this for production
      "style-src 'self' 'unsafe-inline'",
      "img-src 'self' data: https:",
    ].join('; '),
  },
];

const nextConfig = {
  headers: async () =&gt; [
    { source: '/(.*)', headers: securityHeaders },
  ],
};
</code></pre>
<p>The <code>'unsafe-inline'</code> comment is the one that always bites teams mid-sprint. The moment analytics, a chat widget, or an error monitoring script gets added, you're forced to choose: keep <code>'unsafe-inline'</code> (which lets any inline JavaScript run, defeating the point of CSP) or switch to nonces. Nonces are the right answer — a random value generated per request, injected into your CSP header and into any script tag you explicitly allow. Everything else gets blocked.</p>
<p>Next.js supports this via middleware:</p>
<pre><code class="language-ts">// middleware.ts — generate a nonce per request
import { NextResponse } from 'next/server'

export function middleware() {
  const nonce = Buffer.from(crypto.randomUUID()).toString('base64')
  const csp = [
    "default-src 'self'",
    `script-src 'self' 'nonce-${nonce}' https://plausible.io`,
    "style-src 'self' 'unsafe-inline'",
    "img-src 'self' data: https:",
  ].join('; ')

  const response = NextResponse.next()
  response.headers.set('Content-Security-Policy', csp)
  response.headers.set('x-nonce', nonce) // read this in your layout
  return response
}
</code></pre>
<p>Your root layout reads <code>x-nonce</code> from headers and passes it to any third-party <code>&lt;Script nonce={nonce}&gt;</code> tag. Scripts without a valid nonce get blocked by the browser. The CSP stays strict, vendor scripts work, and you don't have to gut the header every time a new tool gets added.</p>
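<p>The read side of that contract, in the root layout. This is a sketch: <code>headers()</code> is async as of Next.js 15, and the Plausible script URL matches the allowlist in the middleware above:</p>
<pre><code class="language-tsx">// app/layout.tsx: forward the per-request nonce to allowed scripts
import { headers } from 'next/headers'
import Script from 'next/script'

export default async function RootLayout({ children }: { children: React.ReactNode }) {
  const nonce = (await headers()).get('x-nonce') ?? undefined

  return (
    &lt;html lang="en"&gt;
      &lt;body&gt;
        {children}
        &lt;Script src="https://plausible.io/js/script.js" nonce={nonce} /&gt;
      &lt;/body&gt;
    &lt;/html&gt;
  )
}
</code></pre>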
<p><strong>Safe external links</strong></p>
<p>Any <code>&lt;a target="_blank"&gt;</code> without <code>rel="noopener noreferrer"</code> hands the linked page a reference to your window via its <code>window.opener</code>, a classic tabnabbing vector. Enforce this at the component level:</p>
<pre><code class="language-tsx">// components/ui/ExternalLink.tsx
interface ExternalLinkProps {
  href: string;
  children: React.ReactNode;
}

export function ExternalLink({ href, children }: ExternalLinkProps) {
  return (
    &lt;a href={href} target="_blank" rel="noopener noreferrer"&gt;
      {children}
    &lt;/a&gt;
  );
}
// Never use &lt;a target="_blank"&gt; directly — use this instead
</code></pre>
<p><strong>Input validation at system boundaries</strong></p>
<p>Validate everything that enters your system. <code>zod</code> at the API layer, <code>React Hook Form</code> + <code>zod</code> at the form layer. Never trust client-sent data in Server Actions:</p>
<pre><code class="language-tsx">// app/actions/updateProfile.ts
'use server';
import { z } from 'zod';
import { auth } from '@/lib/auth';

const UpdateProfileSchema = z.object({
  name: z.string().min(1).max(100),
  bio: z.string().max(500).optional(),
});

export async function updateProfile(formData: FormData) {
  const session = await auth(); // always verify auth in Server Actions
  if (!session) throw new Error('Unauthorized');

  const result = UpdateProfileSchema.safeParse({
    name: formData.get('name'),
    bio: formData.get('bio'),
  });

  if (!result.success) return { error: result.error.flatten() };

  await db.user.update({ where: { id: session.user.id }, data: result.data });
  return { success: true };
}
</code></pre>
<p>Auth check, schema validation, typed result. Any input that doesn't match the schema never reaches the database.</p>
<p>Security defaults prevent things from going wrong. Observability tells you when they do anyway, often for longer than anyone expected before the first complaint arrives.</p>
<hr />
<h2>7. Observability at the Frontend Layer</h2>
<p>The question that always comes up after a production incident: "how long was this broken before someone told us?"</p>
<p>That's what frontend observability is for. Error tracking, performance monitoring, instrumenting the flows that drive revenue — not because these are interesting to set up, but because without them, bugs live in production for days before a user complaint surfaces them.</p>
<p><strong>Error boundaries</strong></p>
<p>Every async data boundary in your app should have a fallback. Next.js provides this via <code>error.tsx</code> at the route level:</p>
<pre><code class="language-tsx">// app/dashboard/error.tsx
'use client'; // error boundaries must be client components
import { useEffect } from 'react';

interface ErrorProps {
  error: Error &amp; { digest?: string };
  reset: () =&gt; void;
}

export default function DashboardError({ error, reset }: ErrorProps) {
  useEffect(() =&gt; {
    // Send to error tracking (Sentry, Datadog, etc.)
    reportError(error);
  }, [error]);

  return (
    &lt;div role="alert"&gt;
      &lt;h2&gt;Something went wrong loading your dashboard.&lt;/h2&gt;
      &lt;p&gt;Error ID: {error.digest}&lt;/p&gt;
      &lt;button onClick={reset}&gt;Try again&lt;/button&gt;
    &lt;/div&gt;
  );
}
</code></pre>
<p>The <code>digest</code> field is a server-side error ID that links the user-visible error to the server log. When a user reports an issue, you have a trace ID to search on.</p>
<p><strong>Key flow monitoring</strong></p>
<p>Instrument the flows that matter most to your product:</p>
<pre><code class="language-tsx">// lib/analytics.ts — thin wrapper over your analytics provider
export function trackEvent(event: string, properties?: Record&lt;string, unknown&gt;) {
  if (typeof window === 'undefined') return;
  // PostHog, Segment, Plausible — implementation detail
  analytics.track(event, {
    ...properties,
    timestamp: Date.now(),
    url: window.location.pathname,
  });
}

// Usage in a conversion-critical flow
async function handleCheckout() {
  trackEvent('checkout_initiated', { plan, billing_cycle });
  try {
    await createSubscription(plan);
    trackEvent('checkout_succeeded', { plan });
  } catch (err) {
    trackEvent('checkout_failed', { plan, error: getErrorMessage(err) });
    throw err;
  }
}
</code></pre>
<p>Three events per conversion flow: initiated, succeeded, failed. This is the minimum that lets you build a funnel, calculate conversion rate, and know when something breaks before your revenue dashboard does.</p>
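<p>Those three counts compose directly into funnel maths. A minimal sketch, assuming you can pull per-event totals from your provider's API (the <code>FunnelCounts</code> shape here is hypothetical, not a provider type):</p>
<pre><code class="language-ts">// Hypothetical shape: per-event totals pulled from your analytics provider
interface FunnelCounts {
  initiated: number;
  succeeded: number;
  failed: number;
}

function summarizeFunnel({ initiated, succeeded, failed }: FunnelCounts) {
  const conversionRate = initiated === 0 ? 0 : succeeded / initiated;
  const failureRate = initiated === 0 ? 0 : failed / initiated;
  // Started but neither succeeded nor failed: closed the tab, lost connection
  const abandoned = Math.max(0, initiated - succeeded - failed);
  return { conversionRate, failureRate, abandoned };
}
</code></pre>
<p>A spike in <code>failureRate</code> is the signal you want minutes after a bad deploy, not days later from the revenue dashboard.</p>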
<p><strong>Real user monitoring for Core Web Vitals</strong></p>
<p>Lighthouse gives you a score against a simulated connection. Real user monitoring (RUM) gives you the actual distribution across your traffic — the p75 LCP on a mid-range Android in Southeast Asia, not your MacBook on Wi-Fi. These are different numbers, sometimes by 3×.</p>
<p>The web-vitals library pipes directly into any analytics endpoint:</p>
<pre><code class="language-ts">// app/web-vitals.ts — wire up once in your root layout
import { onLCP, onINP, onCLS } from 'web-vitals'
import type { Metric } from 'web-vitals'

function sendToAnalytics(metric: Metric) {
  // Sample 10% of sessions to avoid flooding your endpoint on high-traffic pages
  if (Math.random() &gt; 0.1) return

  navigator.sendBeacon(
    '/api/vitals',
    new Blob(
      [JSON.stringify({ name: metric.name, value: metric.value, rating: metric.rating, page: window.location.pathname })],
      { type: 'application/json' }
    )
  )
}

onLCP(sendToAnalytics)
onINP(sendToAnalytics)
onCLS(sendToAnalytics)
</code></pre>
<p>The <code>rating</code> field (<code>"good"</code>, <code>"needs-improvement"</code>, <code>"poor"</code>) lets you slice by threshold without doing the math yourself. The <code>page</code> field is the lever that matters most: you want to know that your homepage LCP is 1.4s but your order history page is 3.8s, because those are different problems with different fixes.</p>
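<p>On the receiving end, the beacon needs somewhere to land. A minimal sketch of that endpoint, assuming a Next.js route handler at <code>app/api/vitals/route.ts</code> (the logging line is a stand-in for whatever store or pipeline you forward to):</p>
<pre><code class="language-ts">// app/api/vitals/route.ts
export async function POST(request: Request) {
  try {
    const metric = await request.json();
    // Drop anything that isn't one of the vitals we report
    if (!['LCP', 'INP', 'CLS'].includes(metric?.name)) {
      return new Response(null, { status: 400 });
    }
    // Stand-in: forward to a time-series store or analytics pipeline
    console.log('[vitals]', metric.name, metric.value, metric.rating, metric.page);
    return new Response(null, { status: 204 });
  } catch {
    return new Response(null, { status: 400 });
  }
}
</code></pre>
<p><code>sendBeacon</code> delivers even while the page is unloading, which is exactly when CLS and INP values are finalised, so prefer it over <code>fetch</code> here.</p>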
<p>Pair this with a performance budget in CI — a warning, not a block — so regressions surface in pull requests before they reach production users:</p>
<pre><code class="language-js">// lighthouserc.js
module.exports = {
  ci: {
    assert: {
      assertions: {
        'largest-contentful-paint': ['warn', { maxNumericValue: 2500 }],
        'cumulative-layout-shift': ['error', { maxNumericValue: 0.1 }],
        'total-blocking-time': ['warn', { maxNumericValue: 300 }],
      },
    },
  },
}
</code></pre>
<p>Make CLS a hard error. CLS regressions are invisible to developers (they're subtle on a fast machine) but immediately noticeable to users. Everything else stays a warning — load time improves over time, but a layout that jumps is non-negotiable.</p>
<hr />
<h2>8. Production Checklist</h2>
<p>Before shipping a Next.js app, run through this list. Each item maps to a section above.</p>
<p><strong>Architecture</strong></p>
<ul>
<li> Default to Server Components; <code>'use client'</code> boundary is at the leaf, not the root</li>
<li> No <code>useEffect</code> used for data fetching — Server Components or TanStack Query used instead</li>
<li> No page-level <code>'use client'</code> directives where component-level would suffice</li>
<li> Server Actions used for mutations where appropriate</li>
</ul>
<p><strong>State</strong></p>
<ul>
<li> State categorized as UI, server, or shared before any hook is written</li>
<li> Server state lives in TanStack Query, not <code>useState</code></li>
<li> Async state modelled as discriminated union, not three boolean flags</li>
<li> Global state (Zustand) is minimal and justified</li>
</ul>
<p><strong>Performance</strong></p>
<ul>
<li> All images use <code>next/image</code> with explicit <code>width</code>, <code>height</code>, and <code>alt</code></li>
<li> Above-the-fold images use <code>priority</code> prop</li>
<li> Heavy components lazy-loaded with <code>lazy()</code> + <code>Suspense</code></li>
<li> Fetch calls use the correct cache strategy (<code>force-cache</code>, <code>revalidate</code>, <code>no-store</code>)</li>
<li> LCP under 2.5s, INP under 200ms, CLS under 0.1 on mobile (LCP and CLS via Lighthouse; INP only shows up in field data, so check RUM)</li>
</ul>
<p><strong>TypeScript</strong></p>
<ul>
<li> No <code>any</code> at API boundaries — Zod used for runtime validation</li>
<li> Async state modelled as discriminated union</li>
<li> Environment variables validated with Zod at startup</li>
<li> No <code>React.FC</code> — explicit function signatures used</li>
</ul>
<p><strong>Security</strong></p>
<ul>
<li> HTTP security headers configured in <code>next.config.ts</code></li>
<li> All external links use <code>rel="noopener noreferrer"</code></li>
<li> All Server Actions validate auth before touching data</li>
<li> All user input validated with Zod before reaching the database</li>
<li> No secrets in client-side code or <code>.env</code> committed to version control</li>
</ul>
<p><strong>Observability</strong></p>
<ul>
<li> <code>error.tsx</code> exists for every major route segment</li>
<li> Error tracking integrated (Sentry or equivalent)</li>
<li> Key conversion flows instrumented (initiated, succeeded, failed)</li>
<li> Error boundaries surface a <code>digest</code> ID users can reference</li>
</ul>
<hr />
<p>The rebuilt app shipped two weeks after the audit. Bundle was 94KB. TTI on mobile was 2.3 seconds. Bounce rate among mobile users dropped from 71% to 34%.</p>
<p>None of those changes were algorithmic breakthroughs. They were defaults: Server Components where the server should be doing work, caching where caching made sense, TypeScript enforcing the shape of data at boundaries, images that didn't cause layout shift. The architectural patterns weren't clever. They were boring — in the way that production systems are supposed to be boring.</p>
<p>Boring architecture is a feature. The kind of scalable that survives a Product Hunt launch isn't heroic — it's correct by default.</p>
<p>If you're building a Next.js product and want to pressure-test your architecture or set up production-ready defaults from the start, <a href="https://shubhamjha.com/contact">reach out here</a>.</p>
]]></content:encoded></item><item><title><![CDATA[React Hooks, TypeScript 2026: Patterns That Actually Scale]]></title><description><![CDATA[The bug had been in production for three weeks before anyone noticed. A user's dashboard was showing stale data — not always, not on refresh, just sometimes, after a specific sequence of navigation. T]]></description><link>https://blog.shubhamjha.com/react-hooks-typescript-2026-patterns-that-actually-scale</link><guid isPermaLink="true">https://blog.shubhamjha.com/react-hooks-typescript-2026-patterns-that-actually-scale</guid><category><![CDATA[React]]></category><category><![CDATA[TypeScript]]></category><category><![CDATA[Next.js]]></category><category><![CDATA[performance]]></category><category><![CDATA[Frontend Development]]></category><dc:creator><![CDATA[Shubham Jha]]></dc:creator><pubDate>Tue, 14 Apr 2026 15:00:29 GMT</pubDate><enclosure url="https://cdn.hashnode.com/uploads/covers/69d1293c6792e486f6810f5c/2a562726-5642-4e3e-b804-ff1f2657e7dd.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>The bug had been in production for three weeks before anyone noticed. A user's dashboard was showing stale data — not always, not on refresh, just sometimes, after a specific sequence of navigation. Three of us spent a Friday afternoon tracing it. The root cause was four lines: server state duplicated into local state, a React hook that was supposed to keep them in sync, and a dependency array missing one key value.</p>
<p>The fix was deleting code. Thirty-two lines became four. The stale state problem went away because there was no longer a second copy of the data to go stale.</p>
<p>That incident reshaped how I think about React. Hooks especially — flexible enough to solve almost any problem, and flexible enough to create subtle, expensive bugs when you reach for them without knowing what they're actually for. This post is that mental model — what React hooks are actually for, and where almost every bug that involves them originates. Good React, I've come to think, is almost always subtractive.</p>
<blockquote>
<p>The best React I've written has always been the code I deleted.</p>
</blockquote>
<p>The patterns below are what that looks like in practice.</p>
<blockquote>
<p><strong>TL;DR</strong> — State should be categorised before a hook is written. <code>useEffect</code> is an escape hatch, not a default tool. TypeScript's value is making impossible states unrepresentable. Custom hooks are APIs, not just shared logic. The React Compiler handles most memoization now. Server Components have shrunk the legitimate use cases for client-side hooks.</p>
</blockquote>
<hr />
<h2>1. The 2026 Mental Model: React Is an Application Framework Now</h2>
<p>React has moved a long way from "UI library you drop into a page." It now ships with a server layer, a client layer, and a compiler that sits between your code and the DOM. That changes what hooks are actually for.</p>
<p>The question you ask before writing any component used to be "how do I manage this state?" Now it's: <strong>does this need to run on the client at all?</strong></p>
<p>In the Next.js App Router world, components are Server Components by default. They can fetch data directly, access databases, and render HTML with zero client JavaScript. You opt into client behaviour with <code>'use client'</code>. Every time you do, you're making a deliberate choice to ship JavaScript to the browser and accept the complexity that comes with it.</p>
<p>Hooks only exist in Client Components — a design constraint, not a limitation. They're for client-side behaviour: user interactions, browser APIs, local UI state, effects that genuinely need the browser. Not for data the server could fetch. Not for transformations that can happen at render time. If you're reaching for a hook, it should be because nothing else will do.</p>
<p>In practice, the stack looks like this:</p>
<ul>
<li><p><strong>Server Components</strong> handle data fetching and heavy computation</p>
</li>
<li><p><strong>TanStack Query v5</strong> handles client-side server state (caching, deduplication, background refresh)</p>
</li>
<li><p><strong>Zustand</strong> handles lightweight global client state (auth, theme, UI flags)</p>
</li>
<li><p><strong>React Hook Form</strong> <strong>+</strong> <strong>Zod</strong> handles forms and validation</p>
</li>
<li><p><strong>The React Compiler</strong> (stable since late 2025, designed around React 19) handles most memoization automatically</p>
</li>
<li><p><strong>Hooks</strong> handle everything that genuinely belongs in the browser: local UI state, DOM subscriptions, browser API integrations</p>
</li>
</ul>
<p>Every pattern in this post fits into one of these layers. If you want a broader view of how they fit into a production architecture, <a href="https://shubhamjha.com/blog/building-scalable-web-apps">this production React architecture guide goes deeper</a>.</p>
<hr />
<h2>2. Model Your State Before You Write a Hook</h2>
<p>Most hook-related bugs aren't hook bugs. They're state modelling bugs. The hook is just where they surface.</p>
<p>Before you write anything, categorise the state you need:</p>
<p><strong>UI state</strong> is local to a component or a small subtree: modal open/closed, tab selection, form input values, hover state. This lives in <code>useState</code> or <code>useReducer</code>. It doesn't need to be global, and it doesn't need to be persisted.</p>
<p><strong>Server state</strong> is data that lives on a server and is temporarily cached on the client: user profiles, product lists, feed items. It has its own lifecycle — loading, stale, revalidating — that's fundamentally different from local state. This belongs in TanStack Query, not <code>useState</code>. Every time you copy server state into local state, you're creating a second source of truth that will eventually drift.</p>
<p><strong>Shared app state</strong> is client-side state that needs to cross component boundaries: the current user session, the active theme, a cart. This lives in Zustand or Context. Keep it lean — global state is the hardest kind to trace.</p>
<p>The categorisation matters because each category has a different owner, a different lifecycle, and a different failure mode. Mixing them is where most React bugs come from: components showing inconsistent data, forms that reset unexpectedly, dashboards stuck on yesterday's totals.</p>
<pre><code class="language-tsx">// Wrong: server state duplicated into local state — will drift
const [user, setUser] = useState&lt;User | null&gt;(null);

useEffect(() =&gt; {
  fetchUser(userId).then(setUser);
}, [userId]);

// Correct: let TanStack Query own the lifecycle
const { data: user, isLoading } = useQuery({
  queryKey: ['user', userId],
  queryFn: () =&gt; fetchUser(userId),
});
</code></pre>
<p>The <code>useState</code> version gives you none of what server state needs: no caching, no deduplication, no background refresh, no cancellation on unmount. The <code>useQuery</code> version has all four for free, and it's the single source of truth across every component that queries the same key.</p>
<p>One rule worth keeping close: if data came from a network request, it's server state. Don't put it in <code>useState</code>.</p>
<p>Once your state is categorised correctly, the next question is effects. This is where most codebases quietly accumulate debt.</p>
<hr />
<h2>3. useEffect — The Most Misused React Hook in Production</h2>
<p>Here's an uncomfortable thing to say about a hook you use every day:</p>
<p><strong>Most</strong> <code>useEffect</code> <strong>calls in production codebases are wrong — not buggy on the surface, but wrong in category.</strong></p>
<p>They're solving problems React already solves elsewhere, using an API designed for something else entirely. The hook's flexibility is the problem. It makes every nail look like an effect.</p>
<img src="https://shubhamjha.com/images/blog/use-effect-flowchart.svg" alt="useEffect decision flowchart" style="display:block;margin:0 auto" />

<p>Here's a pattern you've almost certainly shipped:</p>
<pre><code class="language-tsx">const [userId, setUserId] = useState&lt;string | null&gt;(null);
const [userName, setUserName] = useState('');

useEffect(() =&gt; {
  if (userId) {
    setUserName(users[userId].name);
  }
}, [userId]);
</code></pre>
<p>It works. It probably passes code review. It's also wrong, and it'll cause you a bug.</p>
<p>The problem isn't the hook. It's using it to set state that could be derived directly during render. Every time <code>userId</code> changes, React has to: render, run the effect, set new state, and render again. Two renders, one stale intermediate state, and a footgun left for whoever touches this next. The fix is one line:</p>
<pre><code class="language-tsx">const userName = userId ? users[userId].name : '';
</code></pre>
<p>No effect. No second render.</p>
<p><code>useEffect</code> is for talking to the outside world. If what you're writing into one doesn't involve a subscription, a DOM mutation, a timer, a WebSocket, or an imperative third-party library, it probably doesn't belong there. React's own docs describe <code>useEffect</code> as an "escape hatch". That word choice isn't accidental. The legitimate use cases have genuinely shrunk.</p>
<p><strong>Three anti-patterns worth naming</strong></p>
<p><em>Fetching data in an effect.</em> The classic pattern — <code>useEffect(() =&gt; { fetch(...).then(setData) }, [])</code> — has five problems baked in: no loading state, no error handling, no cancellation on unmount, no deduplication on re-mount, and no caching. You'll eventually add all five, and what you've built, badly, is TanStack Query. Use the real thing instead.</p>
<p><em>Deriving state inside an effect.</em> If you find yourself writing <code>useEffect(() =&gt; { setSomething(deriveFrom(other)) }, [other])</code>, you've modelled state incorrectly. Compute the derived value during render. If the computation is expensive, reach for <code>useMemo</code> — but measure first, because the React Compiler handles most of this automatically in React 19.</p>
<p><em>Putting event-response logic in an effect.</em></p>
<pre><code class="language-tsx">// Wrong — asynchronous, hard to trace, depends on state sync
useEffect(() =&gt; {
  if (submitted) {
    trackAnalytics('form_submit');
    resetForm();
  }
}, [submitted]);

// Correct — synchronous, explicit, readable
function handleSubmit() {
  trackAnalytics('form_submit');
  resetForm();
  setSubmitted(true);
}
</code></pre>
<p>Some cases where <code>useEffect</code> genuinely belongs: <code>addEventListener</code> / <code>removeEventListener</code>, <code>ResizeObserver</code>, <code>IntersectionObserver</code>, WebSocket connections, imperative third-party libraries (charts, maps, rich text editors), <code>setInterval</code> with paired cleanup, <code>matchMedia</code>, geolocation watch. What they all have in common is a setup step and a teardown step.</p>
<p>Cleanup isn't optional. If your effect registers anything, it has to deregister it.</p>
<pre><code class="language-tsx">useEffect(() =&gt; {
  const ws = new WebSocket('wss://api.example.com/feed');
  ws.onmessage = (e) =&gt; setFeed(JSON.parse(e.data));

  return () =&gt; ws.close(); // always
}, []);
</code></pre>
<p>An effect without cleanup is a memory leak in development (StrictMode surfaces it immediately by double-invoking effects) and a silent bug in production.</p>
<p>Before writing any effect, ask: does it need cleanup? Can it run twice without harm? Are all dependencies listed? If you're omitting a dependency because "it won't change," that belief belongs in a <code>useRef</code> — not silently missing from the array. Enable <code>exhaustive-deps</code> in ESLint and treat its warnings as errors.</p>
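<p>With flat-config ESLint that looks like this (assumes <code>eslint-plugin-react-hooks</code> is installed; the wiring is a sketch, adapt it to your existing config):</p>
<pre><code class="language-js">// eslint.config.js: promote the hooks rules from warnings to errors
import reactHooks from 'eslint-plugin-react-hooks';

export default [
  {
    plugins: { 'react-hooks': reactHooks },
    rules: {
      'react-hooks/rules-of-hooks': 'error',
      'react-hooks/exhaustive-deps': 'error', // plugin default is 'warn'
    },
  },
];
</code></pre>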
<p>Getting effects right closes one hole. Impossible states that TypeScript lets you write by default need a different fix.</p>
<hr />
<h2>4. TypeScript Patterns That Actually Scale</h2>
<p>Most production React codebases have TypeScript configured. What's surprising is how few use it beyond the surface — interfaces on props, a typed <code>useState</code>, and then <code>any</code> everywhere things get hard. That's not type safety. It's a typed façade over untyped logic.</p>
<p>The patterns below catch real bugs before production and make your hooks readable six months later — including to you.</p>
<p><strong>Drop</strong> <code>React.FC</code> <strong>and the old import</strong></p>
<p>Since React 17, <code>import React from 'react'</code> is no longer needed. More importantly, <code>React.FC</code> is an anti-pattern — it implicitly adds <code>children</code> to every component, hides your return type, and makes generic components awkward. Use explicit function signatures:</p>
<pre><code class="language-tsx">// Old — avoid
const Button: React.FC&lt;ButtonProps&gt; = ({ label, onClick }) =&gt; { ... };

// Modern — prefer
interface ButtonProps {
  label: string;
  onClick: () =&gt; void;
  variant?: 'primary' | 'ghost';
}

function Button({ label, onClick, variant = 'primary' }: ButtonProps) {
  return &lt;button className={variant} onClick={onClick}&gt;{label}&lt;/button&gt;;
}
</code></pre>
<p><strong>Discriminated unions for async state</strong></p>
<p>In six years of React, this one change has caught more production bugs than anything else on this list. Almost every component has some version of this:</p>
<pre><code class="language-tsx">// The problem — three booleans that can contradict each other
const [data, setData] = useState&lt;User | null&gt;(null);
const [isLoading, setIsLoading] = useState(false);
const [error, setError] = useState&lt;string | null&gt;(null);
</code></pre>
<p>Three independent values give you eight combinations. Only four are valid: idle, loading, success, error. You're shipping the other four as bugs.</p>
<p>Instead, model the states as what they actually are:</p>
<pre><code class="language-tsx">type AsyncState&lt;T&gt; =
  | { status: 'idle' }
  | { status: 'loading' }
  | { status: 'success'; data: T }
  | { status: 'error'; error: string };

function UserProfile() {
  const [state, setState] = useState&lt;AsyncState&lt;User&gt;&gt;({ status: 'idle' });

  if (state.status === 'loading') return &lt;Spinner /&gt;;
  if (state.status === 'error') return &lt;ErrorMessage message={state.error} /&gt;;
  if (state.status === 'success') return &lt;Profile user={state.data} /&gt;;
  return null;
}
</code></pre>
<p>TypeScript now knows <code>state.data</code> only exists when <code>status === 'success'</code>. Accessing it in the loading branch is a compile error, not a runtime crash. That's what the type system is for.</p>
<p><strong>Type your hooks explicitly — especially tuple returns</strong></p>
<p>TypeScript infers <code>(string | Dispatch&lt;...&gt;)[]</code> from a hook that returns <code>[value, setter]</code> — a union array where both elements appear to have the same type to callers. Fix it with an explicit return type:</p>
<pre><code class="language-tsx">// Inferred incorrectly — callers lose type safety on destructuring
function useUsername() {
  const [name, setName] = useState('');
  return [name, setName]; // ❌
}

// Explicit tuple — correct types flow through
function useUsername(): [string, (name: string) =&gt; void] {
  const [name, setName] = useState('');
  return [name, setName]; // ✅
}
</code></pre>
<p>For hooks that wrap behaviour across any data shape, generics carry the type through:</p>
<pre><code class="language-tsx">type FetchState&lt;T&gt; =
  | { status: 'loading' }
  | { status: 'success'; data: T }
  | { status: 'error'; error: string };

// Note: this illustrates typed generics — for production data fetching, use TanStack Query instead
function useFetch&lt;T&gt;(url: string): FetchState&lt;T&gt; {
  const [state, setState] = useState&lt;FetchState&lt;T&gt;&gt;({ status: 'loading' });

  useEffect(() =&gt; {
    let cancelled = false;
    fetch(url)
      .then(res =&gt; { if (!res.ok) throw new Error(`HTTP ${res.status}`); return res.json() as Promise&lt;T&gt;; })
      .then(data =&gt; { if (!cancelled) setState({ status: 'success', data }); })
      .catch(err =&gt; { if (!cancelled) setState({ status: 'error', error: err.message }); });
    return () =&gt; { cancelled = true; };
  }, [url]);

  return state;
}

// Fully typed at the call site
const state = useFetch&lt;User&gt;('/api/user/42');
if (state.status === 'success') console.log(state.data.name); // ✅
</code></pre>
<p><code>unknown</code> <strong>over</strong> <code>any</code> <strong>in error handling</strong></p>
<p><code>any</code> turns off the type checker. <code>unknown</code> keeps it on and forces you to validate before use — which is what you actually want when catching errors:</p>
<pre><code class="language-tsx">// any — runtime crash if err is a string, not an Error object
catch (err: any) { setError(err.message); }

// unknown — TypeScript forces the guard
catch (err: unknown) {
  if (err instanceof Error) setError(err.message);
  else setError('An unexpected error occurred');
}
</code></pre>
<p>Two extra lines that prevent real bugs — the kind that only show up in edge cases and produce error screens you can't reproduce locally.</p>
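<p>That guard is worth extracting once per codebase. A small helper along these lines (the name is a common convention, not a library API) normalises anything a <code>catch</code> block can receive:</p>
<pre><code class="language-ts">function getErrorMessage(err: unknown): string {
  if (err instanceof Error) return err.message;
  if (typeof err === 'string') return err;
  try {
    const serialized = JSON.stringify(err);
    // JSON.stringify(undefined) returns undefined, not a string
    return serialized ?? 'An unexpected error occurred';
  } catch {
    // Circular structures and exotic objects end up here
    return 'An unexpected error occurred';
  }
}
</code></pre>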
<p><code>useReducer</code> <strong>with discriminated actions for complex state machines</strong></p>
<p>Once a component has more than three related state values and more than two update paths, <code>useState</code> starts fighting you. <code>useReducer</code> with typed actions makes every valid transition explicit:</p>
<pre><code class="language-tsx">type FormAction =
  | { type: 'FIELD_CHANGE'; field: 'email' | 'password'; value: string }
  | { type: 'SUBMIT' }
  | { type: 'SUBMIT_SUCCESS' }
  | { type: 'SUBMIT_ERROR'; message: string }
  | { type: 'RESET' };

function formReducer(state: FormState, action: FormAction): FormState {
  switch (action.type) {
    case 'FIELD_CHANGE': return { ...state, values: { ...state.values, [action.field]: action.value } };
    case 'SUBMIT':       return { ...state, status: 'submitting', errorMessage: null };
    case 'SUBMIT_SUCCESS': return { ...state, status: 'success' };
    case 'SUBMIT_ERROR': return { ...state, status: 'error', errorMessage: action.message };
    case 'RESET':        return initialState;
  }
}
</code></pre>
<p>Every possible transition is in one place. Testing the reducer is pure function testing — no component mounting required. TypeScript checks every branch of the switch, so if you add a new action type and forget the case, the compiler tells you before the user does.</p>
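<p>One refinement: the compiler only flags a missing <code>case</code> because the reducer's return type excludes <code>undefined</code>. An explicit exhaustiveness check makes the guarantee unconditional. The <code>assertNever</code> helper below is a widespread TypeScript convention, not part of React:</p>
<pre><code class="language-ts">function assertNever(value: never): never {
  throw new Error(`Unhandled action: ${JSON.stringify(value)}`);
}

// In the reducer, after all known cases:
//   default: return assertNever(action);
// Add a new FormAction variant and the reducer fails to compile
// until the corresponding case exists.
</code></pre>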
<p>All of these patterns share a goal: use the type system to make impossible states unrepresentable — not documenting what can go wrong, but making wrong combinations inexpressible before they get written.</p>
<hr />
<h2>5. Custom Hooks as Production Primitives</h2>
<p>Every good React codebase I've worked in has had the same folder — small, typed custom hooks that nobody had to think about. You just reach for them. They're boring on purpose. Predictable signatures, no hidden behaviour, sensible defaults.</p>
<p>Think of custom hooks as internal APIs — they have a contract. Same inputs, same outputs, every time. If a hook behaves differently depending on where it's called or when its arguments change, it's not an API — it's a trap.</p>
<p><strong>Four hooks every production codebase needs</strong></p>
<p><code>useDebouncedValue</code> prevents search inputs and filter controls from hammering the server on every keystroke:</p>
<pre><code class="language-tsx">function useDebouncedValue&lt;T&gt;(value: T, delay: number): T {
  const [debounced, setDebounced] = useState&lt;T&gt;(value);

  useEffect(() =&gt; {
    const timer = setTimeout(() =&gt; setDebounced(value), delay);
    return () =&gt; clearTimeout(timer);
  }, [value, delay]);

  return debounced;
}

// Usage
const debouncedQuery = useDebouncedValue(searchQuery, 300);
// Pass debouncedQuery to your API call, not searchQuery
</code></pre>
<p><code>useDisclosure</code> manages the open/closed state of modals, drawers, and popovers without repeating the same <code>useState(false)</code> pattern across dozens of components:</p>
<pre><code class="language-tsx">interface UseDisclosureReturn {
  isOpen: boolean;
  open: () =&gt; void;
  close: () =&gt; void;
  toggle: () =&gt; void;
}

function useDisclosure(initial = false): UseDisclosureReturn {
  const [isOpen, setIsOpen] = useState(initial);
  return {
    isOpen,
    open: useCallback(() =&gt; setIsOpen(true), []),
    close: useCallback(() =&gt; setIsOpen(false), []),
    toggle: useCallback(() =&gt; setIsOpen(prev =&gt; !prev), []),
  };
}
</code></pre>
<p><code>useLocalStorage</code> syncs React state with <code>localStorage</code>, with SSR safety built in. The naive implementation crashes on the server because <code>localStorage</code> doesn't exist there:</p>
<pre><code class="language-tsx">function useLocalStorage&lt;T&gt;(key: string, initialValue: T): [T, (value: T) =&gt; void] {
  const [stored, setStored] = useState&lt;T&gt;(() =&gt; {
    if (typeof window === 'undefined') return initialValue; // SSR guard
    try {
      const item = window.localStorage.getItem(key);
      return item ? (JSON.parse(item) as T) : initialValue;
    } catch {
      return initialValue;
    }
  });

  const setValue = useCallback((value: T) =&gt; {
    try {
      setStored(value);
      if (typeof window !== 'undefined') {
        window.localStorage.setItem(key, JSON.stringify(value));
      }
    } catch (err) {
      console.warn(`useLocalStorage: failed to write key "${key}"`, err);
    }
  }, [key]);

  return [stored, setValue];
}
</code></pre>
<p><code>usePrevious</code> gives you the last render's value — useful for animations, comparison logic, and knowing whether a value went up or down:</p>
<pre><code class="language-tsx">function usePrevious&lt;T&gt;(value: T): T | undefined {
  const ref = useRef&lt;T | undefined&gt;(undefined); // React 19 types require an initial value
  useEffect(() =&gt; { ref.current = value; });
  return ref.current;
}
</code></pre>
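<p>Typical use is comparing this render's value against the last one. A sketch, assuming a <code>price</code> prop that updates live (the component and class names are illustrative):</p>
<pre><code class="language-tsx">function PriceTicker({ price }: { price: number }) {
  const prevPrice = usePrevious(price);
  const direction =
    prevPrice === undefined || price === prevPrice ? 'flat'
      : price &gt; prevPrice ? 'up'
      : 'down';

  return &lt;span className={`ticker ${direction}`}&gt;{price.toFixed(2)}&lt;/span&gt;;
}
</code></pre>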
<p>These four cover the majority of cases. Knowing when <em>not</em> to reach for a custom hook matters just as much.</p>
<p><strong>When not to extract a hook</strong></p>
<p>Extraction adds indirection. A reader now has to jump to another file to understand what a component does. That trade-off is worth it when the logic is genuinely reused in three or more places, or when isolating it makes testing meaningfully easier.</p>
<p>If logic only lives in one component and is easy to follow inline, leave it inline. Custom hooks aren't a tidying mechanism — they're an API boundary. Treat them like one.</p>
<p><strong>When to reach for a library instead</strong></p>
<p><code>usehooks-ts</code> and <code>react-use</code> cover most common primitives with strong TypeScript support, SSR safety, and active maintenance. Before writing <code>useMediaQuery</code>, <code>useEventListener</code>, <code>useOnClickOutside</code>, or <code>useIntersectionObserver</code> from scratch, check if the library already has a well-tested version. Rolling your own is only worth it when you need behaviour the library doesn't support, or when its API doesn't match your team's conventions.</p>
<p>A good hook library means one less thing to write. That's worth something before you even get to profiling.</p>
<hr />
<h2>6. Performance: Measure First, Optimize Second</h2>
<p>The most impactful performance change of the React 19 era is the React Compiler, and its biggest benefit is the code you no longer have to write. It automatically memoizes components, computed values, and callbacks: work you used to do by hand with <code>useMemo</code>, <code>useCallback</code>, and <code>React.memo</code>.</p>
<p>The APIs aren't dead. The bar for reaching for them manually is just a lot higher now.</p>
<p><strong>When</strong> <code>useMemo</code> <strong>and</strong> <code>useCallback</code> <strong>still matter</strong></p>
<p>The React Compiler handles most cases. The ones it doesn't handle cleanly are:</p>
<ul>
<li><p>Callbacks passed as props to third-party components that do their own reference equality checks internally</p>
</li>
<li><p>Values passed into virtualized list libraries (<code>react-window</code>, <code>react-virtual</code>) where reference stability directly controls whether rows re-render</p>
</li>
<li><p>Expensive computations inside components the Compiler hasn't yet optimised (React DevTools badges compiler-optimised components with a "Memo ✨" marker, so you can see what's covered)</p>
</li>
</ul>
<p>Outside those cases, reach for profiling data before reaching for <code>useMemo</code>.</p>
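<p>When one of those cases applies, the memoization is about reference identity, not computation cost. A sketch against <code>react-window</code> v1's <code>FixedSizeList</code> API (<code>Order</code>, <code>Row</code>, and <code>onSelect</code> are placeholders assumed to be defined elsewhere):</p>
<pre><code class="language-tsx">import { useMemo } from 'react';
import { FixedSizeList } from 'react-window';

function OrderList({ orders, onSelect }: { orders: Order[]; onSelect: (id: string) =&gt; void }) {
  // Without useMemo this object is recreated every render,
  // so memoized rows could never bail out of re-rendering
  const itemData = useMemo(() =&gt; ({ orders, onSelect }), [orders, onSelect]);

  return (
    &lt;FixedSizeList height={480} width="100%" itemCount={orders.length} itemSize={48} itemData={itemData}&gt;
      {Row}
    &lt;/FixedSizeList&gt;
  );
}
</code></pre>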
<p><strong>Component splitting: the optimization nobody reaches for first</strong></p>
<p>Before touching any memoization API, look at your component tree. A large component that renders an expensive subtree will re-render the entire subtree whenever any piece of its state changes — even if that state has nothing to do with the expensive part.</p>
<p>The fix is splitting:</p>
<pre><code class="language-tsx">// Before: SearchBar state change re-renders the entire page including ExpensiveList
function SearchPage() {
  const [query, setQuery] = useState('');
  return (
    &lt;div&gt;
      &lt;SearchBar query={query} onChange={setQuery} /&gt;
      &lt;ExpensiveList /&gt; {/* Re-renders on every keystroke */}
    &lt;/div&gt;
  );
}

// After: SearchBar manages its own state, ExpensiveList is isolated
function SearchPage() {
  return (
    &lt;div&gt;
      &lt;SearchBar /&gt;   {/* State lives here */}
      &lt;ExpensiveList /&gt; {/* Never re-renders due to search */}
    &lt;/div&gt;
  );
}
</code></pre>
<p>No memoization APIs at all. Just a smaller state scope.</p>
<p>It delivers the biggest wins in real codebases, and it's the last thing engineers try.</p>
<p><strong>Diagnosing before optimising</strong></p>
<p>Open React DevTools Profiler, record a slow interaction, and look at which components are highlighted. The components that re-render most often and take the most time are your actual bottleneck — not the ones you assume are slow.</p>
<p>The user-facing metric that matters most in 2026 is INP (Interaction to Next Paint), which replaced FID in Core Web Vitals in March 2024. Google's Web Vitals specification classifies INP below 200ms as good, 200–500ms as needing improvement, and above 500ms as poor. Most React SPAs with unoptimised re-renders sit in the middle band — technically functional, but sluggish on mid-range hardware. The usual culprits are slow <code>onClick</code> handlers, expensive renders triggered by user input, and long tasks blocking the main thread. For a deeper breakdown of INP and how Next.js affects your scores, <a href="https://shubhamjha.com/blog/core-web-vitals-nextjs-optimization">this INP optimization guide for Next.js covers it in full</a>.</p>
<p><strong>Lazy loading with Suspense</strong></p>
<p>For components that are large, rarely needed, or only relevant on certain routes, lazy loading keeps your initial bundle small:</p>
<pre><code class="language-tsx">import { lazy, Suspense } from 'react';

const HeavyDashboard = lazy(() =&gt; import('./HeavyDashboard'));

function App() {
  return (
    &lt;Suspense fallback={&lt;LoadingScreen /&gt;}&gt;
      &lt;HeavyDashboard /&gt;
    &lt;/Suspense&gt;
  );
}
</code></pre>
<p>Lazy loading is worth doing before you have profiling data — bundle size directly affects Time to Interactive on initial load, and the cost of <code>lazy()</code> is near zero.</p>
<p>For computation-heavy logic that genuinely needs to move off the JavaScript thread, <a href="https://shubhamjha.com/blog/rust-webassembly-typescript-guide">this practical Rust and WebAssembly with TypeScript guide</a> covers when and how to reach for WebAssembly as the next tier of performance optimization.</p>
<hr />
<h2>7. Hooks in the Next.js App Router</h2>
<p>The App Router does one thing to your mental model for hooks: it shrinks it. Most of your application simply doesn't need them. If you're still getting familiar with the App Router itself, <a href="https://shubhamjha.com/blog/how-to-master-react-nextjs">this Next.js App Router fundamentals guide</a> is a good foundation before going deeper here.</p>
<p>Server Components are the default. They have direct access to databases, file systems, and APIs, and they ship zero JavaScript by default. Adding <code>'use client'</code> opts you into a client-side bundle, a hydration cost, and all the state-management complexity that comes with it. The question to ask before adding it isn't "does this component have state?" — it's "does this interaction genuinely need the browser?"</p>
<p>Keep <code>'use client'</code> boundaries as narrow as possible. A page-level directive forces the entire page — and everything it imports — into the client bundle. A component-level directive on just the interactive part keeps the rest of the page server-rendered.</p>
<pre><code class="language-tsx">// app/product/[id]/page.tsx — Server Component, no client JS
export default async function ProductPage({ params }: { params: { id: string } }) {
  const product = await getProduct(params.id); // direct DB call
  return (
    &lt;div&gt;
      &lt;ProductDetails product={product} /&gt;  {/* Server Component */}
      &lt;AddToCartButton productId={product.id} /&gt;  {/* Client Component — isolated */}
    &lt;/div&gt;
  );
}
</code></pre>
<p><strong>Navigation hooks</strong></p>
<p>In the App Router, <code>useRouter</code>, <code>usePathname</code>, and <code>useSearchParams</code> from <code>next/navigation</code> replace the old <code>next/router</code> imports. All three require <code>'use client'</code>:</p>
<pre><code class="language-tsx">'use client';

import { useRouter, usePathname, useSearchParams } from 'next/navigation';

function FilterControls() {
  const router = useRouter();
  const pathname = usePathname();
  const searchParams = useSearchParams();

  function updateFilter(key: string, value: string) {
    const params = new URLSearchParams(searchParams.toString());
    params.set(key, value);
    router.push(`${pathname}?${params.toString()}`);
  }
}
</code></pre>
<p><strong>React 19's</strong> <code>use()</code> <strong>hook</strong></p>
<p><code>use()</code> is unlike any previous hook — it can be called conditionally, inside loops, inside <code>if</code> statements. It reads a Promise or Context value and integrates with Suspense for loading states:</p>
<pre><code class="language-tsx">'use client';
import { use, Suspense } from 'react';

function UserName({ userPromise }: { userPromise: Promise&lt;User&gt; }) {
  const user = use(userPromise); // suspends until resolved
  return &lt;span&gt;{user.name}&lt;/span&gt;;
}

// Wrap in Suspense in the parent
&lt;Suspense fallback={&lt;Skeleton /&gt;}&gt;
  &lt;UserName userPromise={fetchUser(id)} /&gt;
&lt;/Suspense&gt;
</code></pre>
<p><code>use()</code> reads a Promise. It doesn't fetch anything or manage caching — the Promise has to already exist. For client-initiated fetching, you still need TanStack Query or similar.</p>
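<p>The "Promise has to already exist" point is easy to demonstrate: every call to a fetcher creates a fresh Promise, so creating one inline during render would hand <code>use()</code> a new Promise on each pass. A sketch with a hypothetical <code>fetchUser</code>:</p>
<pre><code class="language-typescript">// Hypothetical fetcher: each call returns a NEW promise for the same data.
const fetchUser = async (id: number) =&gt; ({ id, name: `user-${id}` });

const p1 = fetchUser(1);
const p2 = fetchUser(1);

console.log(p1 === p2); // false: two distinct promises for identical data
</code></pre>
<p>This is the caching problem TanStack Query (or a framework-level cache) solves: it hands back the same in-flight promise, and later the cached result, for the same key.</p>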
<p><strong>Form handling with</strong> <code>useActionState</code> <strong>and</strong> <code>useFormStatus</code></p>
<p>React 19 ships two hooks that cut most of the form boilerplate you're used to writing:</p>
<pre><code class="language-tsx">'use client';
import { useActionState } from 'react';
import { useFormStatus } from 'react-dom';

function SubmitButton() {
  const { pending } = useFormStatus();
  return &lt;button type="submit" disabled={pending}&gt;{pending ? 'Saving...' : 'Save'}&lt;/button&gt;;
}

function ProfileForm() {
  const [state, formAction] = useActionState(updateProfileAction, { error: null });

  return (
    &lt;form action={formAction}&gt;
      &lt;input name="name" /&gt;
      {state.error &amp;&amp; &lt;p&gt;{state.error}&lt;/p&gt;}
      &lt;SubmitButton /&gt;
    &lt;/form&gt;
  );
}
</code></pre>
<p><code>useActionState</code> wraps a Server Action and gives you the pending state and last result. <code>useFormStatus</code> reads the status of the nearest parent <code>&lt;form&gt;</code> — the <code>SubmitButton</code> above doesn't need props passed down to it at all.</p>
<p><strong>The anti-pattern:</strong> fetching in a Client Component when a Server Component could do it. If you find yourself writing <code>useEffect(() =&gt; { fetch('/api/products').then(...) }, [])</code> in a component that isn't interactive, that component probably shouldn't be a Client Component at all.</p>
<hr />
<h2>8. The Production Checklist</h2>
<p>Before shipping, run through this list. It encodes every mistake described above.</p>
<p><strong>State design</strong></p>
<ul>
<li><p>[ ] State is categorized as UI, server, or shared before any hook is written</p>
</li>
<li><p>[ ] Server state lives in TanStack Query, not <code>useState</code></p>
</li>
<li><p>[ ] No piece of server state is duplicated into local state</p>
</li>
<li><p>[ ] Async state uses a discriminated union (<code>idle | loading | success | error</code>), not three separate booleans</p>
</li>
</ul>
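<p>The discriminated-union item is quick to sketch. One <code>status</code> field makes contradictory combinations (loading and error at once) unrepresentable, and TypeScript narrows the payload for you inside each branch. The <code>AsyncState</code> name and <code>describeState</code> helper here are illustrative, not from any library:</p>
<pre><code class="language-typescript">// Illustrative union: exactly one state at a time, payload only where valid.
type AsyncState&lt;T&gt; =
  | { status: 'idle' }
  | { status: 'loading' }
  | { status: 'success'; data: T }
  | { status: 'error'; error: string };

function describeState(state: AsyncState&lt;string[]&gt;): string {
  switch (state.status) {
    case 'idle':    return 'not started';
    case 'loading': return 'loading';
    case 'success': return `loaded ${state.data.length} items`; // data narrowed here
    case 'error':   return `failed: ${state.error}`;            // error narrowed here
  }
}

console.log(describeState({ status: 'success', data: ['a', 'b'] })); // "loaded 2 items"
</code></pre>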
<p><strong>Effects</strong></p>
<ul>
<li><p>[ ] No <code>useEffect</code> used for derived state — computed during render instead</p>
</li>
<li><p>[ ] No <code>useEffect</code> used for data fetching — TanStack Query or Server Components used instead</p>
</li>
<li><p>[ ] No <code>useEffect</code> used for event-response logic — moved to handlers instead</p>
</li>
<li><p>[ ] Every effect that registers something has a cleanup function that deregisters it</p>
</li>
<li><p>[ ] <code>exhaustive-deps</code> ESLint rule is enabled; all warnings resolved</p>
</li>
</ul>
<p><strong>TypeScript</strong></p>
<ul>
<li><p>[ ] No <code>any</code> in hook signatures or return types</p>
</li>
<li><p>[ ] Async state modelled as a discriminated union, not multiple booleans</p>
</li>
<li><p>[ ] Hook tuple returns have explicit type annotations</p>
</li>
<li><p>[ ] Error handling uses <code>unknown</code>, not <code>any</code></p>
</li>
<li><p>[ ] <code>useReducer</code> used for components with more than three related state values</p>
</li>
</ul>
<p><strong>Custom hooks</strong></p>
<ul>
<li><p>[ ] Hook has a stable, documented API (inputs, outputs, side effects)</p>
</li>
<li><p>[ ] SSR-safe: no direct <code>window</code> or <code>localStorage</code> access without <code>typeof window !== 'undefined'</code> guard</p>
</li>
<li><p>[ ] Extracted only when used in 3+ places OR when complexity genuinely warrants isolation</p>
</li>
</ul>
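<p>The SSR-safety item comes down to one guard: on the server there is no <code>window</code>, so any browser read needs a deterministic fallback for the first render. A minimal sketch — <code>readViewportWidth</code> is hypothetical, and the <code>globalThis</code> access just keeps the sketch runnable outside a browser; a real hook would pair this with <code>useSyncExternalStore</code> or a resize listener:</p>
<pre><code class="language-typescript">// Minimal SSR-safe read: return a fallback when no `window` exists
// (server render), the live value otherwise (browser).
function readViewportWidth(fallback = 1024): number {
  const w = (globalThis as { window?: { innerWidth: number } }).window;
  if (typeof w === 'undefined') return fallback; // server path
  return w.innerWidth;                           // browser path
}

console.log(readViewportWidth()); // 1024 when run outside a browser
</code></pre>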
<p><strong>Performance</strong></p>
<ul>
<li><p>[ ] Slow interactions profiled with React DevTools Profiler before any optimization</p>
</li>
<li><p>[ ] Component splitting evaluated before <code>useMemo</code> or <code>useCallback</code></p>
</li>
<li><p>[ ] Heavy components lazy-loaded with <code>lazy()</code> + <code>Suspense</code></p>
</li>
<li><p>[ ] <code>useMemo</code>/<code>useCallback</code> used intentionally, not defensively (let the Compiler handle the rest)</p>
</li>
</ul>
<p><strong>Next.js App Router</strong></p>
<ul>
<li><p>[ ] <code>'use client'</code> boundary is as narrow as possible — component-level, not page-level</p>
</li>
<li><p>[ ] Data fetching happens in Server Components where possible</p>
</li>
<li><p>[ ] Interactive islands are isolated Client Components</p>
</li>
<li><p>[ ] Navigation uses <code>next/navigation</code>, not <code>next/router</code></p>
</li>
</ul>
<hr />
<p>Three weeks after that Friday debugging session, I pulled up the component again. The thirty-two lines were four. The effect was gone. The stale data bug hadn't reappeared once — it disappeared from our error tracking on the next deploy and never came back. The component went from re-rendering on every navigation event to rendering exactly once. The code was obviously correct in a way it hadn't been before — not because it was more complex, but because there was less of it to reason about.</p>
<p>That's the pattern. The improvements are almost always subtractive — less code, fewer places for a value to live, fewer things that can drift. The type system's job isn't to annotate your existing complexity — it's to collapse it by making invalid states unreachable.</p>
<p>If you're building up to these patterns from the foundations, <a href="https://shubhamjha.com/blog/learn-javascript-html-css-react-beginners">this JavaScript, HTML, CSS and React guide for 2026</a> covers the groundwork they rest on.</p>
<p>If you're building a React or Next.js product and want to pressure-test your architecture or bring in production-ready patterns, you can <a href="https://shubhamjha.com/contact">reach out here</a>.</p>
]]></content:encoded></item><item><title><![CDATA[How to Master React and Next.js in 2026]]></title><description><![CDATA[The Shift You Cannot Ignore
The defining change of 2026 isn't a new API or a clever hook. React has moved from client-first to server-native. The React Compiler now handles what developers once manage]]></description><link>https://blog.shubhamjha.com/how-to-master-react-and-next-js-in-2026</link><guid isPermaLink="true">https://blog.shubhamjha.com/how-to-master-react-and-next-js-in-2026</guid><category><![CDATA[React]]></category><category><![CDATA[Next.js]]></category><category><![CDATA[Frontend Development]]></category><category><![CDATA[learning]]></category><category><![CDATA[Web Development]]></category><dc:creator><![CDATA[Shubham Jha]]></dc:creator><pubDate>Tue, 07 Apr 2026 05:00:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/uploads/covers/69d1293c6792e486f6810f5c/744af3ef-b3aa-4992-959d-9a1efa0b61e5.webp" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2>The Shift You Cannot Ignore</h2>
<p>The defining change of 2026 isn't a new API or a clever hook. React has moved from <strong>client-first</strong> to <strong>server-native</strong>. The React Compiler now handles what developers once managed manually. The boundary between frontend and cloud infrastructure has dissolved. AI tooling is table stakes - not a nice-to-have.</p>
<p>This changes how learning works. A course that walks you through <code>useState</code> and <code>useEffect</code> isn't a foundation - it's technical debt from day one. The resources below are selected for one reason: do they build engineers who can <em>architect</em> systems, or just developers who follow tutorials?</p>
<p>I've worked through most of what's on this list — some of it while building production systems, some while trying to close gaps I only noticed when something broke in a way I couldn't explain. The React Compiler handles your performance optimizations now. AI tooling generates your boilerplate. What neither produces is judgment - the calls around what to build, what to cut, and what to push back on entirely. That's the gap these resources are selected to close.</p>
<p>They're arranged as a sequence. Start where you are, move down when something stops being challenging.</p>
<hr />
<h2>01. <a href="https://www.freecodecamp.org/learn/front-end-development-libraries-v9/">freeCodeCamp Front End Dev Libraries</a>: The Honest Starting Point</h2>
<p>Most roadmaps skip this one because it's free and therefore assumed to be lightweight. It isn't. The curriculum is structurally complete in a way most paid courses aren't - theory is never left unapplied, and each concept module is followed immediately by something you build.</p>
<p>The sequencing is what sets it apart. It moves from <strong>React Fundamentals</strong> through <strong>state management</strong>, <strong>routing</strong>, <strong>performance</strong>, <strong>testing</strong>, and <strong>CSS frameworks</strong> in an order that mirrors how real codebases are actually structured. <strong>TypeScript</strong> comes last - not as an afterthought, but because that's where it makes sense: once you have enough context to understand why it exists rather than just memorizing syntax.</p>
<p>At the end sits a <strong>certification exam</strong> that requires demonstrated competency, not just completion. And it costs nothing.</p>
<p><strong>Who this is for:</strong> Developers new to the ecosystem, or anyone whose foundation came from tutorials that skipped the unglamorous parts. If any module here surprises you, that's your gap.</p>
<hr />
<h2>02 - <a href="https://nextjs.org/learn">Next.js Learn</a>: Canonical Architecture from the Source</h2>
<p>The official Next.js docs are no longer a reference manual. By 2026 they've become a project-based curriculum built around <strong>React Server Components</strong> and <strong>Partial Pre-rendering</strong> - and it's the only resource that reflects actual Vercel architectural intent rather than a third party's reading of it.</p>
<p>The flagship tutorial has moved beyond the <strong>Dashboard starter</strong>. It now covers AI-First streaming patterns - how to pipe data directly to the UI while managing server-side logic without a traditional API layer. You get the constraints, the reasoning, and the defaults from the people who designed the framework. That's harder to find than it sounds.</p>
<p><strong>Who this is for:</strong> Anyone whose Next.js intuition formed before the <strong>App Router</strong> stabilized. The official docs have changed more in the past two years than most third-party courses have - even experienced developers should check back in.</p>
<hr />
<h2>03 - <a href="https://joyofreact.com/">The Joy of React</a>: Building Mental Models That Last</h2>
<p>Syntax changes. Mental models don't. <strong>Josh W. Comeau's</strong> <em>The Joy of React</em> is built on this premise. Where most courses show you <em>what</em> to write, Josh shows you <em>why</em> it behaves the way it does - through custom visualizers that render the React Fiber tree in motion as state changes propagate.</p>
<p>For 2026, the course covers the React Compiler directly - not just what it does, but how to write code the compiler can actually reason about. The <code>useMemo</code> and <code>useCallback</code> instincts you built up over years of client-side React aren't just unnecessary now. Applied incorrectly, they fight the compiler. Joy of React works those instincts out of you.</p>
<p>If you can only invest in one paid resource this year, I'd pick this one. Architectural knowledge compounds on a solid mental model. Without one, everything else - Epic Web, Frontend Masters, the official docs - becomes memorization.</p>
<p><strong>Who this is for:</strong> Developers who can ship features but feel like they're guessing rather than reasoning. Especially useful if RSC still feels like something you work around rather than reach for.</p>
<hr />
<h2>04 - <a href="https://www.epicweb.dev/full-stack">Epic Web</a>: The Full-Stack Professional Curriculum</h2>
<p>Most courses treat the browser as the whole world. <strong>Kent C. Dodds'</strong> Epic Web treats it as one node in a larger system - and that difference in scope is what separates developers who implement features from engineers who can own a production system when things go wrong.</p>
<p>By 2026, the curriculum covers <strong>SQLite</strong> at the Edge, <strong>Passkey authentication</strong>, end-to-end type safety with modern TypeScript, <strong>race condition handling</strong>, optimistic UI, and <strong>cache invalidation</strong> at scale. The project-based structure is unforgiving in the right way: production-grade decisions at every step, no deferring the hard parts.</p>
<p>The gap Epic Web closes isn't syntactic - it's dispositional. Knowing how to handle errors, write tests, and reason about deployment is what makes the difference between a developer who needs supervision and one who can be trusted with a codebase that generates revenue. That transition doesn't come from building more side projects.</p>
<p><strong>Who this is for:</strong> Mid-level developers ready to move into senior or lead roles. If you've never owned a production incident, debugged a race condition under load, or made a caching decision that affected real users - this is the curriculum that closes that gap.</p>
<hr />
<h2>05 - <a href="https://frontendmasters.com/learn/react/">Frontend Masters</a>: Architecture at Scale</h2>
<p>Frontend Masters answers a different question than the other resources on this list. Not <strong>how do I build this feature?</strong> but <strong>how do I keep a large system from becoming unmaintainable in two years?</strong></p>
<p>The 2026 curriculum from instructors like <strong>Scott Moss</strong> and <strong>Lydia Hallie</strong> covers problems that only surface at scale: splitting large Next.js applications into <strong>micro-frontends</strong> without tanking performance, integrating <strong>LLM agents</strong> into production React components via modern SDKs, tracking down <strong>millisecond-level bottlenecks</strong> with the 2026 iteration of Chrome DevTools. These aren't side project problems. They come up when you're responsible for a system with real traffic, a real team, and consequences when you get the architecture wrong.</p>
<p>At the senior and lead level, your biggest leverage isn't the code you write - it's the decisions that prevent bad code from being written six months from now. Frontend Masters is the only resource here built specifically to develop that judgment.</p>
<p><strong>Who this is for:</strong> Senior developers and technical leads working on existing production systems. If your day-to-day involves inheriting and maintaining a large codebase rather than building greenfield projects, this fits your reality better than anything else on the list. For a concrete starting point on production performance — a real Next.js App Router codebase, a 3.2s LCP, and the specific fixes that moved it — <a href="https://shubhamjha.com/blog/core-web-vitals-nextjs-optimization">this Core Web Vitals case study</a> pairs well with the Frontend Masters content.</p>
<hr />
<h2>06 - AI-Native Learning: The Methodology That Replaced Courses</h2>
<p>The most important resource of 2026 isn't a platform - it's a practice. With tools like <strong>Cursor</strong>, <strong>v0.dev</strong>, and <strong>Claude Code</strong> standard in most workflows, the fastest path to mastery isn't passive consumption anymore. It's deliberate synthesis: use AI to generate complexity, then deconstruct what it produced and rebuild it by hand.</p>
<p>This isn't letting the AI write your code for you - it's using the AI as a pair programmer who is always available and never impatient, and treating its output as raw material for understanding rather than a final answer. The developers advancing fastest right now already work this way.</p>
<h3>The Three-Step Synthesis Loop</h3>
<p><strong>1. Generate and audit.</strong> Prompt an AI to build a complex feature - a streaming search component with PPR, a Server Action with optimistic UI, a Passkey authentication flow. Study the output critically before running it.</p>
<p><strong>2. Force the explanation.</strong> Ask why each pattern was chosen. <em>Why a Server Action here instead of a Route Handler? Why is this a Client Component when it seems like it could be a Server Component?</em> Vague answers mean the generation was shallow. Push until the reasoning is specific.</p>
<p><strong>3. Rebuild from scratch.</strong> Close the output and rewrite it manually. If you can't, you've found exactly what you don't yet understand - which is more useful than getting the feature shipped.</p>
<p>No course updates fast enough for this. React, Next.js, and TypeScript ship on a weekly cadence now - by the time someone re-records a video, it's already stale. AI-native learning keeps you current by default, because you're always working against the live state of the ecosystem, not a snapshot of it.</p>
<p><strong>Who this is for:</strong> Every level. The features you generate and audit should match where you are in the sequence above.</p>
<hr />
<h2>The Verdict: A Sequence, Not a Menu</h2>
<table>
<thead>
<tr>
<th>Resource</th>
<th>Best for</th>
<th>The bottleneck it fixes</th>
</tr>
</thead>
<tbody><tr>
<td>freeCodeCamp v9</td>
<td>New → Junior</td>
<td>Shaky fundamentals and no verifiable proof of skill</td>
</tr>
<tr>
<td>Next.js Learn</td>
<td>All levels</td>
<td>Outdated architectural intuitions</td>
</tr>
<tr>
<td>Joy of React</td>
<td>Junior → Mid</td>
<td>Guessing rather than reasoning</td>
</tr>
<tr>
<td>Epic Web</td>
<td>Mid → Senior</td>
<td>Front-end skills without production discipline</td>
</tr>
<tr>
<td>Frontend Masters</td>
<td>Senior → Lead</td>
<td>Feature-building without systems thinking</td>
</tr>
<tr>
<td>AI-Native Learning</td>
<td>All levels</td>
<td>Keeping pace with a weekly release cycle</td>
</tr>
</tbody></table>
<p>One question worth sitting with: <em>at what point in this list does the material stop feeling challenging and start feeling familiar?</em> That's your actual level. It's probably one step lower than you assumed.</p>
<p>If you're new to the ecosystem and want to start from the actual foundations — JavaScript, HTML, CSS through to your first React component — <a href="https://shubhamjha.com/blog/learn-javascript-html-css-react-beginners">learning web development in 2026</a> maps out the right sequence before diving into any of the resources above.</p>
<p>If you're building a React + Next.js product and want the engineering and architecture to match what these resources teach, you can explore my <a href="https://shubhamjha.com/projects">projects</a> or <a href="https://shubhamjha.com/contact">contact me</a> to discuss your roadmap.</p>
]]></content:encoded></item><item><title><![CDATA[Learn JavaScript, HTML, CSS and React in 2026]]></title><description><![CDATA[Why most beginners never finish learning web development
They start with the wrong question. What framework should I learn? The real question is: do I actually understand what's happening on the scree]]></description><link>https://blog.shubhamjha.com/learn-javascript-html-css-and-react-in-2026</link><guid isPermaLink="true">https://blog.shubhamjha.com/learn-javascript-html-css-and-react-in-2026</guid><category><![CDATA[HTML5]]></category><category><![CDATA[CSS]]></category><category><![CDATA[JavaScript]]></category><category><![CDATA[React]]></category><category><![CDATA[webdev]]></category><dc:creator><![CDATA[Shubham Jha]]></dc:creator><pubDate>Mon, 06 Apr 2026 02:19:36 GMT</pubDate><enclosure url="https://cdn.hashnode.com/uploads/covers/69d1293c6792e486f6810f5c/89434107-0cb9-464b-8d30-d455f6b4a7e4.jpg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2>Why most beginners never finish learning web development</h2>
<p>They start with the wrong question. <em>What framework should I learn?</em> The real question is: <em>do I actually understand what's happening on the screen?</em> Most tutorials drop you into React on day one. You copy code, it runs, and you move on without knowing why it worked.</p>
<p>I've seen this pattern repeatedly — developers who can build features but freeze when something breaks at the DOM level, because they skipped the part that explains what the DOM actually is. The sequence below is the order I wish I'd learned in.</p>
<p>There's a fix for that. It starts with a product card rendered by a plain JavaScript function and ends with that same card as a React component. Six concepts, in one specific order, each making the next click into place. By the time you write the React version, you won't just know <em>how</em> to write it. You'll know why every single line is there.</p>
<h2>1. JS Basics</h2>
<p>JavaScript is what makes web pages do things: maps that move, content that updates without a reload, buttons that respond. Before you touch the DOM or write a component, you need to think like a programmer, which is different from knowing syntax.</p>
<p>The focus here isn't memorisation. It's learning to ask <em>why</em> code behaves the way it does.</p>
<p>A good early example — why does this function return nothing?</p>
<pre><code class="language-javascript">// Why does this print "undefined"?
function getProduct() {
  const name = "Laptop"
}
console.log(getProduct()) // undefined — no return statement

// Fixed:
function getProduct() {
  const name = "Laptop"
  return name
}
console.log(getProduct()) // "Laptop"
</code></pre>
<p>Seems obvious once you see it. Most beginners spend weeks hitting this wall in different forms because they <em>never stopped to ask the question</em>. Getting into the habit of asking it is most of what this section is about.</p>
<p>The section works through variables, types, functions, objects, arrays, and conditionals, then goes deeper into scope, the call stack, and debugging. Those last three aren't advanced topics. They're what separates developers who can read error messages from developers who can't.</p>
<h2>2. HTML Basics</h2>
<p>HTML is the structure of a web page, the part browsers parse before anything else runs. Get it wrong and nothing else works right, no matter how good your JavaScript is.</p>
<p>The key concept is the Document Object Model: browsers read HTML into a tree of nodes, and that tree is what JavaScript actually operates on. Knowing what the <code>document</code> object is, how <code>querySelector</code> works, and how to attach event listeners turns DOM errors from mysterious into something you can actually fix.</p>
<pre><code class="language-javascript">// Select the product list container from the HTML
const list = document.querySelector("#product-list")

// Listen for a click on any card inside it
list.addEventListener("click", (event) =&gt; {
  if (event.target.tagName === "BUTTON") {
    console.log("Add to cart clicked")
  }
})
</code></pre>
<h2>3. CSS Basics</h2>
<p>An app that looks broken feels broken, even if it works perfectly. CSS is what turns a functional prototype into something people trust enough to use.</p>
<p>The two layout systems worth learning first are flexbox and grid. Not because they're trendy, but because they replaced a decade of float hacks and table abuse, and they're what every modern UI is actually built with.</p>
<p>Here's flexbox laying out the product grid:</p>
<pre><code class="language-css">/* Product grid layout */
.product-list {
  display: flex;
  flex-wrap: wrap;
  justify-content: center;
  gap: 1.5rem;
  padding: 2rem;
}

.product-card {
  display: flex;
  flex-direction: column;
  align-items: flex-start;
  width: 280px;
  border: 1px solid #e2e8f0;
  border-radius: 8px;
  padding: 1.25rem;
}
</code></pre>
<p>Two properties and your grid wraps and centers itself at any screen size. Before flexbox, that required a painful combination of floats, clearfixes, and prayers. Now that the card looks right, the next question is how to make it respond to a click, which is where JavaScript and HTML have to work together.</p>
<h2>4. HTML + JS</h2>
<p>React, Vue, Angular — they're all solving the same problem: wiring JavaScript to HTML without making a mess of it. But before you use a tool that does this for you, it's worth seeing what it actually does.</p>
<p>Most developers who say they're comfortable with React have never written a <code>createElement</code> call, which is exactly why their debugging stops at the component boundary. Here's the manual version, no framework, no build step:</p>
<pre><code class="language-javascript">// No framework, no build step — just JS and the DOM
function renderProduct(product) {
  const card = document.createElement("div")
  card.className = "product-card"
  card.innerHTML = `
    &lt;h2&gt;${product.name}&lt;/h2&gt;
    &lt;p&gt;$${product.price}&lt;/p&gt;
    &lt;button&gt;Add to cart&lt;/button&gt;
  `
  document.getElementById("product-list").appendChild(card)
}

renderProduct({ name: "Wireless Headphones", price: 49.99 })
</code></pre>
<p>Every React component you write later is a cleaner version of this. Writing it manually first is what makes the abstraction feel earned rather than arbitrary.</p>
<p>This section builds a minimal version of what React does: creating nodes, updating them dynamically, and splitting logic into modules using both ES and CommonJS syntax. Writing it by hand once is worth more than reading about it ten times.</p>
<h2>5. Web Server + Vite</h2>
<p>Most beginners skip straight to deploying without knowing what deployment actually means, which is why they spend hours debugging production issues that a five-minute mental model would have prevented. When you type a URL into a browser, your computer sends a request to a remote server, which sends back files. That's it.</p>
<p>Node.js lets you run JavaScript on that server. Vite takes your JavaScript, potentially dozens of files, and bundles it into something a browser can load efficiently. Unbundled JavaScript has a ceiling most beginners don't notice until they hit it hard.</p>
<p>Scaffolding a new project with Vite takes one command:</p>
<pre><code class="language-bash">npm create vite@latest my-app -- --template react
cd my-app &amp;&amp; npm install &amp;&amp; npm run dev
</code></pre>
<p>Two seconds later you have a local dev server with hot module replacement, meaning the browser updates the exact component you edited without a full page reload. That's what makes front-end development fast to iterate on. Understanding that Vite is assembling and serving those files is what makes debugging it possible.</p>
<h2>6. React JS</h2>
<p>React is a JavaScript library for building UIs. What makes it useful is that it handles DOM updates for you: describe what the UI should look like given some state, and React figures out the minimum changes needed to get there.</p>
<p>The same product card from Section 4, now as a React component:</p>
<pre><code class="language-jsx">// ProductCard.jsx
import { useState } from "react"

function ProductCard({ name, price }) {
  const [added, setAdded] = useState(false)

  return (
    &lt;div className="product-card"&gt;
      &lt;h2&gt;{name}&lt;/h2&gt;
      &lt;p&gt;${price}&lt;/p&gt;
      &lt;button onClick={() =&gt; setAdded(true)}&gt;{added ? "✓ Added" : "Add to cart"}&lt;/button&gt;
    &lt;/div&gt;
  )
}

export default ProductCard
</code></pre>
<p>Same output, plus interactivity the vanilla version didn't have. The button updates without touching the DOM directly because React tracks the state change and handles the re-render.</p>
<p>React components, JSX, hooks, state, reactivity, the component lifecycle — each one makes more sense because you've already seen what it replaces.</p>
<p>Web development has a lot of layers. Most guides hide that by dropping you into the top one. The order here is deliberate: each section exists because the next one needs it. Once the React fundamentals click, the next question is performance — <a href="https://shubhamjha.com/blog/core-web-vitals-nextjs-optimization">Next.js Core Web Vitals 2026</a> shows what slows real production apps down, and it's usually not what beginners expect.</p>
<p>Once this foundation is solid, the next step is learning which React and Next.js patterns matter at a production level — <a href="https://shubhamjha.com/blog/how-to-master-react-nextjs">mastering React and Next.js in 2026</a> maps the resources worth investing in at each stage.</p>
<p>Want to see how these concepts come together in a real product? Browse my <a href="https://shubhamjha.com/projects">projects</a> or <a href="https://shubhamjha.com/contact">reach out</a> — happy to talk through what you're building.</p>
]]></content:encoded></item><item><title><![CDATA[Next.js Core Web Vitals 2026: Why LCP Isn't Just Your Images]]></title><description><![CDATA[The logistics portal I inherited had a 3.2 second LCP. That number had been sitting in a Notion doc labelled known issues for two quarters. Everyone knew it was bad. Nobody could tell you exactly why.]]></description><link>https://blog.shubhamjha.com/next-js-core-web-vitals-2026-why-lcp-isn-t-just-your-images</link><guid isPermaLink="true">https://blog.shubhamjha.com/next-js-core-web-vitals-2026-why-lcp-isn-t-just-your-images</guid><category><![CDATA[Next.js]]></category><category><![CDATA[performance]]></category><category><![CDATA[webdev]]></category><category><![CDATA[React]]></category><dc:creator><![CDATA[Shubham Jha]]></dc:creator><pubDate>Sun, 05 Apr 2026 04:47:05 GMT</pubDate><enclosure url="https://cdn.hashnode.com/uploads/covers/69d1293c6792e486f6810f5c/eab2eaa0-2736-4d22-8ee4-c1fa18a18d67.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<hr />
<p>The <a href="https://shubhamjha.com/projects/shipglobal/index.html">logistics portal</a> I inherited had a 3.2 second LCP. That number had been sitting in a Notion doc labelled <em>known issues</em> for two quarters. Everyone knew it was bad. Nobody could tell you exactly why. If you've already added <code>next/image</code>, turned on compression, and you're still stuck in the high 70s on Lighthouse, the problem is almost certainly not your images.</p>
<p>When we finally fixed it — really fixed it, not just ran Lighthouse and called it a day — page load dropped 40%. Repeat orders went up 74%. Average order value climbed 12%. I'm not claiming performance caused all of that. But 23% of vendors were dropping off in their first session. When you stop losing a quarter of your users before they've done anything, everything else you're working on gets a fairer shot.</p>
<p>Most Core Web Vitals content is written for marketing sites. It tells you to compress images and defer scripts. That's fine for a WordPress blog. For a data-heavy Next.js App Router application, the standard checklist doesn't get you past 75. Google's published thresholds (LCP under 2.5s, CLS under 0.1) are the floor, not the target. If your users are power users who touch your product dozens of times a day, they notice every stutter.</p>
<hr />
<h2>LCP in a Next.js app is probably not your images</h2>
<p>Open Chrome DevTools, run a performance trace, and look at what the LCP element actually is before you touch a single image. If it's text or a data-driven component, image optimisation is the wrong fix.</p>
<p>The LCP element was a dashboard stats card: a <code>&lt;div&gt;</code> with numbers pulled from three separate API calls. The browser was waiting for all three before it could paint anything meaningful in the viewport. The hero image was fine. The data was the bottleneck.</p>
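<p>You don't have to eyeball the trace to find the element. A snippet along these lines, pasted into the DevTools console, logs what the browser actually chose (the observer wiring only runs in a browser):</p>
<pre><code class="language-javascript">// The browser can report several LCP candidates as bigger elements paint.
// The last entry observed is the final LCP element.
function finalLcpEntry(entries) {
  return entries[entries.length - 1]
}

// Browser-only wiring: log which element the browser picked and when.
if (typeof window !== "undefined") {
  new PerformanceObserver(function (list) {
    const entry = finalLcpEntry(list.getEntries())
    console.log("LCP element:", entry.element, "at", Math.round(entry.startTime), "ms")
  }).observe({ type: "largest-contentful-paint", buffered: true })
}
</code></pre>
<p>If the logged element is a text node or a data card rather than an image, you have your answer before opening a single asset.</p>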
<p>Some of the things that killed our LCP had nothing to do with assets:</p>
<ul>
<li><p>A waterfall of three sequential API calls on first render, each waiting for the previous</p>
</li>
<li><p>A <code>useEffect</code> that fetched critical above-the-fold data client-side instead of server-side</p>
</li>
<li><p>A large <code>"use client"</code> boundary at the page level that forced the entire page to hydrate before any data could render</p>
</li>
</ul>
<p>The fix wasn't clever, but it wasn't just <em>move to</em> <code>Promise.all</code> either. First, <code>fetchRevenue</code> had to be decoupled from the orders response: the original call passed <code>orders.period</code> as a dependency, making true parallelisation impossible until that contract changed. Once decoupled, we moved all three fetches to the server with <code>Promise.all</code> and streamed secondary content below the fold with <code>Suspense</code>. The dashboard stats rendered in the first paint instead of the third.</p>
<pre><code class="language-tsx">// Before: client-side waterfall
const [stats, setStats] = useState(null)
useEffect(() =&gt; {
  fetchOrderCount().then(async (orders) =&gt; {
    const revenue = await fetchRevenue(orders.period)
    const shipments = await fetchShipments()
    setStats({ orders, revenue, shipments })
  })
}, [])

// After: server-side parallel fetch
// Note: refactored fetchRevenue to accept date range from URL params instead of chaining off orders
async function DashboardStats() {
  const [orders, revenue, shipments] = await Promise.all([fetchOrderCount(), fetchRevenue(), fetchShipments()])
  return &lt;StatsCard orders={orders} revenue={revenue} shipments={shipments} /&gt;
}
</code></pre>
<p>The <code>useEffect</code> version waited sequentially for three round trips. The server version waited for the slowest of three parallel requests, and the result arrived in the initial HTML payload, not after hydration.</p>
<h3>The <code>"use client"</code> boundary that's silently inflating your bundle</h3>
<p>The most common hidden LCP regression in Next.js App Router apps: a page-level <code>"use client"</code> that got added for one interactive element and never revisited. Teams added it for a dropdown, a toast, a modal. The entire component tree beneath that boundary ships as client JavaScript and hydrates before rendering.</p>
<p>Push the boundary down to the smallest component that actually needs interactivity. A search input is a client component. The page layout, the data table, the navigation: those don't need to be. If Server Components and the App Router are still coming together for you, <a href="https://shubhamjha.com/blog/how-to-master-react-nextjs">mastering React and Next.js in 2026</a> has the right sequence to build that intuition before performance work makes sense.</p>
<pre><code class="language-tsx">// Before: entire page is a client component
"use client"
export default function OrdersPage() {
  // 800 lines of component, all shipped to client
}

// After: only the search is a client component
export default async function OrdersPage() {
  const orders = await fetchOrders()
  return (
    &lt;main&gt;
      &lt;OrderSearch /&gt; {/* "use client" */}
      &lt;OrderTable orders={orders} /&gt; {/* server component, no JS shipped */}
    &lt;/main&gt;
  )
}
</code></pre>
<p>Moving to proper island architecture cut the client JavaScript bundle by roughly 35%. That reduced Time to Interactive directly, which improved INP scores as well. If you're still working out which components belong on the server vs. the client, <a href="https://shubhamjha.com/blog/building-production-ready-react-apps">building production-ready React apps</a> covers the hook and component architecture patterns that make these boundaries easier to enforce consistently.</p>
<p>The API waterfall and boundary fixes got us most of the way. The last LCP gains came from images — and not the fixes you'd expect.</p>
<h3>Images: the <code>priority</code> attribute is probably doing more harm than good</h3>
<p>If you're already using <code>next/image</code>, the remaining gains are in details most teams skip.</p>
<p>The biggest remaining issue was <code>priority</code> abuse. Some teams (including ours) mark multiple images as <code>priority</code> to ensure they preload. The problem: <code>priority</code> adds a <code>&lt;link rel="preload"&gt;</code> tag for each image. With three or four of them, the browser splits its bandwidth across preloads it doesn't need immediately, delaying the one image that actually matters.</p>
<p>One <code>priority={true}</code>, on the LCP candidate. Everything else loads lazily.</p>
<p>The second issue was missing <code>sizes</code> on responsive images. Without <code>sizes</code>, Next.js generates a srcset but the browser defaults to <code>100vw</code> as the assumed display width. On a 375px Retina screen at 2x DPR, that targets a 750px image, which for a narrow content column can be 2–3× more than necessary.</p>
<pre><code class="language-tsx">&lt;Image
  src="/hero.webp"
  alt="Dashboard preview"
  width={1200}
  height={630}
  priority
  sizes="(max-width: 768px) 100vw, (max-width: 1200px) 50vw, 800px"
/&gt;
</code></pre>
<p>The <code>sizes</code> attribute tells the browser exactly which image to download at each viewport width. On mobile, this alone can save hundreds of kilobytes per page load.</p>
<hr />
<h2>CLS at 0.18: why vendors couldn't explain why the product felt broken</h2>
<p>CLS is deceptively hard to debug because it often doesn't appear in Lighthouse. Lighthouse measures CLS on a simulated load with a clean cache. Real CLS happens when:</p>
<ul>
<li><p>A user has slow network and fonts load late</p>
</li>
<li><p>A banner or cookie consent appears after the initial paint</p>
</li>
<li><p>A data-driven component changes height after content loads</p>
</li>
</ul>
<p>Our CLS was 0.18. In practice, vendors saw the order table jump down when the pagination bar loaded. A small thing. It happened on every page visit. Multiply that by 30 visits a day per vendor across hundreds of vendors and it's a constant source of friction. Nobody files a bug that says "the page jumped." They just quietly stop using the product.</p>
<h3>Fonts cause layout shift even when they load "correctly"</h3>
<p>Fallback fonts have different metrics than your custom font. When Inter loads, text that was rendering in Arial reflows to match Inter's line height, letter spacing, and word spacing. Paragraphs shift. Buttons resize. That's your CLS.</p>
<p><code>next/font</code> handles the loading. The real fix is font metric override: CSS descriptors that make your fallback font match your custom font's dimensions closely enough that the reflow is imperceptible.</p>
<pre><code class="language-ts">import { Inter } from "next/font/google"

const inter = Inter({
  subsets: ["latin"],
  display: "swap",
  fallback: ["system-ui", "Arial"],
  adjustFontFallback: true, // Next.js calculates override metrics automatically
})
</code></pre>
<p><code>adjustFontFallback: true</code> generates <code>size-adjust</code>, <code>ascent-override</code>, <code>descent-override</code>, and <code>line-gap-override</code> for the fallback. The visual difference between fallback and loaded font becomes small enough that layout doesn't shift meaningfully.</p>
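<p>For intuition, the generated override looks roughly like this; the numbers below are illustrative placeholders, not Inter's real metrics:</p>
<pre><code class="language-css">/* Approximately what adjustFontFallback produces: a fallback face whose
   glyph box is stretched to match the custom font's dimensions.
   Values here are placeholders, not Inter's actual metrics. */
@font-face {
  font-family: "Inter Fallback";
  src: local("Arial");
  size-adjust: 107%;
  ascent-override: 90%;
  descent-override: 22%;
  line-gap-override: 0%;
}
</code></pre>
<p>Text renders in "Inter Fallback" until Inter arrives, and because both faces occupy nearly the same space, the swap doesn't move the layout.</p>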
<h3>Dynamic content: reserving space before you know the size</h3>
<p>The hardest CLS to fix is from content whose size you don't know yet: banners, notification bars, data-driven cards, ad slots. The naive solution is to avoid adding things dynamically. The real solution is to reserve space.</p>
<p>For fixed-height elements like banners, use a min-height wrapper even when the content is empty:</p>
<pre><code class="language-tsx">&lt;div style={{ minHeight: "48px" }}&gt;{banner &amp;&amp; &lt;Banner message={banner.message} /&gt;}&lt;/div&gt;
</code></pre>
<p>For data-driven content where you don't know the final height, skeleton loaders with accurate proportions are better than no loaders. A skeleton that's 80px tall and content that's 120px tall still causes a shift.</p>
<p>The largest CLS contributor was the order stats row: four cards that loaded with real data after the page rendered. Each card had a different final height depending on the number inside. We fixed it by setting a fixed card height and truncating overflowing numbers, then exposing a tooltip for the full value. CLS went from 0.18 to 0.04. The page stopped moving.</p>
<hr />
<h2>Why performance degrades after you fix it — and how to break the cycle</h2>
<p>Performance regresses because fixes are one-off events while code changes are constant, and the two get measured in different places: Lighthouse scores that look good locally break quietly in production. You need to measure in both, for different reasons.</p>
<p>Locally, Lighthouse tells you what's theoretically possible. Production RUM (real user monitoring) tells you what's actually happening.</p>
<p>For production monitoring, the Web Vitals JS library piped into your analytics is the minimum viable setup:</p>
<pre><code class="language-ts">import { onLCP, onINP, onCLS } from "web-vitals"
import type { Metric } from "web-vitals"

function sendToAnalytics(metric: Metric) {
  const payload = JSON.stringify({
    name: metric.name,
    value: metric.value,
    rating: metric.rating, // "good", "needs-improvement", "poor"
    page: window.location.pathname,
  })
  // Quick start: navigator.sendBeacon("/api/vitals", payload) — sends as text/plain
  navigator.sendBeacon("/api/vitals", new Blob([payload], { type: "application/json" }))
}

onLCP(sendToAnalytics)
onINP(sendToAnalytics)
onCLS(sendToAnalytics)
</code></pre>
<p>One caveat: fire this on a sample of sessions (10–20%) rather than every user, or you'll flood your analytics endpoint on high-traffic pages. Add a <code>Math.random() &lt; 0.1</code> guard around the beacon call in production.</p>
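<p>One way to wire that guard is to make the sampling decision once per page load, so a session reports all of its metrics or none. A sketch with the random source injectable purely for testing; the names are illustrative:</p>
<pre><code class="language-javascript">// Decide once per page load whether this session reports at all. A sampled
// session then sends every metric; an unsampled one sends none, which keeps
// per-session data complete instead of randomly gappy.
function makeReporter(sampleRate, send, random) {
  const rng = random || Math.random
  const inSample = sampleRate > rng()
  return function report(metric) {
    if (!inSample) return false
    send(JSON.stringify({ name: metric.name, value: metric.value, page: metric.page }))
    return true
  }
}

// Usage with the beacon setup above:
//   const report = makeReporter(0.1, function (body) {
//     navigator.sendBeacon("/api/vitals", body)
//   })
</code></pre>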
<p>This gives you per-page breakdown. You want to know that your homepage LCP is 1.8s but your order history page is 3.4s, because those are different problems with different fixes.</p>
<p>The second piece is a performance budget in CI. Not a hard block (that creates friction), but a warning that goes to Slack or fails a check:</p>
<pre><code class="language-js">// lighthouserc.js
module.exports = {
  ci: {
    assert: {
      assertions: {
        "largest-contentful-paint": ["warn", { maxNumericValue: 2500 }],
        "cumulative-layout-shift": ["error", { maxNumericValue: 0.1 }],
        "total-blocking-time": ["warn", { maxNumericValue: 300 }],
      },
    },
  },
}
</code></pre>
<p>Make CLS a hard error. Make LCP a warning. Layout stability is non-negotiable. Load time is something to improve over time.</p>
<p>The budget and the monitoring tell you where you are. The process is what actually moves the number.</p>
<h2>A workflow that compounds instead of one that spikes</h2>
<p>Single performance fixes don't compound. A workflow does. It doesn't need to be elaborate.</p>
<p>The loop we settled on:</p>
<ul>
<li><p>One baseline measurement per sprint on three key pages (home, a data-heavy listing page, a form flow)</p>
</li>
<li><p>One performance task per sprint, focused on the current worst-performing metric on the worst-performing page</p>
</li>
<li><p>One regression check in PR review: if a PR adds a new <code>"use client"</code> boundary at a high level, it gets flagged</p>
</li>
</ul>
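<p>The regression check is the only step worth automating. A minimal sketch; the file globs and base branch are assumptions about your repo layout:</p>
<pre><code class="language-bash"># Warn when a PR adds a top-level "use client" in a page or layout file.
# Sketch only: adjust globs and base branch to your repository.
added=$(git diff origin/main...HEAD -- 'app/**/page.tsx' 'app/**/layout.tsx' 2>/dev/null \
  | grep -E '^\+.*"use client"' || true)

if [ -n "$added" ]; then
  echo 'New page-level "use client" boundary in this PR:'
  echo "$added"
  exit 1
fi
</code></pre>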
<p>That's it. No performance sprints. No big-bang optimization projects. Small, consistent, measured.</p>
<p>In six months of running this loop, we went from a team that did occasional performance "fixes" to a team where performance kept improving passively because the worst regressions never made it to production. For the broader Next.js and React patterns that make this kind of iteration sustainable at scale, <a href="https://shubhamjha.com/blog/building-scalable-web-apps">building scalable web apps in 2026</a> is a useful companion read.</p>
<hr />
<h2>What the numbers actually mean — and the metric that didn't show up in Lighthouse</h2>
<p>The 40% load time improvement, 74% repeat order increase, 12% AOV growth: I want to be honest about attribution. We also redesigned the portal, improved mobile layouts, and fixed navigation architecture in the same period. Performance wasn't the only variable.</p>
<p>The metric I'm most proud of didn't show up in Lighthouse at all: support tickets from vendors dropped to near zero. That wasn't purely a performance win. The React migration cleaned up brittle UI, and rethinking the information architecture meant vendors could find what they needed without calling support. But a fast, stable UI that doesn't jump around removes an entire category of frustration before it becomes a ticket. CLS at 0.18 means vendors watch content shift on every page load. That's not a bug they can articulate. It just makes the product feel broken in a way they can't explain.</p>
<p>The 23% first-session drop-off is the number I'll stake a claim on. When your LCP is 3.2 seconds, a quarter of your users have decided to close the tab before they've seen a single piece of your UI. When it drops to 1.9 seconds, those people stay. What they do once they stay is a product problem, not a performance problem.</p>
<p>Find your equivalent of the 23% number. It's in your analytics: session duration by page load time bucket, conversion rate by connection speed, bounce rate on your heaviest pages. The data is there. Use it to make the argument, because performance is a product problem disguised as a technical one, and nobody funds a Lighthouse score.</p>
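<p>The bucketing itself is trivial. A sketch of the session-by-LCP version, assuming records shaped like <code>{ lcpMs, bounced }</code> from your analytics export:</p>
<pre><code class="language-javascript">// Bucket sessions by LCP and compare bounce rates per bucket. The record
// shape { lcpMs, bounced } is an assumption; adapt to your own export.
function bounceRateByLcpBucket(sessions, bucketMs) {
  const size = bucketMs || 1000
  const buckets = {}
  for (const s of sessions) {
    const key = Math.floor(s.lcpMs / size) * size
    const b = buckets[key] || { total: 0, bounced: 0 }
    b.total += 1
    if (s.bounced) b.bounced += 1
    buckets[key] = b
  }
  const rates = {}
  for (const key of Object.keys(buckets)) {
    rates[key] = buckets[key].bounced / buckets[key].total
  }
  return rates
}
</code></pre>
<p>If the 3s-plus bucket bounces at double the rate of the sub-2s bucket, that gap is your argument, in your own data.</p>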
<p>That Notion doc still exists. It's mostly empty now.</p>
<p>If your team has a performance number that's been sitting in a backlog for too long, I work with engineering teams on Next.js performance, architecture, and the delivery practices that make improvements stick. Browse my <a href="https://shubhamjha.com/projects">projects</a> or <a href="https://shubhamjha.com/contact">reach out</a> to talk through your situation.</p>
]]></content:encoded></item></channel></rss>