Technical SEO for SaaS: The Checklist Your Dev Team Needs
Your devs don't care about SEO. Give them this checklist and they'll accidentally make your site rankable.

Your dev team built a great product. Fast. Responsive. Users love it.
But Google? Google can't even see half of it.
That's the problem with most SaaS sites. Devs optimize for users, not crawlers. And when you're running a JavaScript-heavy Single Page Application (SPA), those are two different problems.
This isn't a blog post about why technical SEO matters. You already know it does. This is the checklist you hand to your dev team so they can actually fix it.
No fluff. No "best practices." Just the stuff that breaks your rankings.
The Big Picture: Why SaaS SEO Is Different
Most SEO guides assume you're running WordPress. You're not.
Your marketing site might be Next.js. Your app is a React SPA. You've got authenticated routes, client-side routing, soft 404s everywhere, and your sitemap includes URLs that return blank pages to Googlebot.
Standard SEO advice doesn't work here.
The good news? Once you fix the technical foundation, SaaS sites rank faster than content sites. Clean URL structure, fast load times, and structured data give you an edge.
The bad news? If you skip even one of these steps, you're invisible.
JavaScript Rendering: The Thing Breaking Your Indexing
Google can run JavaScript. But it doesn't want to.
Here's how it works:
- Googlebot crawls your HTML
- If it sees JavaScript, it queues your page for rendering
- Later (sometimes days later), it runs a headless Chrome browser to execute your JS
- Only then does it see your actual content
If your critical content lives client-side, you're gambling on whether Google will bother rendering it. And if you're getting a non-200 status code before rendering (like a redirect or 404), Google might skip rendering entirely.
The fix: Server-Side Rendering (SSR) or Static Site Generation (SSG)
For any page you want ranked, render the HTML server-side. Not "hybrid." Not "dynamic rendering." Full SSR or SSG.
Next.js example:

// Good - Server Component (the default in Next.js 13+ App Router)
export default async function Page() {
  const data = await fetchData()
  return <div>{data.title}</div>
}

// Bad - Client-side only
'use client'
import { useState, useEffect } from 'react'

export default function Page() {
  const [data, setData] = useState(null)
  useEffect(() => { fetchData().then(setData) }, [])
  return <div>{data?.title}</div>
}

If you must use client-side rendering, make sure your initial HTML includes:
- Links in real <a href> tags (not onClick divs)
- Meta tags in the <head> (not injected by JS)
- At least some visible content crawlers can index
Don't use URL fragments for navigation. Google ignores #/page/2. Use real URLs with the History API (/page/2).
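If you're hand-rolling client-side routing, here's a minimal sketch of fragment-free navigation. The data-internal attribute and the renderRoute function are placeholders for your own markup and rendering logic:

// Intercept internal links and navigate with real URLs via the History API
document.querySelectorAll('a[data-internal]').forEach((link) => {
  link.addEventListener('click', (event) => {
    event.preventDefault()
    const url = link.getAttribute('href')   // e.g. /page/2, never #/page/2
    history.pushState({}, '', url)          // update the address bar without a reload
    renderRoute(url)                        // placeholder: render the matching view
  })
})

// Handle the back/forward buttons
window.addEventListener('popstate', () => renderRoute(location.pathname))

Because the href is a real URL, crawlers can follow the link even though they never run the click handler.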
Soft 404s Will Kill Your Crawl Budget
Your SPA returns 200 OK for every URL, even /this-page-doesnt-exist.
Google sees that as a valid page. It crawls it. Indexes it. Wastes your crawl budget on garbage.
Fix:
- Return proper 404 status codes from your server for non-existent routes
- Or inject a <meta name="robots" content="noindex"> tag on client-rendered error pages
Just never put a noindex tag in the initial HTML of a page you do want indexed and rely on JavaScript to remove it - Google may honor the tag before your JS ever runs.
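In Next.js (App Router), the server-side fix is a one-liner. getDoc here is a hypothetical data lookup - swap in whatever you use:

// app/docs/[slug]/page.js
import { notFound } from 'next/navigation'

export default async function DocPage({ params }) {
  const doc = await getDoc(params.slug)   // hypothetical lookup
  if (!doc) {
    notFound()   // renders your 404 page and sends a real 404 status code
  }
  return <article><h1>{doc.title}</h1></article>
}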
Core Web Vitals: What Actually Matters
Google cares about three metrics:
- Largest Contentful Paint (LCP) - how fast your main content loads
- Cumulative Layout Shift (CLS) - how much your page jumps around
- Interaction to Next Paint (INP) - how responsive your page is to clicks
INP replaced First Input Delay (FID) in March 2024. If you're still optimizing for FID, stop.
INP Is the Hardest One
INP measures how long it takes your page to respond to every user interaction. Not just the first click - every click.
If your SPA has long-running JavaScript tasks, your INP score is trash.
How to fix it:
- Break up long tasks - anything over 50ms blocks the main thread (see the sketch after this list)
- Defer non-critical JS - use async or defer on script tags
- Optimize event handlers - don't run heavy logic on every click
- Use code splitting - smaller bundles = faster parsing
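A rough sketch of that first fix - chunking a long task and yielding back to the main thread between chunks. handleItem stands in for whatever per-item work you're doing:

async function processItems(items) {
  for (const item of items) {
    handleItem(item)   // hypothetical per-item work, ideally well under 50ms

    // Yield so the browser can respond to pending clicks and keystrokes
    if (globalThis.scheduler?.yield) {
      await scheduler.yield()   // Scheduler API, where supported
    } else {
      await new Promise((resolve) => setTimeout(resolve, 0))   // fallback
    }
  }
}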
Tools:
- Chrome DevTools Performance panel (look for "Long Tasks")
- Lighthouse (check Total Blocking Time as a proxy for INP)
- Google Search Console (real user data under "Core Web Vitals")
LCP: Optimize Your Hero Image
Your LCP element is usually your hero image or headline. Load it fast.
Quick wins:
- Use modern image formats (WebP or AVIF)
- Preload your LCP image: <link rel="preload" as="image" href="/hero.webp">
- Serve images from a CDN
- Lazy-load everything except above-the-fold content
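On Next.js, a sketch of the hero image setup (the path and dimensions are illustrative):

import Image from 'next/image'

export function Hero() {
  return (
    <Image
      src="/hero.webp"   // modern format, served from your CDN
      alt="Product dashboard"
      width={1200}       // explicit dimensions also prevent layout shift (CLS)
      height={630}
      priority           // preloads the image and opts out of lazy-loading
    />
  )
}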
CLS: Stop Shifting Layout
If your page jumps when fonts load, ads appear, or images render, you're losing points.
Fix:
- Set width and height on images so the browser reserves space
- Use font-display: swap carefully (or preload fonts)
- Reserve space for dynamic content (ads, banners, etc.)
Target: LCP under 2.5s, CLS under 0.1, INP under 200ms.
Crawlability: Let Google See Your Site
robots.txt Is Not for Hiding Pages
robots.txt tells crawlers what not to crawl. It does NOT prevent indexing.
If you disallow a URL in robots.txt, Google can still index it if someone links to it. You'll just see "A description is not available" in search results.
What to block with robots.txt:
- Internal search results (/search?q=)
- Filtered pages with no SEO value (/products?sort=price&filter=color)
- Staging and admin areas
What NOT to block:
- CSS and JavaScript files (Google needs them to render your page)
- Pages you want de-indexed (use noindex instead)
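Putting that together, a minimal robots.txt might look like this - the paths are illustrative, so match them to your own URL structure:

User-agent: *
# Internal search results
Disallow: /search
# Low-value filter/sort combinations (Google supports * wildcards)
Disallow: /*?*sort=
# Staging and admin areas
Disallow: /admin/

Sitemap: https://example.com/sitemap.xml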
Use noindex to Control Indexing
If you don't want a page in Google, add this to the <head>:
<meta name="robots" content="noindex">Or use the HTTP header:
X-Robots-Tag: noindexCritical rule: Don't block a noindex page with robots.txt. Google needs to crawl it to see the noindex tag.
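If you'd rather set the header than touch templates, Next.js can do it from config. The /internal-tools path is just an example:

// next.config.js
module.exports = {
  async headers() {
    return [
      {
        source: '/internal-tools/:path*',
        headers: [{ key: 'X-Robots-Tag', value: 'noindex' }],
      },
    ]
  },
}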
Canonicalization: Pick One URL
Duplicate content wastes crawl budget and splits ranking signals.
Use rel="canonical" to tell Google which version of a page to index:
<link rel="canonical" href="https://example.com/page">Common mistakes:
- Canonicalizing paginated pages to page 1 (don't - each page should be self-referencing)
- Using relative URLs instead of absolute
- Sending conflicting signals (canonical to A, redirect to B)
Best practice:
- Every page should have a canonical tag (even if it points to itself)
- Use 301 redirects to consolidate old URLs
- Canonical tags should be in the initial HTML, not injected by JS
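In Next.js (App Router), the canonical URL can live in the page's metadata so it ships in the initial HTML - the URL here is just an example:

// app/pricing/page.js
export const metadata = {
  alternates: {
    canonical: 'https://example.com/pricing',
  },
}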
Sitemaps: Help Google Find Your Pages
Your sitemap is a list of URLs you want indexed. Make it accurate.
Rules:
- Max 50,000 URLs per sitemap file
- Max 50MB uncompressed
- Use a sitemap index if you need multiple files
What to include:
- Only canonical URLs
- Only pages that return 200 OK
- Only pages you want indexed
What to exclude:
- Redirected URLs
- Noindexed pages
- Error pages (404, 500)
- Duplicate content
- Login, cart, search result pages
Use <lastmod> Correctly (or Don't Use It)
Google only trusts <lastmod> if it's accurate. If you update the date every time you regenerate the sitemap (without changing content), Google ignores it.
Rule: Only update <lastmod> when you actually change the page content.
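A sketch of a generated sitemap in Next.js. getPublishedPages and contentUpdatedAt are hypothetical - the point is that lastModified tracks real content changes, not the build timestamp:

// app/sitemap.js
export default async function sitemap() {
  const pages = await getPublishedPages()   // hypothetical: only canonical, indexable pages
  return pages.map((page) => ({
    url: `https://example.com/${page.slug}`,
    lastModified: page.contentUpdatedAt,    // bump only when the content actually changes
  }))
}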
Submit your sitemap to Google Search Console and add it to your robots.txt:
Sitemap: https://example.com/sitemap.xml

Structured Data: Make Your Listings Stand Out
Schema markup won't help you rank, but it makes your search listings richer.
For SaaS, focus on two types:
1. Organization Schema (Homepage)
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "name": "Your SaaS",
  "url": "https://example.com",
  "logo": "https://example.com/logo.png",
  "description": "What you do"
}

2. Product + Offer Schema (Pricing Pages)

{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Pro Plan",
  "description": "Advanced features",
  "offers": {
    "@type": "Offer",
    "price": "49.00",
    "priceCurrency": "USD",
    "availability": "https://schema.org/InStock"
  }
}

This makes you eligible for merchant listings with price and availability shown in search results.
Where to put it:
- In the initial HTML (not injected by JS)
- At the bottom of the <body> in a <script type="application/ld+json"> tag
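One way to keep it in the initial HTML with Next.js is to render it from a server component - a sketch:

// Rendered server-side, so the JSON-LD is in the HTML Googlebot first fetches
export function OrganizationSchema() {
  const schema = {
    '@context': 'https://schema.org',
    '@type': 'Organization',
    name: 'Your SaaS',
    url: 'https://example.com',
    logo: 'https://example.com/logo.png',
  }
  return (
    <script
      type="application/ld+json"
      dangerouslySetInnerHTML={{ __html: JSON.stringify(schema) }}
    />
  )
}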
Test it:
- Google Rich Results Test
- Google Search Console > Enhancements
SaaS-Specific Issues
App vs. Marketing Site
Your app (app.example.com) shouldn't be indexed. It's behind authentication. It has no SEO value.
Your marketing site (www.example.com) should be fully indexable.
How to separate them:
- App: Return 401 Unauthorized for unauthenticated requests
- Marketing: Fully crawlable, no auth walls
Don't let Google waste crawl budget on session-specific app routes.
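If the app shares a codebase with the marketing site, here's a Next.js middleware sketch - the cookie name and the /app path prefix are assumptions, so adapt them to your auth setup:

// middleware.js
import { NextResponse } from 'next/server'

export function middleware(request) {
  const session = request.cookies.get('session')   // assumed session cookie
  if (!session) {
    return new NextResponse('Unauthorized', { status: 401 })
  }
  return NextResponse.next()
}

export const config = {
  matcher: ['/app/:path*'],   // only guard app routes; marketing pages stay crawlable
}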
Faceted Navigation
Filters and sorting create thousands of URLs:
/products?color=red
/products?color=red&size=large
/products?color=red&size=large&sort=price
This is index bloat. Most of these pages have no search demand.
Strategy:
- Block low-value combinations with robots.txt
- Noindex filtered pages with no unique value
- Canonicalize near-duplicates to the main category page
- Optimize high-demand filter combos as dedicated landing pages
Pagination
Each page in a series should have a unique URL and be self-referencing:
<!-- On /category?page=2 -->
<link rel="canonical" href="https://example.com/category?page=2">Don't canonicalize all pages to page 1. That tells Google to ignore pages 2+.
Link pages together with standard <a href> tags:
<a href="/category?page=3">Next</a>Don't use #page=3 or "Load More" buttons without fallback links.
HTTPS and Security
HTTPS is a ranking signal. Google prefers HTTPS pages over HTTP.
Checklist:
- Implement site-wide HTTPS (including images, CSS, JS)
- 301 redirect all HTTP URLs to HTTPS
- Use HSTS header to force HTTPS in browsers
- Fix mixed content warnings (resources loaded over HTTP on HTTPS pages)
- Keep SSL certificates current
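If your framework serves the responses, the HSTS header can be set the same way as the X-Robots-Tag example earlier - a Next.js sketch (the max-age value is illustrative):

// next.config.js
module.exports = {
  async headers() {
    return [
      {
        source: '/:path*',
        headers: [
          {
            key: 'Strict-Transport-Security',
            value: 'max-age=63072000; includeSubDomains; preload',
          },
        ],
      },
    ]
  },
}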
Don't accidentally block Googlebot with your CSP.
If you use a Content Security Policy header, make sure it doesn't block the scripts and stylesheets your pages need to render - a policy that breaks rendering breaks it for Googlebot too. Test with Google's URL Inspection tool.
The Checklist (Copy This to Your Dev Team)
Rendering & Indexing:
- Use SSR/SSG for all marketing and content pages
- Serve real HTML to crawlers (not blank pages that need JS)
- Use <a href> tags for navigation, not onClick handlers
- Return proper 404 status codes (not soft 404s)
- Meta tags in initial HTML, not injected by JS
Performance:
- LCP under 2.5s (optimize hero images, use CDN)
- INP under 200ms (break up long JS tasks)
- CLS under 0.1 (set image dimensions, reserve space for dynamic content)
- Preload critical resources (fonts, hero images)
- Use modern image formats (WebP/AVIF)
Crawl Control:
- Don't block CSS/JS in robots.txt
- Use noindex for pages you don't want indexed
- Canonical tags on every page (in initial HTML)
- 301 redirects for old/duplicate URLs
- Sitemap with only canonical, indexable URLs
Structured Data:
- Organization schema on homepage
- Product + Offer schema on pricing pages
- Schema in initial HTML (not injected by JS)
- Validated with Google Rich Results Test
SaaS-Specific:
- App routes return 401 for unauthenticated users
- Block/noindex low-value filtered pages
- Paginated pages have unique URLs and self-referencing canonicals
- Site-wide HTTPS with 301 redirects from HTTP
Monitoring:
- Google Search Console set up
- Core Web Vitals passing for key pages
- No soft 404s or indexing errors in GSC
- Sitemap submitted and processing
But Technical SEO Won't Rank You Alone
Here's the thing.
You can have perfect technical SEO. Fast site. Clean code. Flawless indexing.
And you still won't rank.
Because technical SEO is table stakes. It gets you in the game. It doesn't win the game.
What wins? Authority.
And for SaaS, authority means backlinks from sites Google trusts. Wikipedia. Reddit. Hacker News. Industry publications.
The problem? You can't just ask for those links. You have to earn them. And earning them takes years of content creation, outreach, and hoping someone notices.
There's a Faster Way
That's where Revised comes in.
We don't build links. We find them.
We acquire dead domains that already have backlinks from authoritative sources. Then we redirect those links to your site. Contextual, relevant, high-authority backlinks that transfer real ranking power.
Your dev team handles the technical SEO. We handle the authority.
How it works:
- We crawl Wikipedia, Reddit, HN, and other trusted sources for broken links
- We acquire the dead domains those links point to
- We redirect the contextual backlinks to your site
- You rank higher. Faster.
Technical SEO gets Google to see your site. Authority gets Google to rank it.
You need both.
See how it works or get started now.
TL;DR:
- Use SSR/SSG for pages you want ranked
- Fix soft 404s and serve proper status codes
- Optimize Core Web Vitals (especially INP)
- Use noindex for pages you don't want indexed (not robots.txt)
- Add canonical tags to every page (in initial HTML)
- Implement Organization and Product schema
- Block low-value filtered/paginated pages
- Make your app routes return 401 for unauthenticated requests (crawlers included)
- Technical SEO is necessary but not sufficient - you need authority too