Technical SEO for SaaS: The Checklist Your Dev Team Needs
Showed a dev team why Google couldn't see half their site. They fixed it in two days. Here's the checklist I gave them.

Your dev team built a great product. Fast. Responsive. Users love it.
But Google? Google can't even see half of it.
That's the problem with most SaaS sites. Devs optimize for users, not crawlers. And when you're running a JavaScript-heavy Single Page Application (SPA), those are two different problems.
This isn't a blog post about why technical SEO matters. You already know it does. This is the checklist you hand to your dev team so they can actually fix it.
No fluff. No "best practices." Just the stuff that breaks your rankings.
Why SaaS SEO is different
Most SEO guides assume you're running WordPress. You're not.
Your marketing site might be Next.js. Your app is a React SPA. You've got authenticated routes, client-side routing, soft 404s everywhere, and your sitemap includes URLs that return blank pages to Googlebot.
Standard SEO advice doesn't work here.
The good news? Once you fix the technical foundation, SaaS sites rank faster than content sites. Clean URL structure, fast load times, and structured data give you an edge.
The bad news? If you skip even one of these steps, you're invisible.
JavaScript rendering: the thing breaking your indexing
Google can run JavaScript. But it doesn't want to.
Here's how it works. Googlebot crawls your HTML. If it sees JavaScript, it queues your page for rendering. Later (sometimes days later), it runs a headless Chrome browser to execute your JS. Only then does it see your actual content.
If your critical content lives client-side, you're gambling on whether Google will bother rendering it. And if you're getting a non-200 status code before rendering (like a redirect or 404), Google might skip rendering entirely.
The fix: Server-Side Rendering (SSR) or Static Site Generation (SSG)
For any page you want ranked, render the HTML server-side. Not "hybrid." Not "dynamic rendering." Full SSR or SSG.
Next.js example:
```javascript
// Good - Server component (default in Next.js 13+).
// fetchData is a placeholder for your own data-fetching function.
export default async function Page() {
  const data = await fetchData()
  return <div>{data.title}</div>
}
```

```javascript
// Bad - Client-side only. Crawlers may see an empty <div>
// until the JS runs (if it ever does).
'use client'
import { useState, useEffect } from 'react'

export default function Page() {
  const [data, setData] = useState(null)
  useEffect(() => { fetchData().then(setData) }, [])
  return <div>{data?.title}</div>
}
```

If you must use client-side rendering, make sure your initial HTML includes links in real <a href> tags (not onClick divs), meta tags in the <head> (not injected by JS), and at least some visible content crawlers can index.
Don't use URL fragments for navigation. Google ignores #/page/2. Use real URLs with the History API (/page/2).
Soft 404s will kill your crawl budget
Your SPA returns 200 OK for every URL, even /this-page-doesnt-exist.
Google sees that as a valid page. It crawls it. Indexes it. Wastes your crawl budget on garbage.
Fix it by returning proper 404 status codes from your server for non-existent routes. Or inject a <meta name="robots" content="noindex"> tag on client-rendered error pages.
But never put noindex in your initial HTML if you want the page indexed. Google might see it before your JS removes it.
Core Web Vitals: what actually matters
Google cares about three metrics.
Largest Contentful Paint (LCP) measures how fast your main content loads. Cumulative Layout Shift (CLS) measures how much your page jumps around. Interaction to Next Paint (INP) measures how responsive your page is to clicks.
INP replaced First Input Delay (FID) in March 2024. If you're still optimizing for FID, stop.
INP is the hardest one
INP measures how long it takes your page to respond to every user interaction. Not just the first click, every click.
If your SPA has long-running JavaScript tasks, your INP score is trash.
How to fix it:
- Break up long tasks (anything over 50ms blocks the main thread).
- Defer non-critical JS (use async or defer on script tags).
- Optimize event handlers (don't run heavy logic on every click).
- Use code splitting (smaller bundles = faster parsing).
Tools:
- Chrome DevTools Performance panel (look for "Long Tasks").
- Lighthouse (check Total Blocking Time as a proxy for INP).
- Google Search Console (real user data under "Core Web Vitals").
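One common way to break up a long task is to process work in chunks and yield between them. A plain-JavaScript sketch (function and parameter names are illustrative):

```javascript
// Process a big array in small chunks, yielding to the event loop
// between chunks so no single task blocks the main thread for long.
async function processInChunks(items, handleItem, chunkSize = 50) {
  for (let i = 0; i < items.length; i += chunkSize) {
    for (const item of items.slice(i, i + chunkSize)) {
      handleItem(item);
    }
    // Yield the main thread before the next chunk so input
    // handlers (clicks, keypresses) can run in between.
    await new Promise((resolve) => setTimeout(resolve, 0));
  }
}
```

In browsers that support it, scheduler.yield() is a more direct way to yield, but the setTimeout(0) pattern works everywhere.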
LCP: optimize your hero image
Your LCP element is usually your hero image or headline. Load it fast.
Quick wins:
- Use modern image formats (WebP or AVIF).
- Preload your LCP image with <link rel="preload" as="image" href="/hero.webp">.
- Serve images from a CDN.
- Lazy-load everything except above-the-fold content.
CLS: stop shifting layout
If your page jumps when fonts load, ads appear, or images render, you're losing points.
Set width and height on images so the browser reserves space. Use font-display: swap carefully (or preload fonts). Reserve space for dynamic content (ads, banners, etc.).
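For example (dimensions and paths are placeholders):

```html
<!-- Explicit dimensions let the browser reserve space before the image loads -->
<img src="/hero.webp" width="1200" height="630" alt="Product screenshot">

<!-- Reserve a fixed slot for late-loading content so it can't push the page down -->
<div style="min-height: 90px">
  <!-- banner injected here by JS -->
</div>
```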
Target: LCP under 2.5s, CLS under 0.1, INP under 200ms.
Crawlability: let Google see your site
robots.txt is not for hiding pages
robots.txt tells crawlers what not to crawl. It does NOT prevent indexing.
If you disallow a URL in robots.txt, Google can still index it if someone links to it. You'll just see "No information is available for this page" in search results.
What to block with robots.txt:
- Internal search results (/search?q=).
- Filtered pages with no SEO value (/products?sort=price&filter=color).
- Staging and admin areas.

What NOT to block:
- CSS and JavaScript files (Google needs them to render your page).
- Pages you want de-indexed (use noindex instead).
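Put together, a minimal robots.txt for a SaaS marketing site might look like this (paths are examples, not recommendations for your exact site):

```
User-agent: *
Disallow: /search
Disallow: /*?sort=
Disallow: /admin/

Sitemap: https://example.com/sitemap.xml
```

Note there's no Disallow for CSS or JS paths, and nothing here is a substitute for noindex.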
Use noindex to control indexing
If you don't want a page in Google, add this to the <head>:
```html
<meta name="robots" content="noindex">
```

Or use the HTTP header:

```
X-Robots-Tag: noindex
```

Critical rule: Don't block a noindex page with robots.txt. Google needs to crawl it to see the noindex tag.
Canonicalization: pick one URL
Duplicate content wastes crawl budget and splits ranking signals.
Use rel="canonical" to tell Google which version of a page to index:
```html
<link rel="canonical" href="https://example.com/page">
```

Common mistakes:
- Canonicalizing paginated pages to page 1 (don't; each page should be self-referencing).
- Using relative URLs instead of absolute.
- Sending conflicting signals (canonical to A, redirect to B).

Best practice:
- Every page should have a canonical tag (even if it points to itself).
- Use 301 redirects to consolidate old URLs.
- Canonical tags should be in the initial HTML, not injected by JS.
Sitemaps: help Google find your pages
Your sitemap is a list of URLs you want indexed. Make it accurate.
Rules:
- Max 50,000 URLs per sitemap file.
- Max 50MB uncompressed.
- Use a sitemap index if you need multiple files.

What to include:
- Only canonical URLs.
- Only pages that return 200 OK.
- Only pages you want indexed.

What to exclude:
- Redirected URLs.
- Noindexed pages.
- Error pages (404, 500).
- Duplicate content.
- Login, cart, search result pages.
Use <lastmod> correctly (or don't use it)
Google only trusts <lastmod> if it's accurate. If you update the date every time you regenerate the sitemap (without changing content), Google ignores it.
Rule: Only update <lastmod> when you actually change the page content.
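A minimal sitemap entry with an honest <lastmod> (URL and date are placeholders):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://example.com/pricing</loc>
    <!-- Only bump this when the page content actually changes -->
    <lastmod>2024-06-01</lastmod>
  </url>
</urlset>
```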
Submit your sitemap to Google Search Console and add it to your robots.txt:
```
Sitemap: https://example.com/sitemap.xml
```

Structured data: make your listings stand out
Schema markup won't help you rank, but it makes your search listings richer.
For SaaS, focus on two types.
1. Organization Schema (Homepage)
```json
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "name": "Your SaaS",
  "url": "https://example.com",
  "logo": "https://example.com/logo.png",
  "description": "What you do"
}
```

2. Product + Offer Schema (Pricing Pages)
```json
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Pro Plan",
  "description": "Advanced features",
  "offers": {
    "@type": "Offer",
    "price": "49.00",
    "priceCurrency": "USD",
    "availability": "https://schema.org/InStock"
  }
}
```

This makes you eligible for merchant listings with price and availability shown in search results.
Where to put it:
- In the initial HTML (not injected by JS).
- At the bottom of the <body> in a <script type="application/ld+json"> tag.

Test it:
- Google Rich Results Test.
- Google Search Console > Enhancements.
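Concretely, the Organization schema from earlier gets embedded in the page like this:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "name": "Your SaaS",
  "url": "https://example.com"
}
</script>
```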
SaaS-specific issues
App vs. marketing site
Your app (app.example.com) shouldn't be indexed. It's behind authentication. It has no SEO value.
Your marketing site (www.example.com) should be fully indexable.
How to separate them:
- App: return 401 Unauthorized for unauthenticated requests.
- Marketing: fully crawlable, no auth walls.
Don't let Google waste crawl budget on session-specific app routes.
Faceted navigation
Filters and sorting create thousands of URLs. /products?color=red. /products?color=red&size=large. /products?color=red&size=large&sort=price.
This is index bloat. Most of these pages have no search demand.
Strategy:
- Block low-value combinations with robots.txt.
- Noindex filtered pages with no unique value.
- Canonicalize near-duplicates to the main category page.
- Optimize high-demand filter combos as dedicated landing pages.
Pagination
Each page in a series should have a unique URL and be self-referencing:
```html
<!-- On /category?page=2 -->
<link rel="canonical" href="https://example.com/category?page=2">
```

Don't canonicalize all pages to page 1. That tells Google to ignore pages 2+.
Link pages together with standard <a href> tags:
```html
<a href="/category?page=3">Next</a>
```

Don't use #page=3 or "Load More" buttons without fallback links.
HTTPS and security
HTTPS is a ranking signal, if a lightweight one. Google prefers HTTPS pages over HTTP.
Checklist:
- Implement site-wide HTTPS (including images, CSS, JS).
- 301 redirect all HTTP URLs to HTTPS.
- Use the HSTS header to force HTTPS in browsers.
- Fix mixed content warnings (resources loaded over HTTP on HTTPS pages).
- Keep SSL certificates current.
Don't accidentally block Googlebot with your CSP.
If you use a Content Security Policy header, make sure it allows Google's rendering domains. Test with Google's URL Inspection tool.
The checklist (copy this to your dev team)
Rendering & Indexing:
- Use SSR/SSG for all marketing and content pages.
- Serve real HTML to crawlers (not blank pages that need JS).
- Use <a href> tags for navigation, not onClick handlers.
- Return proper 404 status codes (not soft 404s).
- Meta tags in initial HTML, not injected by JS.

Performance:
- LCP under 2.5s (optimize hero images, use CDN).
- INP under 200ms (break up long JS tasks).
- CLS under 0.1 (set image dimensions, reserve space for dynamic content).
- Preload critical resources (fonts, hero images).
- Use modern image formats (WebP/AVIF).

Crawl Control:
- Don't block CSS/JS in robots.txt.
- Use noindex for pages you don't want indexed.
- Canonical tags on every page (in initial HTML).
- 301 redirects for old/duplicate URLs.
- Sitemap with only canonical, indexable URLs.

Structured Data:
- Organization schema on homepage.
- Product + Offer schema on pricing pages.
- Schema in initial HTML (not injected by JS).
- Validated with Google Rich Results Test.

SaaS-Specific:
- App routes return 401 for unauthenticated users.
- Block/noindex low-value filtered pages.
- Paginated pages have unique URLs and self-referencing canonicals.
- Site-wide HTTPS with 301 redirects from HTTP.

Monitoring:
- Google Search Console set up.
- Core Web Vitals passing for key pages.
- No soft 404s or indexing errors in GSC.
- Sitemap submitted and processing.
But technical SEO won't rank you alone
Here's the thing.
You can have perfect technical SEO. Fast site. Clean code. Flawless indexing.
And you still won't rank.
Because technical SEO is table stakes. It gets you in the game. It doesn't win the game.
What wins? Authority.
And for SaaS, authority means backlinks from sites Google trusts. Wikipedia. Reddit. Hacker News. Industry publications.
The problem? You can't just ask for those links. You have to earn them. And earning them takes years of content creation, outreach, and hoping someone notices.
There's a faster way
That's where Revised comes in.
We don't build links. We find them.
We acquire dead domains that already have backlinks from authoritative sources. Then we redirect those links to your site. Contextual, relevant, high-authority backlinks that transfer real ranking power.
Your dev team handles the technical SEO. We handle the authority.
How it works:
- We crawl Wikipedia, Reddit, HN, and other trusted sources for broken links.
- We acquire the dead domains those links point to.
- We redirect the contextual backlinks to your site.
- You rank higher. Faster.
Technical SEO gets Google to see your site. Authority gets Google to rank it.
You need both.
See how it works or get started now.
TL;DR:
- Use SSR/SSG for pages you want ranked.
- Fix soft 404s and serve proper status codes.
- Optimize Core Web Vitals (especially INP).
- Use noindex for pages you don't want indexed (not robots.txt).
- Add canonical tags to every page (in initial HTML).
- Implement Organization and Product schema.
- Block or noindex low-value filtered pages.
- Make your app routes return 401 for unauthenticated requests.
- Technical SEO is necessary but not sufficient. You need authority too.