Technical SEO is the foundation that makes your content discoverable. Before worrying about keywords or backlinks, you need to ensure search engines can find, crawl, and understand your pages.
This guide covers the essential technical SEO elements every website needs—with practical code examples for Next.js and other modern frameworks.
What is Technical SEO?
Technical SEO focuses on the infrastructure that helps search engines access and understand your content:
- Crawlability - Can search engines find your pages?
- Indexability - Will search engines add your pages to their index?
- Understandability - Do search engines know what your content is about?
Content SEO (keywords, topics, quality) matters, but only after technical SEO is solid. Think of it like building a house: technical SEO is the foundation; content is the interior design.
Essential Meta Tags
Meta tags live in your HTML <head> and tell browsers and search engines about your page.
Title Tag
The most important tag in your <head> (technically an HTML element rather than a meta tag). It appears in:
- Browser tabs
- Search engine results
- Social shares (as fallback)
```html
<title>Technical SEO Fundamentals | KuduTek Blog</title>
```
Best practices:
- Keep under 60 characters (Google truncates longer titles)
- Put important keywords near the beginning
- Make each page's title unique
- Include your brand name (usually at the end)
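The length and uniqueness rules above can be checked programmatically. Here's a minimal sketch (the helper name and 60-character cutoff are illustrative — Google actually truncates by pixel width, so treat the limit as an approximation):

```typescript
interface TitleIssue {
  title: string
  problems: string[]
}

// Flag titles that risk truncation in search results or duplicate
// another page's title. maxLength is an approximation, not a hard rule.
function auditTitles(titles: string[], maxLength = 60): TitleIssue[] {
  const counts = new Map<string, number>()
  for (const t of titles) counts.set(t, (counts.get(t) ?? 0) + 1)

  return titles
    .map((title) => {
      const problems: string[] = []
      if (title.length > maxLength) problems.push('too long')
      if ((counts.get(title) ?? 0) > 1) problems.push('duplicate')
      return { title, problems }
    })
    .filter((issue) => issue.problems.length > 0)
}
```

Run this over your site's generated titles in CI and you'll catch duplicates before Google does.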
In Next.js:
```ts
import type { Metadata } from 'next'

export const metadata: Metadata = {
  title: 'Technical SEO Fundamentals | KuduTek Blog',
}
```
Meta Description
A summary that appears in search results below your title. It doesn't directly affect rankings, but it does influence click-through rate.
```html
<meta name="description" content="Master the technical foundations of SEO. Learn how to implement meta tags, sitemaps, and Search Console for your website." />
```
Best practices:
- Keep under 160 characters
- Include a call to action when appropriate
- Accurately describe the page content
- Make each description unique
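If descriptions are generated from post excerpts, it's worth enforcing the length budget automatically. A hedged sketch (the helper name is hypothetical, and 160 characters is a guideline — Google may show more or less depending on the query):

```typescript
// Trim a meta description to the character budget, cutting at a word
// boundary and appending an ellipsis rather than chopping mid-word.
function truncateDescription(text: string, maxLength = 160): string {
  if (text.length <= maxLength) return text
  const slice = text.slice(0, maxLength - 1)
  const lastSpace = slice.lastIndexOf(' ')
  return (lastSpace > 0 ? slice.slice(0, lastSpace) : slice) + '…'
}
```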
In Next.js:
```ts
export const metadata: Metadata = {
  description: 'Master the technical foundations of SEO...',
}
```
Canonical URL
Tells search engines the "official" URL for a page when multiple URLs show the same content.
```html
<link rel="canonical" href="https://kudutek.com/blog/web-optimization/seo/technical-seo-fundamentals" />
```
When to use:
- Pages accessible via multiple URLs (with/without www, trailing slashes)
- Paginated content
- Content syndicated to other sites
- URL parameters for tracking
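The variants listed above can all be collapsed to one canonical form with a small normalizer. A minimal sketch, assuming you prefer the bare apex domain and want to strip common tracking parameters (the parameter blocklist is an assumption — adjust it to your site):

```typescript
// Collapse common URL variants (www prefix, trailing slash, tracking
// parameters) into a single canonical form.
function canonicalize(input: string): string {
  const url = new URL(input)

  // Assumption: the bare apex domain is preferred over www.
  url.hostname = url.hostname.replace(/^www\./, '')

  // Drop common tracking parameters (illustrative blocklist).
  for (const key of Array.from(url.searchParams.keys())) {
    if (key.startsWith('utm_') || key === 'ref' || key === 'fbclid') {
      url.searchParams.delete(key)
    }
  }

  // Remove the trailing slash, except for the root path.
  if (url.pathname.length > 1 && url.pathname.endsWith('/')) {
    url.pathname = url.pathname.slice(0, -1)
  }
  return url.toString()
}
```

Whichever form this function emits should be the one you declare in the canonical tag.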
In Next.js:
```ts
export const metadata: Metadata = {
  alternates: {
    canonical: '/blog/web-optimization/seo/technical-seo-fundamentals',
  },
}
```
Robots Meta Tag
Controls how search engines handle the page:
```html
<!-- Allow indexing (default) -->
<meta name="robots" content="index, follow" />

<!-- Block indexing -->
<meta name="robots" content="noindex, nofollow" />

<!-- Index but don't follow links -->
<meta name="robots" content="index, nofollow" />
```
In Next.js:
```ts
export const metadata: Metadata = {
  robots: {
    index: true,
    follow: true,
    googleBot: {
      index: true,
      follow: true,
      'max-video-preview': -1,
      'max-image-preview': 'large',
      'max-snippet': -1,
    },
  },
}
```
Open Graph Tags (Social Sharing)
Open Graph tags control how your content appears when shared on Facebook, LinkedIn, and other platforms.
Essential OG Tags
```html
<meta property="og:title" content="Technical SEO Fundamentals" />
<meta property="og:description" content="Master the technical foundations of SEO..." />
<meta property="og:url" content="https://kudutek.com/blog/web-optimization/seo/technical-seo-fundamentals" />
<meta property="og:type" content="article" />
<meta property="og:image" content="https://kudutek.com/api/og?title=Technical%20SEO" />
```
Twitter Cards
```html
<meta name="twitter:card" content="summary_large_image" />
<meta name="twitter:title" content="Technical SEO Fundamentals" />
<meta name="twitter:description" content="Master the technical foundations..." />
<meta name="twitter:image" content="https://kudutek.com/api/og?title=Technical%20SEO" />
```
In Next.js:
```ts
export const metadata: Metadata = {
  openGraph: {
    title: 'Technical SEO Fundamentals',
    description: 'Master the technical foundations of SEO...',
    url: '/blog/web-optimization/seo/technical-seo-fundamentals',
    type: 'article',
    images: [
      {
        url: '/api/og?title=Technical%20SEO%20Fundamentals',
        width: 1200,
        height: 630,
        alt: 'Technical SEO Fundamentals',
      },
    ],
  },
  twitter: {
    card: 'summary_large_image',
    title: 'Technical SEO Fundamentals',
    description: 'Master the technical foundations of SEO...',
  },
}
```
Pro tip: Create dynamic OG images instead of static ones. See our Structured Data guide for implementation details.
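Whether the OG image endpoint is static or dynamic, the title must be URL-encoded when building the image URL — a raw space or ampersand will break the query string. A minimal sketch (the `/api/og` path follows the examples above; the endpoint implementation itself is out of scope here):

```typescript
// Build a dynamic OG image URL with a safely encoded title parameter.
function ogImageUrl(baseUrl: string, title: string): string {
  return `${baseUrl}/api/og?title=${encodeURIComponent(title)}`
}
```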
Sitemap.xml
A sitemap tells search engines about all the pages on your site and when they were last updated.
Basic Sitemap Structure
```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://kudutek.com/</loc>
    <lastmod>2025-12-22</lastmod>
    <changefreq>weekly</changefreq>
    <priority>1.0</priority>
  </url>
  <url>
    <loc>https://kudutek.com/blog</loc>
    <lastmod>2025-12-22</lastmod>
    <changefreq>daily</changefreq>
    <priority>0.8</priority>
  </url>
</urlset>
```
Dynamic Sitemap in Next.js
```ts
// src/app/sitemap.ts
import { MetadataRoute } from 'next'
import { getAllPosts } from '@/lib/blog'

export default function sitemap(): MetadataRoute.Sitemap {
  const posts = getAllPosts()

  const blogUrls = posts.map((post) => ({
    url: `https://kudutek.com/blog/${post.category}/${post.slug}`,
    lastModified: new Date(post.date),
    changeFrequency: 'monthly' as const,
    priority: 0.7,
  }))

  return [
    {
      url: 'https://kudutek.com',
      lastModified: new Date(),
      changeFrequency: 'weekly',
      priority: 1,
    },
    {
      url: 'https://kudutek.com/blog',
      lastModified: new Date(),
      changeFrequency: 'daily',
      priority: 0.8,
    },
    ...blogUrls,
  ]
}
```
Submitting Your Sitemap
1. Go to Google Search Console
2. Select your property
3. Navigate to Sitemaps (under Indexing)
4. Enter your sitemap URL: https://yoursite.com/sitemap.xml
5. Click Submit
Robots.txt
Robots.txt tells search engine crawlers which parts of your site they can access.
Basic Structure
```txt
# Allow all crawlers to access everything
User-agent: *
Allow: /

# Point to sitemap
Sitemap: https://kudutek.com/sitemap.xml
```
Common Patterns
```txt
# Block a specific directory
User-agent: *
Disallow: /admin/
Disallow: /api/
Disallow: /private/

# Block specific file types
User-agent: *
Disallow: /*.pdf$

# Allow specific crawler full access
User-agent: Googlebot
Allow: /
```
In Next.js
```ts
// src/app/robots.ts
import { MetadataRoute } from 'next'

export default function robots(): MetadataRoute.Robots {
  return {
    rules: {
      userAgent: '*',
      allow: '/',
      disallow: ['/api/', '/admin/'],
    },
    sitemap: 'https://kudutek.com/sitemap.xml',
  }
}
```
Warning: Don't use robots.txt to hide sensitive content. It's publicly visible and only a suggestion—malicious bots ignore it. Use authentication for truly private content.
Google Search Console Setup
Search Console is your window into how Google sees your site.
Getting Started
1. Go to Google Search Console
2. Click "Add Property"
3. Choose verification method:
   - Domain (recommended): Add DNS TXT record
   - URL prefix: Add HTML file or meta tag
Essential Reports to Monitor
Performance Report:
- Total clicks, impressions, CTR, average position
- Which queries bring traffic
- Which pages perform best
Coverage Report:
- How many pages are indexed
- Errors preventing indexing
- Pages excluded and why
Core Web Vitals:
- LCP, INP (which replaced FID in 2024), and CLS scores
- URLs needing improvement
- Mobile vs desktop performance
Links Report:
- External sites linking to you
- Most linked pages
- Internal linking structure
Common Issues and Fixes
| Issue | Cause | Solution |
|-------|-------|----------|
| "Discovered - not indexed" | Low quality or duplicate | Improve content, add canonical |
| "Crawled - not indexed" | Content deemed not valuable | Enhance content quality |
| "Blocked by robots.txt" | Robots.txt too restrictive | Update robots.txt rules |
| "404 not found" | Broken links | Fix or redirect URLs |
Internal Linking
Internal links help search engines discover content and understand site structure.
Best practices:
- Link from high-authority pages to important content
- Use descriptive anchor text (not "click here")
- Create logical content hierarchies
- Fix broken internal links regularly
Example:
Learn more about implementing [structured data schemas](/blog/web-optimization/structured-data/how-to-achieve-perfect-seo-score) for rich search results.
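Both best practices above — valid targets and descriptive anchor text — lend themselves to an automated audit. A hedged sketch over markdown content (the path list, markdown input, and "generic text" patterns are all illustrative):

```typescript
interface LinkIssue {
  text: string
  href: string
  problem: string
}

// Scan markdown for internal links ([text](/path)) and flag targets that
// aren't in the known path list, plus generic anchor text like "click here".
function auditInternalLinks(markdown: string, knownPaths: Set<string>): LinkIssue[] {
  const issues: LinkIssue[] = []
  const linkPattern = /\[([^\]]+)\]\((\/[^)]*)\)/g
  let match: RegExpExecArray | null
  while ((match = linkPattern.exec(markdown)) !== null) {
    const [, text, href] = match
    if (!knownPaths.has(href)) {
      issues.push({ text, href, problem: 'broken link' })
    }
    if (/^(click here|here|read more)$/i.test(text.trim())) {
      issues.push({ text, href, problem: 'generic anchor text' })
    }
  }
  return issues
}
```

Running something like this against your post sources catches broken internal links before a crawler does.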
Common SEO Mistakes
- Missing meta descriptions - Every page should have one
- Duplicate titles - Each page needs a unique title
- Blocking important pages - Check robots.txt isn't too restrictive
- Ignoring mobile - Google uses mobile-first indexing
- Slow page speed - Performance affects rankings (see Core Web Vitals guide)
- No HTTPS - SSL is a ranking factor
- Broken links - Audit regularly and fix
Testing Your Implementation
Tools
| Tool | Purpose |
|------|---------|
| Google Rich Results Test | Validate structured data |
| PageSpeed Insights | Performance analysis |
| Mobile-Friendly Test | Mobile compatibility |
| Screaming Frog | Comprehensive site audit |
Manual Checks
- View page source - verify meta tags are present
- Share URL on social - check OG image appears
- Search site:yourdomain.com - see what's indexed
- Check Search Console - monitor for errors
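The "view page source" check can be roughly automated by pulling the title and description out of a fetched page. A quick sketch — a real audit should use a proper HTML parser, since regexes only cover the simple, well-formed case:

```typescript
// Extract the <title> and meta description from an HTML string.
// Regex-based: fine for a quick spot check, not a robust parser.
function extractBasicMeta(html: string): { title?: string; description?: string } {
  const title = html.match(/<title>([^<]*)<\/title>/i)?.[1]
  const description = html.match(
    /<meta\s+name=["']description["']\s+content=["']([^"']*)["']/i
  )?.[1]
  return { title, description }
}
```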
Next Steps
Technical SEO is the foundation, but there's more to web optimization:
- Enhance search appearance: Add structured data for rich results
- Improve speed: Optimize Core Web Vitals
- Measure results: Set up GA4 with proper tracking
- See the big picture: Read our Complete Guide to Web Optimization
Technical SEO isn't glamorous, but it's essential. Get these fundamentals right, and you've built a solid foundation for everything else.
This post is part of the Web Optimization series. Check out the other guides for a complete picture of optimizing your website.