Author: Kellie Watson

  • How to Fix “Discovered – Currently Not Indexed” in Google Search Console

    If you manage a website and regularly check Google Search Console, you may encounter the indexing status “Discovered – currently not indexed.”

    This message can be confusing because it suggests that Google knows about the page but has not yet crawled or indexed it.

    In many cases this status is temporary. However, when pages remain in this state for an extended period, it may indicate underlying technical or structural issues affecting how search engines interact with your website.

    Understanding what this status means and how to diagnose the cause is the first step toward resolving the issue.

    What “Discovered – Currently Not Indexed” Means

    When Google identifies a page, it adds the URL to its crawl queue. This discovery may happen through:

    • XML sitemaps
    • internal links
    • external links
    • previously crawled pages

    When the page appears as:

    Discovered – currently not indexed

    it means Google has found the URL but has not yet crawled it.

    This is different from:

    Crawled – currently not indexed

    where Google has already visited the page but decided not to index it.

    Why Google May Delay Crawling a Page

    There are several reasons Google may delay crawling pages that it has discovered.

    Crawl Budget Limitations

    Google allocates a crawl budget to each website, which determines how many pages it crawls within a given timeframe.

    If a site contains many low-value or duplicate pages, Google may delay crawling other pages until it determines which content is worth prioritising.

    This issue is more common on larger websites but can also occur when site structure is inefficient.

    Weak Internal Linking

    Google often prioritises pages that are well connected through internal links.

    If a newly created page is not linked from important sections of the website, Google may treat it as a low priority.

    Adding internal links from relevant pages can help signal that the content is important.

    Sitemap Configuration Issues

    XML sitemaps help search engines discover new URLs.

    If a page appears as “Discovered – currently not indexed,” it is worth confirming that:

    • the page appears in the XML sitemap
    • the sitemap is submitted to Google Search Console
    • the sitemap does not contain large numbers of low-value pages

    Clean, well-structured sitemaps help search engines prioritise crawling.
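    As a quick sanity check, you can confirm that a page actually appears in your sitemap before assuming Google knows about it. The sketch below uses only Python's standard library; the sitemap content and URLs are placeholders for illustration, not a real site.

```python
# Check whether a page URL appears in an XML sitemap (illustrative example).
import xml.etree.ElementTree as ET

# Placeholder sitemap content; in practice you would fetch /sitemap.xml.
sitemap_xml = """<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>https://example.com/</loc></url>
  <url><loc>https://example.com/services/</loc></url>
</urlset>"""

NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

def sitemap_urls(xml_text):
    """Return all <loc> values from a sitemap document."""
    root = ET.fromstring(xml_text)
    return [loc.text.strip() for loc in root.findall(".//sm:loc", NS)]

urls = sitemap_urls(sitemap_xml)
print("https://example.com/services/" in urls)  # True
```

    If the page is missing from the list, adding it to the sitemap (or fixing the plugin that generates it) is a natural first step.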

    Slow Website Performance

    If a website is slow or unstable, Google may reduce its crawl activity.

    Server response times, excessive scripts, or poorly optimised hosting environments can all affect how frequently Google crawls a site.

    Improving site performance can sometimes increase crawl activity.

    Low Site Authority or Trust Signals

    For newer websites or sites with limited authority, Google may crawl pages less frequently.

    As the site gains links, traffic and overall trust signals, crawl frequency often improves.

    How to Diagnose the Issue

    If pages remain in the “Discovered – currently not indexed” state for a long period, it is worth reviewing several technical factors.

    Start by checking:

    • internal linking to the page
    • whether the page is included in the XML sitemap
    • server response times and site performance
    • crawl activity in Search Console
    • overall site quality signals

    These checks often reveal why Google has delayed crawling the page.

    How to Encourage Google to Crawl the Page

    While you cannot force Google to crawl a page immediately, several steps can increase the likelihood that it will be crawled sooner.

    Helpful actions include:

    • adding internal links from other relevant pages
    • ensuring the page is listed in your XML sitemap
    • improving page performance and load speed
    • requesting indexing through Google Search Console

    In many cases, Google will crawl the page naturally once these signals are improved.

    When Technical Investigation May Be Required

    If a large number of pages remain undiscovered or uncrawled, the issue may be related to broader technical SEO problems.

    These may include:

    • inefficient site architecture
    • crawl budget issues
    • large numbers of low-value pages
    • server performance problems
    • sitemap or indexing configuration issues

    Diagnosing these problems often requires a detailed technical review of how search engines interact with the site.

    Need Help Diagnosing Indexing Issues?

    If your website has many pages stuck in the “Discovered – currently not indexed” state and the cause is unclear, a deeper technical investigation may be required.

    MarkIT Space provides technical SEO audits and diagnostics to identify and resolve indexing issues affecting search visibility.

  • Why Your Website Is Crawled but Not Indexed

    Seeing pages listed as “Crawled – currently not indexed” in Google Search Console can be confusing and frustrating.

    This status means that Google has successfully crawled the page but has decided not to include it in its search index. As a result, the page will not appear in Google search results.

    While this situation does not necessarily indicate a technical error, it often signals that something about the page or the website is preventing Google from considering it valuable enough to index.

    Understanding why this happens is the first step toward resolving the issue.

    What “Crawled – Currently Not Indexed” Means

    When Googlebot visits a page, it analyses the content and decides whether the page should be included in Google’s index.

    If Google crawls a page but does not add it to the index, it may appear in Search Console with the status:

    Crawled – currently not indexed

    This means:

    • Google successfully accessed the page
    • the page was analysed by Google
    • Google chose not to index it at this time

    The page may still be indexed in the future, but it is not currently eligible to appear in search results.

    Common Reasons Pages Are Crawled but Not Indexed

    There are several reasons why Google may crawl a page but decide not to index it.

    Thin or Low-Value Content

    Pages with minimal or repetitive content may not be considered valuable enough to include in Google’s index.

    Examples include:

    • very short pages with little information
    • pages that repeat similar content across multiple URLs
    • automatically generated pages with little unique value

    Improving the depth and usefulness of the content may increase the likelihood of indexing.

    Duplicate Content

    If Google detects multiple pages with very similar content, it may choose to index only one version.

    This often occurs when:

    • pages have similar titles and content
    • product or service pages are duplicated
    • parameters create multiple versions of the same page

    Canonical tags and consistent site structure help prevent this issue.
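    To see which URL a page declares as canonical, you can inspect its HTML directly. This is a minimal sketch using Python's standard library; the HTML string is a placeholder for illustration.

```python
# Extract the canonical URL from an HTML page using only the standard library.
from html.parser import HTMLParser

class CanonicalParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.canonical = None

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        # <link rel="canonical" href="..."> declares the preferred URL.
        if tag == "link" and a.get("rel") == "canonical":
            self.canonical = a.get("href")

# Placeholder HTML; in practice you would fetch the live page source.
html = '<html><head><link rel="canonical" href="https://example.com/page/"></head></html>'
parser = CanonicalParser()
parser.feed(html)
print(parser.canonical)  # https://example.com/page/
```

    If the canonical URL points at a different page than expected, Google may index that other page instead.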

    Weak Internal Linking

    Google often discovers and prioritises pages through internal links.

    If a page is not linked from important parts of the site, Google may treat it as low priority.

    Adding internal links from relevant pages can help search engines better understand the importance of the content.

    Crawl Budget and Site Quality Signals

    On larger websites, Google may limit how many pages it chooses to index based on overall site quality.

    If a site contains many low-value or duplicate pages, Google may delay indexing other pages until it determines which content is worth indexing.

    Improving overall site quality can help address this issue.

    Recently Published Pages

    New pages sometimes appear in this state temporarily while Google evaluates them.

    In many cases, pages are indexed after additional crawls or once they receive more internal links.

    How to Diagnose the Issue

    If pages remain in the “Crawled – currently not indexed” state for an extended period, a technical review may be necessary.

    Start by checking:

    • the content quality of the page
    • canonical tags and duplicate content
    • internal linking structure
    • sitemap inclusion
    • crawl and indexing reports in Google Search Console

    These factors can often reveal why Google has chosen not to index the page.

    Can You Force Google to Index a Page?

    Submitting a page for indexing in Google Search Console can sometimes encourage Google to revisit it.

    However, if the underlying issue remains unresolved, Google may continue to exclude the page from the index.

    Addressing the root cause of the problem is usually more effective than repeatedly requesting indexing.

    When Technical SEO Investigation Is Needed

    If multiple pages on a website remain unindexed despite being crawled, the issue may be related to deeper technical or structural problems.

    These can include:

    • indexing conflicts across the site
    • duplicate content patterns
    • internal linking issues
    • incorrect canonicalisation
    • broader site quality signals

    Identifying the root cause often requires a detailed technical review of how search engines interact with the website.

    Need Help Diagnosing Indexing Problems?

    If your website has pages stuck in the “Crawled – currently not indexed” state and the cause is unclear, the issue may require a deeper technical investigation.

    MarkIT Space provides technical SEO audits and diagnostics to identify and resolve indexing issues affecting website visibility.

  • Common Technical SEO Issues That Prevent Websites from Ranking

    When a website struggles to rank in search results, the issue is not always related to content or backlinks. In many cases, underlying technical problems prevent search engines from properly crawling, indexing or understanding the website.

    Technical SEO focuses on the structural and technical factors that influence how search engines interact with a site. When these elements are misconfigured, even well-written content may fail to appear in search results.

    This article explains several common technical SEO issues that can prevent websites from ranking and how they can be diagnosed.

    Crawling Problems

    Search engines must first be able to crawl a website before its pages can be indexed.

    Crawling issues may occur when:

    • pages are blocked by robots.txt rules
    • internal linking is weak or incomplete
    • site navigation prevents search engines from discovering pages
    • crawl errors prevent access to important content

    Ensuring search engines can access and navigate a website effectively is a fundamental part of technical SEO.

    Indexing Issues

    Even if a page can be crawled, it may not always be included in Google’s index.

    Common causes of indexing problems include:

    • pages marked with a noindex directive
    • incorrect canonical tags pointing to other pages
    • duplicate content causing Google to ignore pages
    • low-value pages that Google chooses not to index

    Indexing problems are often identified using the Page Indexing report in Google Search Console.

    Site Speed and Performance Problems

    Page speed and performance are important components of both user experience and search visibility.

    Websites with slow loading times or unstable layouts may struggle to meet Google’s Core Web Vitals standards.

    Performance issues may be caused by:

    • large or unoptimised images
    • excessive JavaScript
    • inefficient server response times
    • poorly optimised themes or plugins

    Addressing these issues can significantly improve website performance.

    Structured Data and Schema Markup Errors

    Structured data helps search engines understand the content and purpose of a webpage.

    When schema markup is implemented incorrectly, search engines may ignore it or report errors in Google Search Console.

    Common schema issues include:

    • missing required properties
    • invalid structured data formatting
    • conflicting schema generated by multiple plugins
    • incorrect schema types used for page content

    Resolving these issues helps search engines interpret the website more accurately.

    Duplicate Content and Canonicalisation Issues

    Duplicate content occurs when similar or identical pages appear at multiple URLs.

    This can confuse search engines and prevent them from determining which page should be indexed.

    Common causes include:

    • incorrect canonical tags
    • URL parameters creating duplicate pages
    • category and tag archives generating duplicate content
    • multiple versions of the same page

    Proper canonicalisation helps search engines identify the correct version of a page.

    JavaScript Rendering Problems

    Many modern websites rely heavily on JavaScript to generate page content.

    However, search engines may sometimes struggle to fully render JavaScript-based pages.

    This can lead to situations where:

    • content is not visible to search engines
    • internal links are not discovered
    • structured data is not processed correctly

    Technical analysis is often required to determine whether rendering issues are affecting a website.

    Diagnosing Technical SEO Issues

    Because many technical SEO problems occur behind the scenes, identifying them often requires a detailed technical review.

    Diagnosing these issues typically involves analysing:

    • crawl behaviour
    • indexing reports
    • site performance metrics
    • structured data validation
    • internal linking and site architecture

    A comprehensive technical SEO audit can help identify the root causes of these issues and provide recommendations for resolving them.

    Need Help Diagnosing Technical SEO Issues?

    If your website is struggling to rank and the cause is unclear, the issue may be related to technical SEO.

    MarkIT Space provides technical SEO audits and website diagnostics to identify and resolve issues affecting search visibility.

  • Why Your Core Web Vitals Are Failing (And How to Fix Them)

    Core Web Vitals are performance metrics used by Google to measure the speed, responsiveness and visual stability of a website.

    If your site is failing Core Web Vitals assessments, it can affect both user experience and search visibility. Many websites struggle with these metrics because performance issues often originate from underlying technical factors such as scripts, images, server configuration or inefficient code.

    This guide explains what Core Web Vitals measure, why websites fail them, and what steps can help improve performance.

    What Core Web Vitals Measure

    Core Web Vitals focus on three key metrics that influence how users experience a webpage.

    Largest Contentful Paint (LCP)

    Measures how long it takes for the main content of a page to become visible. A slow LCP indicates that users must wait too long before the page appears to load.

    Interaction to Next Paint (INP)

    Measures how responsive a page is when users interact with it, such as clicking buttons or links.

    Cumulative Layout Shift (CLS)

    Measures visual stability and tracks whether page elements move unexpectedly during loading.

    Together, these metrics help determine whether a page provides a fast and stable user experience.
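    Google publishes "Good" and "Poor" thresholds for each metric, which makes it easy to classify a field measurement yourself. The sketch below encodes those published thresholds; the sample values passed in are illustrative.

```python
# Classify measurements against Google's published Core Web Vitals thresholds.
THRESHOLDS = {
    "LCP": (2.5, 4.0),   # seconds
    "INP": (200, 500),   # milliseconds
    "CLS": (0.1, 0.25),  # unitless layout-shift score
}

def classify(metric, value):
    """Return 'Good', 'Needs Improvement', or 'Poor' for a metric value."""
    good, poor = THRESHOLDS[metric]
    if value <= good:
        return "Good"
    if value <= poor:
        return "Needs Improvement"
    return "Poor"

print(classify("LCP", 3.1))  # Needs Improvement
print(classify("INP", 150))  # Good
print(classify("CLS", 0.3))  # Poor
```

    These are the same boundaries Search Console uses when it groups pages into Good, Needs Improvement, and Poor.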

    How to Check Your Core Web Vitals

    You can check Core Web Vitals performance using several tools.

    Google Search Console

    Search Console provides a Core Web Vitals report showing which pages on your site are classified as Poor, Needs Improvement, or Good.

    PageSpeed Insights

    PageSpeed Insights analyses a specific page and provides detailed recommendations for improving performance.

    Lighthouse

    The Lighthouse performance audit in Chrome DevTools can also help identify issues affecting page speed and user experience.

    Common Causes of Core Web Vitals Problems

    Many websites fail Core Web Vitals due to technical issues within the site’s code or configuration.

    Large or Unoptimised Images

    Large image files can significantly delay page loading times and increase Largest Contentful Paint.

    Images should be compressed and served in modern formats such as WebP where possible.

    Excessive JavaScript

    Heavy JavaScript files can delay page rendering and slow down interaction responsiveness.

    Scripts from analytics tools, marketing platforms and plugins often contribute to performance issues.

    Render-Blocking Resources

    CSS or JavaScript that must load before the page can render may delay the appearance of content.

    Optimising these resources can help improve loading performance.

    Poor Caching Configuration

    Caching helps browsers store resources locally so pages load faster on repeat visits.

    If caching is not configured correctly, the server may need to regenerate resources for every page load.

    Slow Server Response Times

    Server performance can affect how quickly a page begins loading.

    Hosting infrastructure, database queries and inefficient code can all influence response times.

    Plugin Conflicts in WordPress

    WordPress websites often use multiple plugins, which can introduce performance issues if they load excessive scripts or conflict with each other.

    Identifying problematic plugins is often a key step in resolving performance problems.

    Improving Core Web Vitals

    Improving Core Web Vitals typically involves a combination of technical improvements, including:

    • optimising images and media
    • reducing unnecessary JavaScript
    • improving caching and resource delivery
    • removing unused scripts or plugins
    • optimising server performance
    • reviewing theme and plugin behaviour

    Because performance issues often have multiple causes, diagnosing the underlying problem is usually the most important step.

    Need Help Improving Core Web Vitals?

    If your website is failing Core Web Vitals assessments and the cause is unclear, the issue may require deeper technical investigation.

    MarkIT Space provides Core Web Vitals optimisation and technical SEO diagnostics to identify and resolve performance issues affecting website speed and user experience.

  • How to Fix Schema Markup Errors in Google Search Console

    Schema markup helps search engines understand the structure and meaning of website content. When implemented correctly, structured data can make pages eligible for enhanced search results such as review stars, FAQs, product information and other rich results.

    However, schema markup errors are common, particularly when structured data is generated automatically by plugins or implemented incorrectly.

    If Google Search Console reports schema markup errors, it usually means that the structured data on your website is incomplete, invalid or conflicting with other markup on the page.

    This guide explains the most common schema markup errors and how they can be resolved.

    What Schema Markup Errors Mean

    Google Search Console reports structured data issues when it detects problems with the schema markup on your website.

    These problems typically fall into three categories:

    • invalid schema markup
    • missing required fields
    • conflicting structured data

    When these errors occur, the page may still be indexed, but it may not qualify for rich results in search.

    Where to Find Schema Markup Errors

    The easiest way to identify schema markup problems is through Google Search Console.

    You can find structured data issues by navigating to:

    Search Console → Enhancements

    This section lists all detected schema types and highlights any errors or warnings affecting them.

    Each issue typically includes a description of the problem and the pages affected.

    Common Schema Markup Errors

    There are several types of errors that commonly appear in Search Console.

    Missing Required Fields

    Many schema types require specific properties to qualify for rich results.

    For example:

    • product schema may require price and availability
    • review schema may require ratingValue
    • article schema may require headline

    If required fields are missing, Google will flag the structured data as incomplete.
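    A simple pre-check like the following can catch missing properties before you test in Google's tools. The required-field lists below are illustrative examples, not Google's complete requirements for each schema type.

```python
# Check a JSON-LD block for required properties (illustrative field lists).
import json

REQUIRED = {
    "Product": ["name", "offers"],
    "Review": ["reviewRating", "author"],
    "Article": ["headline"],
}

def missing_fields(jsonld_text):
    """Return required properties absent from a JSON-LD object."""
    data = json.loads(jsonld_text)
    required = REQUIRED.get(data.get("@type"), [])
    return [field for field in required if field not in data]

snippet = '{"@context": "https://schema.org", "@type": "Product", "name": "Widget"}'
print(missing_fields(snippet))  # ['offers']
```

    In this example the Product markup declares a name but no offers, so Google would likely flag it as incomplete.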

    Invalid Structured Data Format

    Schema markup must follow the correct syntax and structure.

    Errors may occur when:

    • properties are incorrectly formatted
    • schema types are used incorrectly
    • required nesting is missing

    Even small syntax errors can prevent Google from recognising the markup.

    Conflicting Schema Markup

    Sometimes multiple plugins generate schema markup on the same page.

    This can lead to conflicts such as:

    • duplicate schema types
    • inconsistent structured data
    • overlapping markup

    When this occurs, search engines may ignore the markup entirely.

    Incorrect Schema Type

    Using the wrong schema type for the page content can also cause issues.

    For example:

    • using Product schema on a service page
    • applying Review schema incorrectly
    • marking generic pages as articles

    Choosing the correct schema type is essential for structured data to work properly.

    How to Test Schema Markup

    After fixing schema issues, it is important to test the markup to confirm that it is valid.

    Google provides two tools that are commonly used:

    Rich Results Test

    This tool checks whether the page is eligible for rich results.

    Schema Markup Validator

    This tool validates structured data against schema.org standards.

    Testing helps ensure that the markup is recognised correctly by search engines.

    Why Schema Markup Errors Happen

    Many websites rely on plugins to generate schema markup automatically.

    While these tools are convenient, they often produce incomplete or conflicting structured data, particularly when multiple SEO or schema plugins are installed.

    Other common causes include:

    • outdated schema implementations
    • theme-generated structured data
    • incorrect manual schema code
    • JavaScript-based markup not being rendered correctly

    Diagnosing these issues often requires a detailed review of the structured data being generated across the site.

    Need Help Fixing Schema Markup Errors?

    If your website is experiencing schema markup errors or structured data warnings in Google Search Console, the issue may require deeper technical investigation.

    MarkIT Space provides schema markup implementation and troubleshooting services to diagnose and resolve structured data issues.

    If your website is experiencing schema errors, feel free to get in touch.

  • Why WordPress Pages Are Not Indexing in Google

    If your WordPress pages are not appearing in Google search results, the issue is often related to a technical configuration problem rather than the quality of the content itself.

    Search engines rely on several signals to determine whether a page should be crawled and indexed. When something prevents Google from properly accessing or interpreting your website, pages may remain unindexed even after they have been published.

    This article explains some of the most common reasons WordPress pages fail to index and what you can do to resolve the issue.

    How Google Indexing Works

    Before diagnosing indexing issues, it helps to understand how the process works.

    Google typically follows three steps when discovering new pages:

    1. Crawling – Googlebot discovers pages by following links or reading sitemaps.
    2. Indexing – Google analyses the page content and decides whether to include it in its index.
    3. Ranking – Once indexed, the page may appear in search results for relevant queries.

    If something interferes with the crawling or indexing stages, the page may never appear in search results.

    Common Reasons WordPress Pages Are Not Indexing

    Pages Are Set to “No Index”

    WordPress SEO plugins or theme settings may accidentally mark pages as noindex, which tells search engines not to include them in search results.

    Check:

    • SEO plugin settings
    • page-level SEO settings
    • robots meta tags

    If a page is set to noindex, Google will ignore it.
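    You can confirm whether a page carries a noindex directive by inspecting its HTML source for the robots meta tag. This is a minimal standard-library sketch; the HTML string is a placeholder for illustration.

```python
# Detect a robots meta tag that sets noindex, using only the standard library.
from html.parser import HTMLParser

class RobotsMetaParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.noindex = False

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        # <meta name="robots" content="noindex, ..."> blocks indexing.
        if tag == "meta" and a.get("name", "").lower() == "robots":
            if "noindex" in a.get("content", "").lower():
                self.noindex = True

# Placeholder HTML; in practice you would fetch the live page source.
html = '<html><head><meta name="robots" content="noindex, follow"></head></html>'
checker = RobotsMetaParser()
checker.feed(html)
print(checker.noindex)  # True
```

    Note that Google also honours a noindex sent in the X-Robots-Tag HTTP header, so checking the HTML alone does not cover every case.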

    Search Engines Are Discouraged in WordPress Settings

    WordPress includes a setting that can prevent search engines from indexing the entire site.

    Check the following:

    Settings → Reading → Discourage search engines from indexing this site

    If this option is enabled, Google may not index your pages.

    The Page Is Blocked by Robots.txt

    The robots.txt file controls which parts of a website search engines are allowed to crawl.

    If important pages are disallowed in robots.txt, Google will not be able to access them.
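    Python's standard library can evaluate robots.txt rules for you, which is a quick way to test whether a specific URL is blocked. The rules and URLs below are placeholders for illustration.

```python
# Check whether a URL is blocked for Googlebot by robots.txt rules.
from urllib.robotparser import RobotFileParser

# Placeholder rules; in practice you would fetch /robots.txt from the site.
robots_txt = """User-agent: *
Disallow: /private/
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

print(rp.can_fetch("Googlebot", "https://example.com/private/page"))  # False
print(rp.can_fetch("Googlebot", "https://example.com/blog/post"))     # True
```

    If can_fetch returns False for an important page, the robots.txt rules need to be adjusted before Google can crawl it.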

    The Page Is Not Linked Internally

    Google often discovers new pages through internal links.

    If a page is published but not linked anywhere on the site, Google may not discover it easily.

    Adding links from:

    • blog posts
    • navigation menus
    • category pages

    can help search engines find the page.

    XML Sitemap Issues

    Most WordPress SEO plugins generate an XML sitemap that helps search engines discover new pages.

    Problems may occur if:

    • the page is missing from the sitemap
    • the sitemap is not submitted to Google Search Console
    • the sitemap contains errors

    Checking the sitemap can help confirm whether Google can discover the page.

    Duplicate Content or Canonical Issues

    Sometimes pages appear unique to users but search engines interpret them as duplicates.

    Incorrect canonical tags may tell Google that another page should be indexed instead.

    Google Has Chosen Not to Index the Page

    In some cases, Google crawls a page but decides not to include it in the index.

    This can occur when:

    • the page appears too similar to other content
    • the page is considered low value
    • the site has broader quality or technical issues

    Google Search Console often provides clues when this happens.

    How to Check If a Page Is Indexed

    To quickly check whether a page is indexed, you can search in Google using:

    site:yourdomain.com/page-url

    If the page does not appear in results, it may not be indexed.

    You can also check the Page Indexing report in Google Search Console, which provides more detailed information about why pages may not be indexed.

    Diagnosing WordPress Indexing Problems

    If pages remain unindexed despite being published correctly, the issue is often technical.

    Diagnosing the problem may involve checking:

    • robots.txt rules
    • sitemap configuration
    • page-level SEO settings
    • internal linking structure
    • crawl behaviour in Google Search Console

    Because multiple factors can affect indexing, identifying the root cause is the most important step.

    Need Help Fixing WordPress Indexing Issues?

    If your WordPress pages are not indexing and the cause is unclear, the issue may require deeper technical investigation.

    MarkIT Space provides WordPress troubleshooting and technical SEO diagnostics to identify and resolve issues affecting search visibility.

    If your website is experiencing indexing problems, feel free to get in touch.