Hi! Search Console says Googlebot has been blocked by my robots.txt even though the file allows access. Has anyone had the same issue?
Could you help me?
File link: www.thegamingproject.co/robots.txt
There is something wrong with your robots.txt.
If you want Googlebot to crawl all of your website's pages, you can update it as follows:
User-agent: *
Allow: /
Sitemap: https://www.thegamingproject.co/sitemap.xml
For example, see this website's robots.txt (clipartkey): https://www.clipartkey.com/robots.txt
Keep in mind that if you want traffic from Google, don't block Googlebot; otherwise it's hard to get any search traffic.
Hi! Thanks for the reply!
So this is what my txt file says:
User-agent: *
Disallow:
Which basically means any bot is allowed to crawl. I checked with The Web Robots Pages and that's exactly what it allows. Every other checker on the net says the file grants access, but the live URL test on Google says the bot is blocked.
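For anyone who wants to double-check a robots.txt programmatically, here is a minimal sketch using Python's built-in urllib.robotparser (assuming the live file at that URL is reachable; this is not what Google's live URL test does internally, just an independent check):

from urllib.robotparser import RobotFileParser

# Point the parser at the live robots.txt and fetch it.
parser = RobotFileParser()
parser.set_url("https://www.thegamingproject.co/robots.txt")
parser.read()

# An empty "Disallow:" under "User-agent: *" blocks nothing,
# so this should print True for Googlebot (and any other agent).
print(parser.can_fetch("Googlebot", "https://www.thegamingproject.co/"))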
You are wrong.
Disallow means "not allowed": bots can't access your website.
Allow means "allowed": bots can access your website.
User-agent: * means "any bot".
The "Disallow: /" tells the robot that it should not visit any pages on the site.
Please check the article carefully.
Oh, all right. I've updated it: www.thegamingproject.co/robots.txt
Is this all right?
Again thanks so much
Well done. Hope Google likes your site…
This was incorrect advice. The OP showed that his robots.txt contained
User-agent: *
Disallow:
Which seems to be the default for Webflow.
"Disallow:" (with no slash) means "disallow nothing", which is equivalent to "Allow: /" or "allow all".
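To make the difference concrete, here are the two forms side by side (the # lines are just annotations):

# Blocks nothing: every bot may crawl everything
User-agent: *
Disallow:

# Blocks everything: "/" matches every path on the site
User-agent: *
Disallow: /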