Introduction to Yin Tao: How should you write the robots file to get a high ranking in Baidu?

How should you write the robots file to rank higher in Baidu? Anyone who works in SEO knows that a robots.txt file needs to be prepared in the site's root directory before the site goes live.


What is robots.txt?

When the Baidu spider visits a website, it first checks whether a plain-text file called robots.txt exists under the site's root domain (it is the first file the spider looks for when crawling a site). This file tells the spider which parts of your website it may and may not crawl.

If you never set up a robots.txt file, the spider will crawl everything when it visits, including your backend and your JS and CSS files; in other words, your website is completely transparent to the spider.

Some readers who are new to this may ask: what are the consequences of the backend being crawled?

If the spider crawls your website's backend, the backend's address gets indexed.

The backend login page can then surface in Baidu's search results, and the consequences are easy to imagine: anyone with a little hacking skill could break into your backend in minutes. Isn't that scary?

The general format of robots.txt

User-agent: specifies which search engine's spider the rules that follow apply to, e.g. Baidu (Baiduspider), Google (Googlebot), 360 (360Spider), and so on.

* stands for all search engines.

Disallow: forbids crawling and indexing of the specified path.

For example: the backend folder is named dede, so if I don't want spiders to visit it, I would write: Disallow: /dede/

A path written with a trailing slash, such as /dede/, is an exact match: it blocks that directory and everything inside it.

A path without the trailing slash, such as /dede, is a broad (prefix) match: it blocks everything whose path begins with those characters.

"$" matches the end of the URL.

"*" matches zero or more characters.

Allow: permits crawling. It is usually left out, because anything not disallowed is allowed by default; write it only when you have a special requirement (see the examples below).

#: marks a comment line.
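Putting these directives together, a minimal robots.txt for the dede backend example above might look like this (dede is just the illustrative folder name; substitute your own):

User-agent: *
# keep spiders out of the backend folder; everything else stays crawlable
Disallow: /dede/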

Taking it further

Block entire directories

Goal: block spiders from crawling the inc folder under the root directory and everything in it, as well as the index.html file in the wap directory under the root.

How to write robots.txt:

User-agent: *

Disallow: /inc/ (blocks crawling of everything inside the inc folder)

Disallow: /wap/index.html (blocks crawling of the index.html file in the wap directory)
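If you want to sanity-check plain path rules like these before going live, Python's standard-library urllib.robotparser can evaluate them (it does not implement the * and $ wildcard extensions, so use it only for literal paths). The file paths below are invented for illustration:

from urllib.robotparser import RobotFileParser

rules = """
User-agent: *
Disallow: /inc/
Disallow: /wap/index.html
""".splitlines()

parser = RobotFileParser()
parser.parse(rules)

print(parser.can_fetch("*", "/inc/config.php"))    # False - inside the blocked folder
print(parser.can_fetch("*", "/wap/index.html"))    # False - the blocked file
print(parser.can_fetch("*", "/wap/about.html"))    # True  - the rest of wap is untouched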

Block a directory but still allow certain files inside it

1. Block all spiders from crawling the wap folder under the root directory, but allow the files inside it that end in .html to be crawled (a Python sketch of how a crawler resolves this pair of rules follows the example).

How to write robots.txt:

User-agent: *

Disallow: /wap/ (blocks crawling of everything inside the wap folder)

Allow: /wap/*.html (permits crawling of files ending in .html under wap)
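As promised above, here is a rough Python sketch of how a wildcard-aware crawler might resolve this Allow/Disallow pair. It assumes the more specific Allow rule wins (real engines weigh competing rules in more elaborate ways), and the helper name to_regex is made up for this illustration:

import re

def to_regex(rule: str) -> re.Pattern:
    # Translate robots wildcards: '*' matches any run of characters,
    # and a trailing '$' anchors the match to the end of the URL.
    pattern = re.escape(rule).replace(r"\*", ".*")
    if pattern.endswith(r"\$"):
        pattern = pattern[:-2] + "$"
    return re.compile(pattern)

allow = to_regex("/wap/*.html")
disallow = to_regex("/wap/")

def allowed(path: str) -> bool:
    # Simplified precedence: check the more specific Allow rule first.
    if allow.match(path):
        return True
    return not disallow.match(path)

print(allowed("/wap/news.html"))  # True  - matched by Allow: /wap/*.html
print(allowed("/wap/app.php"))    # False - only matches Disallow: /wap/
print(allowed("/index.html"))     # True  - no rule matches, allowed by default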

2. Block crawling of every folder and file in the root directory whose name begins with the characters "wap". This is where the broad (prefix) match described above comes in; a short demonstration of prefix matching follows example 3.

User-agent: *

Disallow: /wap (a single leading "/" with no trailing slash is enough)

3. Protect private folders or files

Blocking private folders by their full names keeps search engines out, but it also advertises your directory structure: anyone who opens the robots.txt can guess where your backend and management system live. (For that reason, spelling them out in full is rarely done on real sites.) Instead, you can use the broad match to protect important files without revealing their full names.

For example: to protect a folder whose name begins with inli, you could write it as follows. The precondition, of course, is that nothing you do want crawled in your root directory begins with the same characters.

User-agent: *

Disallow: /inli
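Here is the prefix-match demonstration promised in example 2: the broad match is nothing more than "does the path start with these characters?". The sample paths are invented:

# "Disallow: /inli" is a pure prefix match: any path starting with /inli is blocked.
for path in ["/inli/", "/inli-admin/login.php", "/inline/style.css", "/news/inli.html"]:
    verdict = "blocked" if path.startswith("/inli") else "allowed"
    print(path, "->", verdict)
# Note that /inline/style.css is also caught - exactly why the rule only works
# when nothing you DO want crawled begins with the same characters.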

Block dynamic URLs

Sometimes a dynamic page serves the same content as its static counterpart, which leads to duplicate indexing (and hurts how friendly the site looks to the spider).

To block all dynamic URLs, write:

User-agent: *

Disallow: /*?*
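In practice, /*?* boils down to "block any URL that contains a question mark", which is what makes it a catch-all for dynamic URLs. A two-line check (sample paths invented):

for path in ["/list.php?page=2", "/list/page-2.html"]:
    print(path, "->", "blocked" if "?" in path else "crawlable")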

Allow only URLs ending in ".html" to be crawled

User-agent: *

Allow: /*.html$

Disallow: /
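The $ does the real work here: without it, the pattern would also match URLs that merely contain ".html" somewhere in the middle. A quick sketch of the compiled pattern (sample paths invented):

import re

# "Allow: /*.html$" as a regex: any path that ends exactly in ".html".
allow_html = re.compile(r"/.*\.html$")

for path in ["/about.html", "/about.html?from=feed", "/download.zip"]:
    verdict = "crawlable" if allow_html.match(path) else "blocked by Disallow: /"
    print(path, "->", verdict)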

Block dead links

First submit the dead links to the Baidu Webmaster Platform.

Then use robots.txt to stop spiders from crawling them. The syntax is the same as above, and it is best to give the complete path of each dead link:

User-agent: *

Disallow: (full path of the dead link, not the bare domain name)
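For instance, if a deleted article used to live at /news/2015/old-post.html (a made-up path for illustration), the entry would read:

User-agent: *
Disallow: /news/2015/old-post.html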

Block links to pages that do not need a Baidu ranking

Writing method:

Add a rel="nofollow" attribute directly to any link pointing at a page that does not need to rank in Baidu:

<a rel="nofollow" href="page URL">anchor text</a>

Where to put the sitemap in robots.txt

The best place for the sitemap (site map) declaration is at the bottom of robots.txt. Since, as explained above, robots.txt is the first file the spider reads, it will find the sitemap right away and crawl from there.

Sitemap: (your website address)/sitemap.xml

Sitemap: (your website address)/sitemap.html
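Tying the whole article together, a complete robots.txt for a site with a dede backend (the running example) might look like this; example.com stands in for your own domain:

User-agent: *
# block the backend folder
Disallow: /dede/
# block dynamic URLs
Disallow: /*?*
Sitemap: https://www.example.com/sitemap.xml
Sitemap: https://www.example.com/sitemap.html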
