Are You Still Using These Outdated SEO Strategies? | An Inventory of Google SEO Strategies That Have Expired in 2021

 

Recently, I looked at the PRD documents of some SEO peers and found that keyword density and nofollow tags were still in them, and at a relatively high priority. A quick search online turns up outdated and ineffective Google SEO strategies everywhere.

Search technology has been evolving ever since Google went online: from the earliest PageRank (PR) algorithm to today's RankBrain and TensorFlow, and search strategies such as Mobile-First Indexing and E-A-T. Google SEO has developed along with it, from the earliest keyword density tactics to today's Google Search Console, mobile strategy, and voice and video search.

In fact, many SEO strategies no longer work, yet they are still being applied. So, in 2021, it is time to sort out some of the Google SEO strategies that have expired; additions are welcome.

Expired strategy 1: Googlebot cannot crawl JS content

Previous situation

Early search engine crawlers could not parse JavaScript and could only get content from HTML. Therefore, content rendered by JS could not be crawled by Googlebot, and content had to be placed directly in the HTML.

The current situation

Googlebot can now parse simple JavaScript, and a step has been added between crawling and indexing: rendering (parsing JS). This matches the current trend toward the mobile web and addresses the widespread use of JS in mobile web pages.

The following is the current flow chart of how Googlebot crawls, renders, and indexes pages:

But it should be noted that:

  1. Googlebot can only parse simple JS; content that requires user interaction to display, or that relies on more complex JS, still cannot be crawled
  2. If the JS content requires additional requests to fetch, it also wastes crawler resources (crawl budget)

Note: Google document "Understanding JavaScript SEO Basics" https://developers.google.com/search/docs/guides/javascript-seo-basics?hl=en_cn

Current recommendations

If you only consider SEO, output content synchronously with server-side rendering (SSR). If you want to balance performance and SEO, adopt an architecture in which crawlers are served SSR and users are served CSR (client-side rendering), as sketched below.
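
Below is a minimal sketch of that split (often called dynamic rendering), assuming an Express server and a hypothetical prerender() helper that returns fully rendered HTML; a real setup would typically use a prerendering service or an SSR framework instead.

```ts
// Sketch: serve SSR output to crawlers, the CSR shell to regular users.
import express from "express";

const app = express();

// Common crawler user-agent fragments; extend as needed.
const BOT_UA = /Googlebot|Bingbot|DuckDuckBot|Baiduspider/i;

// Hypothetical helper: render the page to static HTML on the server,
// e.g. via a headless browser or an SSR framework.
async function prerender(url: string): Promise<string> {
  return `<html><body><h1>Rendered content for ${url}</h1></body></html>`;
}

app.get("*", async (req, res) => {
  const ua = req.headers["user-agent"] ?? "";
  if (BOT_UA.test(ua)) {
    // Crawlers get synchronous, fully rendered HTML (the SSR path).
    res.send(await prerender(req.originalUrl));
  } else {
    // Regular users get the client-side rendered SPA shell (the CSR path).
    res.sendFile("index.html", { root: "dist" });
  }
});

app.listen(3000);
```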

Expired strategy 2: URLs must be static

Previous situation

In the early days, URLs were supposed to be static, mainly for the following two reasons:

  1. Static page technology appeared earlier than dynamic technology
  2. Dynamic URLs often carry many parameters, such as meaningless or unordered parameters, which are not good for SEO

The current situation

Technically, we are now in the era of dynamic data, and Google declared in 2008 that it treats dynamic and static URLs equally (there is an official document explaining this). For example, the default URL format of WordPress, the world's largest website-building platform, is xxx.com?p=[id].

Note:

  • Google "Dynamic URL and Static URL" https://webmaster-cn.googleblog.com/2008/10/blog-post.html

Current recommendations

The core principles for URLs:

  1. Simple, readable, and easy to click
  2. Each URL should be unique, to avoid diluting link equity
  3. Watch out for URL parameter problems, such as tracking parameters, sorting parameters, and session ID parameters (see the sketch after the notes below)

You can refer to Google's own URL style: https://developers.google.com/search/docs/advanced/guidelines/links-crawlable?hl=zh_cn. The URL is simple and readable: the links-crawlable segment already tells you what the content is about, and the trailing hl=zh_cn parameter indicates the Chinese version of the document.

Note: Google "Keep it simple URL structure" https://developers.google.com/search/docs/advanced/guidelines/url-structure

Expired strategy 3: nofollow to avoid diluting link equity

Previous situation

Nofollow was originally created to deal with untrusted links on a site, such as spam links in blog comment sections. In fact, from the very beginning it was not a mechanism for controlling how link equity flows. Zac and Guoping also discussed this detail at the time; the following is a screenshot:

Note: Zac "Will nofollow waste PR and weight?" https://www.seozac.com/google/nofollow-debate/

The current situation

Google has since introduced more link attributes for finer-grained scenarios: rel="sponsored", rel="ugc", and rel="nofollow". Therefore, rel="nofollow" is not a mechanism for controlling how link equity is passed at all.

Note: Google "Explain to Google the intention of your outbound links" https://developers.google.com/search/docs/advanced/guidelines/qualify-outbound-links?hl=zh_cn

Current recommendations

Choose the appropriate link attribute for each scenario; more importantly, do not expect rel="nofollow" to control how link equity is passed. A small sketch of the choice follows.
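
Here is a minimal sketch of mapping link scenarios to the rel values Google documents; the LinkKind names are illustrative, not part of any standard.

```ts
// Sketch: pick a rel value per link scenario.
type LinkKind = "sponsored" | "ugc" | "untrusted" | "editorial";

// rel value per scenario; editorial links need no rel attribute at all.
const REL_FOR: Record<LinkKind, string | undefined> = {
  sponsored: "sponsored",   // paid or affiliate links
  ugc: "ugc",               // user-generated content (comments, forum posts)
  untrusted: "nofollow",    // links you do not want to vouch for
  editorial: undefined,     // normal editorial links
};

// Example: build the anchor markup for a paid placement.
function anchorFor(href: string, text: string, kind: LinkKind): string {
  const rel = REL_FOR[kind];
  return `<a href="${href}"${rel ? ` rel="${rel}"` : ""}>${text}</a>`;
}

console.log(anchorFor("https://example.com", "partner", "sponsored"));
// -> <a href="https://example.com" rel="sponsored">partner</a>
```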

Expired strategy 4: keyword density and frequency

Previous situation

Relevance has always been the primary ranking factor. In the early days, when semantic understanding was weak, relevance was judged through text matching: keyword density, frequency, and position. As a result, roughly a third of early SEO research went into keyword density and placement, for example whether 6% or 8% density was better, or whether the header and footer should be counted when calculating density (the other two thirds went into external links and the keyword work mentioned above).

The current situation

Google's ability to understand the semantics of text has been greatly enhanced, and it has long since moved beyond judging relevance purely by keyword text matching. For example: for a search query that was a typo of "left eye keeps twitching", Google corrected the typo, understood that the query was about eyelid twitching, eyelid tremor, and left-eye twitching, and returned the corresponding results. Under pure text-matching rules, it would instead have returned only pages that literally contained the misspelled query (and in the early days many people really did make such typos).

In other words, Google can already recognize the semantics of both web pages and queries, and return results based on that semantic understanding.

Current recommendations

Give up the keyword density and frequency strategy and instead judge web content quality semantically. What counts as high-quality web content? The core question is whether it meets users' needs, both primary and secondary. For example, for a page about "my left eye keeps twitching", high-quality content should contain these parts:

  • Whether a constantly twitching left eye has anything to do with the saying "left eye twitching brings fortune, right eye twitching brings disaster"
  • Whether a constantly twitching left eye is caused by a health problem
  • Several possible health-related causes of a constantly twitching left eye
  • How to stop and prevent the twitching
  • Additional content: recommended eye drops, reputable eye hospitals, and good eye-care habits

Expired strategy 5: PC pages are the core of SEO

Previous situation

In the early days there were only PC (desktop) web pages, so Google's overall strategy was desktop-based, and many operations colleagues also ignored mobile pages when building campaign pages.

The current situation

Mobile traffic already exceeds 50%, and Google will fully roll out Mobile-First Indexing (MFI) in 2021, using the mobile version of a page to determine the ranking of its desktop counterpart. A site without mobile pages therefore misses at least 50% of its potential traffic, and after MFI is fully rolled out it may lose 70%.

Current recommendations

  • If the website has not yet completed the switch to mobile-first indexing, adapt to it as soon as possible.
  • Day-to-day channel building should be based on mobile pages, ensuring that mobile pages have rich content and complete internal-link modules (a rough parity-check sketch follows).
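
As a rough, assumption-laden sketch of checking whether the mobile version of a page carries as much content as the desktop version before the MFI switch: the URLs, user agents, and 80% threshold below are illustrative, and a real audit would compare headings, structured data, and internal-link modules with a proper HTML parser.

```ts
// Sketch: crude desktop-vs-mobile content parity check (requires Node 18+ for global fetch).
const DESKTOP_URL = "https://example.com/page";
const MOBILE_URL = "https://m.example.com/page"; // or the same URL for responsive sites

async function textLength(url: string, userAgent: string): Promise<number> {
  const res = await fetch(url, { headers: { "User-Agent": userAgent } });
  const html = await res.text();
  // Crude text extraction: drop scripts, strip tags, collapse whitespace.
  return html
    .replace(/<script[\s\S]*?<\/script>/gi, "")
    .replace(/<[^>]+>/g, " ")
    .replace(/\s+/g, " ")
    .trim().length;
}

async function checkParity(): Promise<void> {
  const desktop = await textLength(DESKTOP_URL, "Mozilla/5.0 (Windows NT 10.0)");
  const mobile = await textLength(MOBILE_URL, "Mozilla/5.0 (iPhone; CPU iPhone OS 14_0 like Mac OS X)");
  // Flag mobile pages that carry noticeably less content than their desktop counterparts.
  if (mobile < desktop * 0.8) {
    console.warn(`Mobile page is thinner than desktop: ${mobile} vs ${desktop} characters`);
  } else {
    console.log("Mobile and desktop content look roughly comparable.");
  }
}

checkParity();
```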

Finally

Google's technology and algorithms keep evolving, and the recommendations and currently effective strategies above will certainly expire one day as well. What we can do is embrace change and build the habit of continuous learning; I hope we can all encourage each other.
