
Are You Still Using These Outdated SEO Strategies? An Inventory of Google SEO Strategies That Expired by 2021

 

Recently, I reviewed the PRD documents of some SEO peers and found that keyword density and nofollow tags still appear in them, often at a relatively high priority. A quick search online likewise turns up outdated, ineffective Google SEO strategies everywhere.

Search technology has been evolving ever since Google launched: from the earliest PageRank algorithm to today's RankBrain, TensorFlow-based models, and strategies such as Mobile-First Indexing and E-A-T. Google SEO has developed alongside it, from the earliest keyword-density tactics to today's Google Search Console, mobile strategies, and voice and video search.

In practice, many SEO strategies have stopped working but are still being applied. So, in 2021, it is time to sort out some of the Google SEO strategies that have expired; additions are welcome.

Outdated strategy 1: Googlebot cannot crawl JS content

Previous situation

Early search-engine crawlers could not parse JavaScript and could only read content delivered in HTML. Content rendered by JS therefore could not be crawled by Googlebot, so important content had to be placed directly in the HTML.

The current situation

Googlebot can now parse simple JavaScript, having added a rendering step (JS parsing) between crawling and indexing. This matches the mobile-first direction of the web and addresses the widespread use of JS in mobile pages.

Googlebot's current crawling pipeline is roughly: Crawl → Render (parse JS) → Index.

But it should be noted that:

  1. Googlebot can only parse simple JS; content that requires user interaction to display, or more complex JS, still cannot be crawled.
  2. If JS content requires additional network requests to fetch, it also wastes crawl budget.

Note: Google document "Understanding JavaScript SEO Basics" https://developers.google.com/search/docs/guides/javascript-seo-basics?hl=en_cn

Current recommendations

If SEO is your only concern, output content synchronously via server-side rendering (SSR). If you need to balance performance and SEO, adopt an architecture in which crawlers receive SSR output while users receive client-side rendering (CSR), a setup commonly called dynamic rendering.
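The crawler/user split can be sketched as a simple user-agent check. This is an illustrative sketch only: the bot list is incomplete and the render functions are hypothetical placeholders, not a production pre-renderer.

```python
import re

# Match a few well-known crawler user agents (illustrative, not exhaustive).
BOT_PATTERN = re.compile(r"googlebot|bingbot|baiduspider", re.IGNORECASE)

def render_ssr(path: str) -> str:
    # Placeholder: in practice this would call a pre-rendering service.
    return f"<html><body><h1>Pre-rendered content for {path}</h1></body></html>"

def render_csr(path: str) -> str:
    # Placeholder: the usual JS app shell, hydrated in the browser.
    return '<html><body><div id="app"></div><script src="/app.js"></script></body></html>'

def handle_request(path: str, user_agent: str) -> str:
    """Dynamic rendering: crawlers get SSR output, users get the CSR shell."""
    if BOT_PATTERN.search(user_agent):
        return render_ssr(path)
    return render_csr(path)
```

Note that Google asks dynamic rendering to serve crawlers content equivalent to what users see; it is a workaround, not cloaking.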

Outdated strategy 2: URLs must be static

Previous situation

Early URLs were expected to be static, mainly for two reasons:

  1. Static page technology appeared earlier than dynamic technology.
  2. Dynamic URLs often carry many parameters, such as invalid or unordered parameters, which are bad for SEO.

The current situation

Technically we are now in the era of dynamically fetched data, and Google has treated dynamic and static URLs equally since 2008 (there is an official document on this). For example, WordPress, the world's most widely used website-building platform, uses xxx.com/?p=[id] as its default URL form.

Note:

  • Google, "Dynamic URLs vs. static URLs": https://webmaster-cn.googleblog.com/2008/10/blog-post.html

Current recommendations

The core principles of URL:

  1. Keep URLs simple to read and easy to click.
  2. Keep each URL unique to avoid diluting link equity.
  3. Avoid problematic URL parameters, such as tracking parameters, sorting parameters, and session-ID parameters.

You can refer to Google's own URLs, for example https://developers.google.com/search/docs/advanced/guidelines/links-crawlable?hl=zh_cn: the URL is simple and readable, the path segment links-crawlable reflects the page's content, and the trailing hl=zh_cn parameter indicates the Chinese version of the document.

Note: Google "Keep it simple URL structure" https://developers.google.com/search/docs/advanced/guidelines/url-structure
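Stripping tracking, sorting, and session parameters is a common way to keep each page at one unique URL. A minimal sketch using the standard library; the parameter list here is an example assumption, not a complete set:

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

# Example parameter names to strip (tracking, session, sorting); adjust per site.
STRIP_PARAMS = {"utm_source", "utm_medium", "utm_campaign", "sessionid", "sort"}

def canonicalize(url: str) -> str:
    """Return the URL with known tracking/sorting/session parameters removed."""
    parts = urlsplit(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query, keep_blank_values=True)
            if k.lower() not in STRIP_PARAMS]
    # Drop the fragment as well; it is never sent to the server.
    return urlunsplit((parts.scheme, parts.netloc, parts.path, urlencode(kept), ""))
```

Pairing this with a rel="canonical" tag on the page covers the cases where parameterized URLs still get linked externally.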

Outdated strategy 3: nofollow to avoid diluting link equity

Previous situation

Nofollow was originally created to deal with untrusted links on a site, such as spam links in blog comment sections; from the very beginning it was not a mechanism for controlling link equity ("weight"). Zac and Guoping discussed this detail at the time; the following is a screenshot:

Note: Zac, "Does nofollow waste PR and weight?": https://www.seozac.com/google/nofollow-debate/

The current situation

Google has since introduced more link attributes for finer-grained scenarios: rel="sponsored", rel="ugc", and rel="nofollow". rel="nofollow" therefore has no role at all in controlling how link equity is passed.

Note: Google "Explain to Google the intention of your outbound links" https://developers.google.com/search/docs/advanced/guidelines/qualify-outbound-links?hl=zh_cn

Current recommendations

Choose the appropriate link attribute for each scenario; more importantly, do not expect rel="nofollow" to control how link equity is passed.
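The scenario-to-attribute mapping can be made explicit in templating code. A minimal sketch; the link-type names are assumptions of this example, and trusted editorial links get no rel attribute at all:

```python
# Map link scenarios to the rel values Google defines for them.
REL_BY_LINK_TYPE = {
    "paid": "sponsored",        # ads, sponsorships, paid placements
    "user_generated": "ugc",    # comments, forum posts
    "untrusted": "nofollow",    # links you don't want to vouch for
}

def rel_attribute(link_type: str) -> str:
    """Return the rel value for a link type; trusted links get no rel at all."""
    return REL_BY_LINK_TYPE.get(link_type, "")

def render_link(href: str, text: str, link_type: str = "trusted") -> str:
    """Render an anchor tag with the appropriate rel attribute, if any."""
    rel = rel_attribute(link_type)
    rel_part = f' rel="{rel}"' if rel else ""
    return f'<a href="{href}"{rel_part}>{text}</a>'
```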

Outdated strategy 4: keyword density / keyword frequency

Previous situation

Relevance has always been the primary ranking factor, and in the early days, when semantic understanding was weak, relevance was judged by text matching: keyword density, frequency, and position. As a result, roughly a third of early SEO research went into keyword density and placement, e.g. whether 6% or 8% density performs better, or whether the density calculation should include the header and footer (the other two thirds went into external links and keyword research).
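For reference, the obsolete metric itself is trivial to compute. A minimal sketch (single-word keywords only, whitespace tokenization as a simplifying assumption):

```python
import re

def keyword_density(text: str, keyword: str) -> float:
    """Old-school metric: keyword occurrences divided by total word count."""
    words = re.findall(r"[a-z0-9']+", text.lower())
    if not words:
        return 0.0
    hits = sum(1 for w in words if w == keyword.lower())
    return hits / len(words)
```

The ease of gaming exactly this number is why Google moved away from it.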

The current situation

Google's ability to understand text semantics has improved enormously, and it long ago stopped judging relevance purely by keyword text matching. For example: a user searched the (mistyped) Chinese query "left eye straight bar". Google corrected the typo, understood that the query was about eyelid twitching, eyelid tremor, and left-eye twitching, and returned the corresponding results. Under pure text-matching rules, it would instead have returned pages literally containing "left eye straight bar" (and in the early days many people did make this typo).

Therefore, Google can already recognize the semantics of web pages and Query, and return results based on the semantics.

Current recommendations

Give up keyword density and frequency, and judge webpage quality semantically instead. What counts as high-quality content? The core question is whether it meets users' needs, both primary and secondary. For example, for a page about "left eye keeps twitching", high-quality content should cover these parts:

  • Whether a constantly twitching left eye relates to the saying "left eye twitches for fortune, right eye twitches for disaster"
  • Whether a constantly twitching left eye has health-related causes
  • Several possible health reasons a left eye keeps twitching
  • How to resolve and prevent left-eye twitching
  • Additional content: effective eye drops, reputable eye hospitals, recommended eye-care habits

Outdated strategy 5: PC pages are the core of SEO

Previous situation

In the early days there were only PC web pages, so Google's overall strategy was PC-based, and many operations staff likewise ignored mobile pages when building campaign pages.

The current situation

Mobile traffic already exceeds 50%, and Google will fully roll out Mobile-First Indexing (MFI) in 2021, using the mobile page to determine the ranking of the PC page as well. A site without mobile pages therefore misses at least 50% of traffic, and perhaps 70% once MFI is fully live.

Current recommendations

  • If your site has not completed the MFI switch, adapt to Mobile-First Indexing as soon as possible.
  • Day-to-day channel building should be based on mobile pages, ensuring mobile pages have rich content and complete internal-link modules.
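One quick self-check for mobile readiness is whether pages declare a responsive viewport meta tag. This probe is an assumption of this sketch, not Google's actual MFI check, which evaluates much more (content parity, structured data, images):

```python
from html.parser import HTMLParser

class ViewportChecker(HTMLParser):
    """Scan an HTML document for a <meta name="viewport"> declaration."""

    def __init__(self):
        super().__init__()
        self.has_viewport = False

    def handle_starttag(self, tag, attrs):
        attr_map = dict(attrs)
        if tag == "meta" and attr_map.get("name") == "viewport":
            self.has_viewport = True

def has_viewport_meta(html: str) -> bool:
    """Return True if the page declares a viewport meta tag."""
    checker = ViewportChecker()
    checker.feed(html)
    return checker.has_viewport
```

Running such a check across a site's page templates is a cheap first pass before a fuller mobile audit.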

Finally

Google's technology and algorithms keep evolving, and the recommendations above, effective today, will eventually expire too. What we can do is embrace change, build a habit of continuous learning, and encourage one another.
