Content Pruning Guide for Content Managers and SEOs

If you’re looking for ways to improve the SEO of your website, content pruning is an effective method to consider. 

Content pruning is all about removing unnecessary content from your website, which can help it rank higher in search engine results and can also improve user experience.

In this article, you will learn how to do content pruning for SEO, from identifying what needs to be pruned to taking steps to ensure that it’s done correctly. Keep reading to find out more.

What is Content Pruning?

Content pruning involves a thorough review of all content on a website to identify any content that could be seen as irrelevant or low quality from the perspective of search engine algorithms. Once identified, those low-quality pages are either removed from the website or replaced with better content.

A part of regular website maintenance is making sure that all of the content on the website is up-to-date and relevant to the website’s goals. This can help ensure that a website has content that is useful to its visitors and provides value to users who arrive from search engines. 

By removing low-quality or outdated content, websites can see improved search visibility for their highest-value, highest-converting web pages.

Why Remove Content From My Website?

Content production takes time and resources. So you may be wondering: Why remove content from my website after all the work it took to create it?

Although it may feel counterintuitive to your content strategy, content pruning can actually have major benefits to your search engine performance. This is even more true for websites that have a robust SEO content strategy and are publishing new web pages on a regular basis.

Some of those benefits include the following:

  • Improve a website’s visibility by allowing search engines to index your best and highest-converting web pages
  • Ensure visitors are presented with the most up-to-date information
  • Provide higher-quality content and a better user experience
  • Prevent visitors from seeing any low-quality pages
  • Ensure your crawl budget is spent on rank-worthy content

Any content that sits on your website without pulling its weight in either traffic or conversions isn't actually bringing value to your business.

By taking the time to regularly prune their content, content managers can ensure that their website is performing at its best. That’s why sometimes content pruning is the right choice for your content strategy.

What Makes a Web Page Prunable?

Here are some of the qualities to look for when searching for content on your website that may need to be pruned.

Low-Quality

Bad content can have a negative impact on a website's rankings. Low-quality content includes pages with duplicate or thin content, as well as pages that are not user-friendly or are low on useful information. Content pruning can help to identify and remove such pages, allowing search engines to easily index the more relevant, higher-quality content on the website.

Duplicate Content

Duplicate content refers to web pages with the same or very similar content that search engine crawlers do not identify as distinct. Google does not want to see duplicate content on a website unless it has been clearly identified as such via a canonical tag.
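
If you need a reference, a canonical tag is a single line placed in the <head> of the duplicate page; the URL below is a placeholder for your preferred version of the page:

<!-- In the <head> of the duplicate page, pointing to the preferred (canonical) version -->
<link rel="canonical" href="https://www.website.com/preferred-page/" />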

Thin Content

Thin content is often short-form content that doesn’t provide any real value to the user. Although there is no exact content length that Google privileges in search engine results, experiments have shown that longer, in-depth content tends to rank higher.

Most often, thin content can be combined with other pages on your website to provide a more comprehensive, in-depth answer to a topic, question, or keyword query. Combining content, or redirecting thin pages to more in-depth ones, is also a part of the content pruning process.

Outdated Content

The reality is, the content on your website will become outdated over time. This is why creating evergreen content is important; however, it's unlikely that your long-form content will last forever without needing updates. Trends, technologies, and knowledge change, and web pages should include the most up-to-date, useful information for search engines.

Also, outdated information can be confusing for visitors and lead to a poor user experience. Removing outdated content can ensure that visitors are presented with the most relevant and useful information.

Under-performing Content

If you have a web page on your website that does not get traffic or conversions, what value is it bringing your business? If the web page does not rank in search results, convert users, or play a vital part in the buyer journey, it doesn't really have a place on your website unless you take the time to improve its content.

How to Find Pages for Content Pruning

You can use the Page Pruning tool in the SearchAtlas dashboard to discover pages that may be eligible for content pruning.

To find the tool, navigate to Site Auditor > Page Pruning. 

This tool will show you any pages that may be eligible for pruning based on a variety of signals:

  • Low organic impressions
  • Low clicks
  • Indexability
  • Total ranking keywords
  • Average position
  • Content quality/scores

Remember, just because a page appears on this list doesn’t mean that it has to be pruned/deleted, but that it may be eligible based on its performance metrics.

Next Steps for Page Pruning

Once you have reviewed the software's suggestions and confirmed that the pages are eligible for pruning, here are the options available to you.

1. Improve the Content on the Page

The page may be underperforming because its content is thin or does not provide a comprehensive answer to the user's questions.

You can look to the "Boostable" tab in the Page Pruning tool to identify those pages that just might need a slight content score boost.

The URLs that are listed here are already earning organic impressions but are not seeing as much success in organic traffic. Most likely, Google sees those pages as relevant but is not ranking them on the first page as a result of the content.  

You can use the SEO Content Assistant in your SearchAtlas dashboard to discover ways to strengthen and improve your content. Or, use the on-page audit tool to see what on-page elements may be impacting your performance.

Follow the guided suggestions for focus terms, headings, questions, and internal links. Include them on the page to make the content more rank-worthy.

2. Update the Content to be More Evergreen

If your content covers trends or keywords that have seasonal search volume, that may explain its underperformance.

Consider updating the content with more evergreen information so the content has a longer shelf life in the SERPs.

Also, make sure that the information on the page is up-to-date with accurate, relevant information. Over time, links may break or content may become outdated. Updating your blogs and articles every 1-2 years should be a part of your regular website maintenance.

3. Build Backlinks to the Page

If both the SEO Content Assistant and the on-page audit tool confirm that your content has high scores and is rank-worthy, you may just need a bit more of a link boost.

Backlinks are Google’s number one ranking factor. If you don’t have very many backlinks pointing to your web page, that may be a reason why it is not ranking on the first page.

You can use the backlink research tool in your dashboard to see which of your web pages have the least amount of link equity.

Consider investing in a link-building campaign in order to improve the off-site signals of the content. Doing so is likely to improve the overall keyword rankings, impressions, and organic clicks.

4. Reoptimize the Page for a Different Keyword

Another possible explanation for your content’s poor performance may be keyword related. 

Some keywords are more competitive than others. If you optimized the page for a keyword that is out of reach, reoptimization may be your next step.

When choosing keywords for SEO, you want to make sure your website has a realistic chance of ranking for the target keyword. Websites with higher Domain Authority will stand a better chance of ranking for competitive keywords.

At LinkGraph, we suggest keywords that are less than or equal to your website’s Domain Authority.

So once you find a more realistic goal, optimize for that keyword instead. This will likely involve changing metadata, website copy, and headings on the page. But it can make a huge difference in improving organic performance.

5. Redirect the Page to a Higher-Quality One

A page may be flagged for pruning because Google is ranking a more helpful piece of content on your website.

This is known as keyword cannibalization. It happens when two pieces of content are very similar and Google doesn't know which to promote. If there is a page that is similar in relevance but ranks less often, you can do your "content pruning" by adding a 301 redirect from the less comprehensive page to the better-performing one.
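
How you add the redirect depends on your server or CMS. As one illustration, on an Apache server a single-page 301 can be added to the .htaccess file; both paths below are placeholders:

# .htaccess (Apache): permanently redirect the less comprehensive page to the stronger one
Redirect 301 /old-thin-page/ https://www.website.com/better-performing-page/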

6. Combine Thin Content into a More Comprehensive Resource

If you have a series of pages that are thin on content but relate to a similar cluster of keywords, consider combining those pages into a more useful, long-form resource.

Why? Because Google likes to promote content that satisfies users’ search intent. That means not only answering their initial questions but all of the additional questions that might follow regarding that primary topic. 

So before giving up on that set of keywords entirely, combine those various pages into one page. Then, see if the overall keyword rankings improve.

7. Consider Removing the Page Entirely

This is the last step to consider, once you have concluded that none of the above steps will elevate the content's performance.

The reality is, if a piece of content is not driving website traffic, converting users, or serving as an essential part of the buying journey, it doesn't really deserve space on your website.

Take the ultimate step and trim that content branch off your tree.

Conclusion

Making content pruning a regular part of your website maintenance is a good habit to get into. This is especially true for websites that publish a lot of content and have a robust SEO strategy.

You can also use the same SearchAtlas tools to scale up your content marketing and blog strategy. Connect with one of our product managers to learn more about our enterprise SEO software platform.

What is a robots.txt file and Common Issues With Implementation

Although robots.txt files are a more technical part of SEO, they are a great way to improve the crawling and indexing of your website.

This article will break down all of the details related to robots.txt and highlight common issues related to its implementation. 

If you run an SEO audit in the SearchAtlas Site Auditor, you may see issues flagged related to your robots.txt file. You can use this article to troubleshoot and resolve those issues.

What are robots.txt Files?

The robots.txt file tells web crawlers which areas of your website they are allowed to access and which areas they are not allowed to access. It contains a list of user-agent strings (the name of the bot), the robots directive, and the paths (URLs) to which the robot is denied access.

When you create a website, you may want to restrict search engine crawlers' access to certain areas or pages and prevent specific pages from being indexed. Your robots.txt file is where web crawlers learn where they do and do not have access.

Robots.txt is a good way to protect your site’s privacy or to prevent search engines from indexing content that is not rank-worthy or ready for public consumption.

Also, if you don't want your website to be accessed by other common web crawlers like Applebot or AhrefsBot, you can prevent them from crawling your pages via your robots.txt file.
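
For example, a rule like the following (using AhrefsBot's published user-agent name) tells that one crawler to stay away from the entire site:

# Block a single crawler from the whole site
User-agent: AhrefsBot
Disallow: /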

Where is my robots.txt File Located?

The robots.txt file is a plain text file that is placed in the root directory of a website. If you don't yet have a robots.txt file, you will need to create one and upload it to your site.

If the web crawler cannot find the robots.txt file at the root directory of your website, it will assume there is no file and proceed with crawling all of your web pages that are accessible via links. 

How you upload the file will depend on your website and server architecture. You may need to get in contact with your hosting provider to do so.

If you want our team at LinkGraph to configure or upload robots.txt for your website, order our Technical SEO Fundamentals package in the order builder. 

Why Should I Care About robots.txt?

The robots.txt file is considered a fundamental part of technical SEO best practice.

Why? Because search engines discover and understand our websites entirely through their crawlers. The robots.txt file is the best way to communicate to those crawlers directly.

Some of the primary benefits of robots.txt are the following: 

  • Improves crawling efficiency
  • Prevents less valuable pages from getting indexed (e.g., thank-you pages, confirmation pages, etc.)
  • Helps prevent duplicate content issues and any resulting penalties
  • Keeps low-value content out of search results

How Does robots.txt Work?

When a search engine robot encounters a robots.txt file, it will read the file and obey the instructions. 

For example, if Googlebot comes across the following in a robots.txt:

User-agent: googlebot
Disallow: /confirmation-page/

Googlebot won't be able to access that page to crawl and index it. It also won't be able to access any of the other pages in that subdirectory, including:

  • /confirmation-page/meeting/
  • /confirmation-page/order/
  • /confirmation-page/demo/

If a URL is not specified in the robots.txt file, then the robot is free to crawl the page as it normally would.

Best Practices for robots.txt

Here are the most important things to keep in mind when implementing robots.txt (a short sample file follows this list):

  1. robots.txt is a text file, so it must be encoded in UTF-8 format
  2. robots.txt is case sensitive, and the file must be named “robots.txt”
  3. The robots.txt file must be placed at the root directory of your website
  4. It’s best practice to only have one robots.txt available on your (sub)domain
  5. You can only have one group of directives per user agent
  6. Be as specific as possible to avoid accidentally blocking access to entire areas of your website, for example, by blocking an entire subdirectory when you only meant to block a specific page within it
  7. Don’t use the noindex directive in your robots.txt
  8. robots.txt is publicly available, so make sure your file doesn’t reveal to curious or malicious users the parts of your website that are confidential
  9. robots.txt is not a substitute for properly configuring robots tags on each individual web page
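
Putting several of these practices together, a minimal robots.txt might look like the following; the Disallow paths are placeholders for sections of your own site you don't want crawled:

# One group of directives per user agent; keep Disallow paths as specific as possible
User-agent: *
Disallow: /confirmation-page/
Disallow: /internal-search/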

Common Issues Related to robots.txt

When it comes to website crawlers, there are some common issues that arise when a site’s robots.txt file is not configured properly.

Here are some of the most common ones that occur and will be flagged by your SearchAtlas site audit report if they are present on your website.

1. robots.txt not present

This issue will be flagged if you do not have a robots.txt file or if it is not located in the correct place.

To resolve this issue, you will simply need to create a robots.txt and then add it to the root directory of your website.

2. robots.txt is present on a non-canonical domain variant

To follow robots.txt best practice, you should only have one robots.txt file for the (sub)domain where the file is hosted.

If you have a robots.txt located on a (sub)domain that is not the canonical variant, it will be flagged in the site auditor.

Non-canonical domain variants are versions of your domain that are treated as duplicates, or copies, of the canonical (master) version. If your canonical tags are properly formatted, only the master version will be treated as the canonical domain, and that is where your robots.txt file should be located.

For example, let’s say your canonical variant is

  • https://www.website.com/

Your robots file should be located at:

  • https://www.website.com/robots.txt

In contrast, it should not be located at:

  • https://website.com/robots.txt
  • http://website.com/robots.txt

To resolve this issue, you will want to update the location of your robots.txt, or 301 redirect the non-canonical variants of the robots.txt URL to the canonical version.

3. Invalid directives or syntax included in robots.txt

Including invalid robots directives or syntax can cause crawlers to still access the pages you don’t want them to access. 

If the site auditor identifies invalid directives in your robots.txt, it will show you a list of the specific directives that contain the errors.

Resolving this issue involves editing your robots.txt to include the proper directives and the proper formatting.

4. robots.txt should reference an accessible sitemap

It is considered best practice to reference your XML sitemap at the bottom of your robots.txt file. This helps search engine bots easily locate your sitemap. 

If your XML sitemap is not referenced in your robots file, it will be flagged in the Site Auditor.

To resolve the issue, add a reference to your sitemap at the bottom of your txt file. 
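
The reference is a single line; using the example domain from earlier in this article, it would look like this:

Sitemap: https://www.website.com/sitemap.xml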

5. robots.txt should not include a crawl directive

The crawl-delay directive instructs some search engines to slow down their crawling, which causes new content and content updates to be picked up later. 

This is undesirable, as you want search engines to pick up on changes to your website as quickly as possible.

For this reason, the SearchAtlas site auditor will flag a robots.txt file that includes a crawl directive.
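
If your file contains a line like the one below, removing it will resolve the flag:

# Asks crawlers to wait 10 seconds between requests; Google ignores it, but some other engines obey it
Crawl-delay: 10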

Conclusion

A properly configured robots.txt file can be very impactful for your SEO. However, the opposite is also true: configure robots.txt incorrectly, and you can create huge problems for your SEO performance.

So if you’re unsure, it may be best to work with professionals to properly configure your robots.txt. Connect with one of our SEO professionals to learn more about our technical SEO services.

How to Whitelist the SearchAtlas Site Crawler on Cloudflare

If you want to run a site audit and monitor your site health using SearchAtlas, you may find that our site crawlers get blocked by CDNs.

Content Delivery Networks (CDNs) are groups of servers that help deliver website content faster. If your website relies on a CDN to improve performance, it's possible the CDN's firewall will block our crawler from accessing your site.

Resolving this issue is fairly straightforward and requires whitelisting the SearchAtlas site crawler. This post will explain how to do so on Cloudflare, as it is the most popular CDN provider for webmasters.

If you are using another CDN, the process is similar to the steps outlined below.

Getting Started

SearchAtlas uses one primary IP address to monitor websites:

  • 35.243.150.225

You will need to whitelist this IP address in order to run successful SEO audits via SearchAtlas.

How to Whitelist SearchAtlas’s IP Address in Cloudflare

Follow these steps to proceed with whitelisting:

  1. Login to your Cloudflare account
  2. Click the Firewall icon and then the Tools tab
  3. List SearchAtlas’s crawl IP address under the IP Access Rules
  4. Enter the IP Address: 35.243.150.225
  5. Choose the action “Whitelist”
  6. Choose the domain that you want our crawler to have access to
  7. Click “Add”

After performing the above steps, SearchAtlas will have access to your website for monitoring.
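
If you manage Cloudflare rules programmatically, the same allow rule can typically be created through Cloudflare's IP Access Rules API. The request below is a sketch based on our reading of that API; the zone ID and API token are placeholders, and you should confirm the endpoint and field names against Cloudflare's current documentation before relying on it:

# Create an IP Access Rule that whitelists the SearchAtlas crawler IP for one zone
curl -X POST "https://api.cloudflare.com/client/v4/zones/YOUR_ZONE_ID/firewall/access_rules/rules" \
  -H "Authorization: Bearer YOUR_API_TOKEN" \
  -H "Content-Type: application/json" \
  --data '{"mode":"whitelist","configuration":{"target":"ip","value":"35.243.150.225"},"notes":"Allow SearchAtlas site crawler"}'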

For help and a better understanding of Cloudflare’s IP Access Rules, see Cloudflare’s support documentation.

Need help Whitelisting on Cloudflare?

If you are still unable to successfully run a site audit, please contact our support team between 9am-6pm (PST), M-F, via the chat feature in your dashboard. Or, book a software demo with someone from our team.

The Most Important HTTP Status Codes for SEO

Whenever a user types a web page URL into their browser and hits enter, they send a request to the web server to access that specific website. The web server responds with the requested page (plus any additional resources, like images or scripts, that the page needs), and the browser displays the page. The server also returns an HTTP status code with every response.

Most of the time, these HTTP status codes are not shown because the request was successful. However, when the server cannot access the requested resource, it will provide an explanation of why it wasn’t successful via a specific response status code.

This list of HTTP status codes will define the most common types of response codes you might see and those that may be impacting your SEO performance.

What are HTTP Status Codes?

The HTTP status code is a three-digit number that tells the browser what happened when it tried to connect to the server. HTTP status codes communicate to the web browser and its user whether or not their request was successful.

HTTP status codes are a big part of SEO because successful requests to the origin server make for a better experience for search engine crawlers and for website visitors.

In contrast, response status codes that indicate errors or a missing target resource can signal to users, and Google, that the website owner is not doing the necessary maintenance of their website.

Types of Status Codes

color coded list of the different types of http status codes

There are five different series of status codes. All status codes are three digits. The beginning digit highlights the type of status code returned by the server.

  • 1xx: Provides Information
  • 2xx: Indicates Success
  • 3xx: Redirected page, meaning the page has been moved to another URL
  • 4xx: Client error, meaning something is wrong with the requested web page
  • 5xx: Server error, meaning something occurred with the server’s connection

Most Common HTTP Status Codes

There are 60+ possible status codes, but some are more common than others. Some are also especially important when thinking about search engine crawlers and what happens when they follow links to the various URLs on our websites.
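
A quick way to see the status code a given URL returns is to request only its headers from the command line, for example with curl; the URL below is a placeholder:

# -I fetches headers only; the first line of output shows the status code, e.g. "HTTP/1.1 200 OK"
curl -I https://www.website.com/some-page/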

200: Success

Status codes in the 200 series are what you are aiming for. They communicate that the request was successful and that the server returned the requested resource. 2xx codes indicate that the server is working properly and that the visitor's browser (or a search engine crawler) and the website are connecting properly.

Whenever a 200 status code is not found, the SearchAtlas site auditor will flag it in your report with the following message:

  • Status code not 200

301: Permanent Redirect

Arguably one of the most important status codes for SEO purposes, a 301 redirect communicates that a web page has been permanently moved to a new location or a new URL. When a user enters the old URL in their browser, or clicks on a link with the old URL, they will be redirected to the new URL of the page.

301 Redirects, when used properly, can help improve your SEO. They ensure that you do not lose link equity when moving or updating content on your website. For this reason, the SearchAtlas Site Auditor does flag issues related to 301 redirects when crawling and analyzing your site.

Some issues related to 301s that you might see highlighted in your issues report include:

  • 301 Does Not Redirect to HTTPS: 301 redirects should take users to the HTTPS version of a web page, as it provides a safer browsing experience for users (see the example server configuration after this list).
  • Redirect URL not lowercase: Redirect URLs should be lowercase so search engine crawlers do not mistake the new page for a duplicate version of the page.
  • Internal links with 301 redirects: Google looks down upon internal links that pass through 301 redirects. It prefers that webmasters update their links with the new URLs of the relocated pages.
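
As an illustration of the first point, the HTTP-to-HTTPS 301 is usually handled at the server level. On nginx, a catch-all server block like the sketch below (the domain is a placeholder) sends every plain-HTTP request to its HTTPS equivalent; Apache and most hosting platforms offer an equivalent setting:

# nginx: permanently redirect all HTTP traffic to the HTTPS version of the same URL
server {
    listen 80;
    server_name www.website.com;
    return 301 https://www.website.com$request_uri;
}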

404: Not Found

Status codes in the 400 series are generally used when the client has made a request that the server can’t fulfill. 

screenshot of a 404 status code error

For example, the 400 status code is used when the client sends a malformed or invalid request. The 401 status code is used when the client doesn't have the appropriate authentication credentials. The 408 status code is used when the client takes longer to send its request than the server is willing to wait.

404s are not only bad for the user experience of your website; they are also particularly bad for your SEO performance. If search engine crawlers are repeatedly sent to unavailable or dead pages, Google is less likely to see your website as providing valuable content or a high-quality page experience to users.

For this reason, the following 404 status code errors will be flagged in your site auditor report:

  • Url gives soft 404

What Causes a 404 Response Code?

Here are some of the potential reasons why a url might be 404ing and how to resolve the issue:

  • Deleted/Moved page: The page's content may have been deleted or moved, causing a broken link. To fix this, add a 301 redirect to send users and search engine crawlers to the new version of the page.
  • Incorrect URL: The URL was typed incorrectly into the browser's address bar, or the wrong URL was added to a link. Double-check that your links are using the right URLs.
  • Caching problems: The browser may cache the 404 error page instead of the actual content. Therefore, you keep seeing the error even when the website works for everyone else.
  • Missing Asset: If there is a missing asset, such as an image, CSS, or JavaScript file, it can generate a 404 error. The missing asset needs to be updated or replaced.

500: Internal Server Error

Status codes in the 500 series are general error messages. They are used when the server encountered an error while processing the request. These errors can often feel a bit like a mystery.

For example, the 500 status code is a generic error returned when the server encounters an unexpected condition it can't recover from. The 501 status code is used when the server doesn't support the functionality required to fulfill the request. The 502 status code is used when the server, acting as a gateway, receives an invalid response from an upstream server, and the 503 status code is used when the server is overloaded or temporarily unavailable.

If your web page is returning a 500 status code error, try the following fixes:

  • Refresh your browser: This is the best place to start. A second request to the server may produce a successful HTTP status code.
  • Delete web browser cookies: Doing this may help reload the web page. 
  • Deactivate a plugin: Especially if the 500 HTTP status code appeared shortly after installing a plugin. It is possible that the plugin conflicts with other software, or that a software update has made the system incompatible.
  • Come back later: It’s possible that future requests at a later time will be successful.

How Do You Know What the HTTP Status Code Is?

It's important to investigate web page URLs on your website that are producing an invalid response. Why? Because they can prevent users from arriving at the requested resource.

Resolving them can mean better keyword rankings and fewer site visitors bouncing away from your website.

There are two primary ways that you can check the response codes of your web pages.

Use your Google Search Console Account

In your GSC account, navigate to Index > Pages.

You’ll find a display summarizing various errors related to indexing. Messages about 404s or 500 errors will appear in this list.

Click on the error to then analyze the impacted pages more closely.

SearchAtlas Site Auditor Report

The SearchAtlas Site Auditor will check the HTTP response codes of your web pages. It will also flag any issues it identifies in relation to the status code.

After running your site audit, navigate to the Issues tab in the Site Auditor dashboard. 

Click on the “Page URL” category. 

Look for any error messages mentioning HTTP status codes, and then click "See Affected Pages."

You’ll see a complete list of all of the pages on your website that are not returning 200 status codes.

Hand this list to your web developer to resolve the issue, or connect with one of our SEO experts to determine your next steps for resolving the issues.

Conclusion

Now that you understand the most important HTTP status codes for SEO, you can hopefully resolve any errors on your web pages.

But if you are still not quite sure why your web page URLs are returning specific HTTP status codes, you may want to reach out to a technical SEO agency to see if they can help you resolve the issue. Our team at LinkGraph is here to help!

Open Graph Tags Implementation & Best Practices

One element that many websites forget to include on their web pages is Open Graph meta tags. These tags are important for ensuring that your content is appealing and clickable when shared on popular social media sites.

Here is a guide to what open graph tags are, why they matter, and how to implement them across your web pages.

What Are Open Graph Tags?

Open Graph tags are a set of HTML tags that help you control how your content is displayed on social media websites.

When someone shares one of your web pages on a social network like Facebook or LinkedIn, Open Graph tags communicate how to display and format the shared content.

On the frontend, open graph tags make your content appear with the title, image, and display that you prefer, like this:

screenshot of a facebook post that is using open graph meta tags to better display content

But without Open Graph meta tags on your web pages, your content on social media sites may look like this:

a facebook post with missing open graph image tag

Which post would you be more likely to click on?

Are Open Graph Tags Important?

Open Graph is important because it allows you to control how your content appears when it is shared on social media.

They allow platforms to pull in specific details about the website and its contents to create a richer, more engaging social media post. It is very similar to schema.org markup.

Open Graph is a great way to ensure that your content looks its best, displays your branding, and projects your content as valuable whenever it is shared online.

Are Open Graph Tags a Ranking Factor?

Open Graph tags are not a ranking factor that Google relies on when promoting web pages. But they can still indirectly impact your SEO.

How? Because social media is a great avenue for sharing content to grow brand visibility and site authority. If a webmaster sees your content on a social media site and enjoys it, they may choose to link to it on their own web pages, giving you a backlink and valuable link equity.

But if your content doesn’t utilize open graph meta tags, the content looks less appealing whenever shared on social networks or other popular sites, making users less likely to click on it, and webmasters less likely to link to it.

Who Uses OG Tags?

Open Graph tags are used across major social media platforms; some of the most important are:

  • Facebook
  • LinkedIn
  • Google+
  • WhatsApp
  • Slack

Twitter has its own version of Open Graph called Twitter Cards. To learn more, check out our post on Twitter Cards best practices.

Types of Open Graph

There are a number of different tags that you should be implementing on your web page.

If you do not have these open graph tags present on your web pages, they will be flagged in your SearchAtlas Site Audit Report as “Missing”.

og:title

The title of the webpage is important for helping users understand the purpose and focus of your content. In HTML, the og:title tag looks like this:

screenshot of an og:title tag in HTML
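
If you are adding the tag by hand, it is a single meta element placed in the page's <head>; the title below is just an example to swap for your own page title:

<meta property="og:title" content="Content Pruning Guide for Content Managers and SEOs" />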

Best practices for og:titles include the following:

  • Should be present on all shareable web pages on your website
  • Character count should be between 40-60
  • Focused on accuracy, brevity, and clickability

og:description

Similar to a meta description for SEO, the og:description should offer a brief description of the web page’s content to give the user more reason to click.

In HTML, the og:description looks like this:

screenshot of an og:description tag in HTML

Best practices for og:description include:

  • Should be present on all shareable web pages on your website
  • Character count should be between 120-160
  • Accompanies the title to make a clickable, sharable piece of content

og:image

The og:image is arguably one of the most important open graph tags because of the visual nature of social media platforms.

The og:image tag includes the URL of an image that represents the content of the webpage. In HTML, it will look like this:

screenshot of an og:image tag in HTML

Ideally, you should also use the og:image:width and og:image:height tags to let the social media websites know the size of your image.

screenshot of og:image tags
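
Together, the three image tags look like the following; the image URL is a placeholder, and 1200 x 630 pixels is a common size that matches the 1.91:1 ratio mentioned in the best practices below:

<meta property="og:image" content="https://www.website.com/images/featured-image.jpg" />
<meta property="og:image:width" content="1200" />
<meta property="og:image:height" content="630" />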

Best practices for og:image open graph include:

  • Should be present on all shareable web pages on your website 
  • Uses images with a 1.91:1 ratio for the best clarity on desktop and mobile
  • Also includes og:image:width and og:image:height to ensure that the image loads properly when it is shared

og:url

The og:url tag tells social media websites the url of the webpage. It looks like this in HTML:

screenshot of an of:url open graph tag in HTML

Best practices for og:url include the following:

  • Follows SEO-friendly url best practices
  • Points to the canonical URL, especially if there are multiple versions of the page

og:type

The og:type tag helps social media sites understand the type of webpage (e.g., "article", "blog", "website", "video", etc.). In HTML, it looks like this:

screenshot of an og:type tag in HTML

Best practices for og:type include the following:

  • Use “article” for blogs and “website” for all other pages
  • Only define one type per page

og:locale

For international websites with content in multiple languages, the og:locale tag indicates the language of the web page content. In HTML, it looks like this:

screenshot of an og:locale tag with a fr country code

Best practices for og:locale are pretty simple:

  • Use the appropriate country code (Here’s a complete list of HTML country codes)
  • Only use for pages that are not written in English

Common Mistakes with Open Graph

There are a few common issues that webmasters make with open graph tags that will be flagged in your Site Auditor report.

Missing Open Graph Tags

The most common mistake that webmasters make is that they simply do not include Open Graph tags on their web pages.

If open graph is missing from a web page, you will see this issue displayed in your SearchAtlas site audit report.

screenshot of a missing open graph message in the SearchAtlas site auditor

This is an easy fix, and simply involves adding the missing Open Graph tag to the <head> section of your web page's HTML (according to the best practices listed above).

Improper Character Counts

For og:title and og:description, your character counts will be flagged if they do not follow best practices.

screenshot of an open graph character issue in SearchAtlas

To resolve this issue, you will need to either shorten or lengthen the character count of your open graph tags. 

This can be done by editing the HTML of your individual web pages.

How to Implement Open Graph on your Web Pages

There are two primary ways that you can add open graph tags to your web pages.

The first is utilizing a popular plugin. For WordPress websites, the most common one used by webmasters is Yoast SEO.

screenshot of yoast SEO plugin

Other popular CMS platforms like Wix and Shopify have plugins that make adding Open Graph tags simple.

You can also add Open Graph tags manually by editing your web page's HTML code. Open Graph tags should go in the <head> section of your page.

HTML text editor in WordPress

To ensure you have the required fields, you can use a tool like an open graph generator.
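
Pulling the tags from this article together, a complete Open Graph block for a single blog post might look like the following; every value is a placeholder to adapt to your own page:

<!-- Placed inside the <head> section; replace the placeholder values with your own -->
<meta property="og:title" content="Content Pruning Guide for Content Managers and SEOs" />
<meta property="og:description" content="Learn how to identify low-quality, outdated, and under-performing pages and prune them to improve your website's search performance." />
<meta property="og:image" content="https://www.website.com/images/content-pruning-guide.jpg" />
<meta property="og:image:width" content="1200" />
<meta property="og:image:height" content="630" />
<meta property="og:url" content="https://www.website.com/blog/content-pruning/" />
<meta property="og:type" content="article" />
<!-- og:locale is only needed for pages that are not written in English -->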

Conclusion

If you need assistance adding Open Graph tags to your website, our team of web development experts is here to help. Simply request a proposal from our team.

You can also run a site audit yourself after registering for a trial of our SEO software.
