What is a robots.txt file and Common Issues With Implementation

Although it falls on the more technical side of SEO, the robots.txt file is a great way to improve how your website is crawled and indexed.

This article will break down all of the details related to robots.txt and highlight common issues related to its implementation. 

If you run an SEO audit in the SearchAtlas site auditor, you may see issues flagged related to your robots.txt file. You can use this article to troubleshoot and resolve those issues.

What are robots.txt Files?

The robots.txt file tells web crawlers which areas of your website they are allowed to access and which areas they are not allowed to access. It contains a list of user-agent strings (the name of the bot), the robots directive, and the paths (URLs) to which the robot is denied access.

When you create a website, you may want to restrict search engine crawlers from accessing certain areas or pages and prevent specific pages from being indexed. Your robots.txt file is where web crawlers learn which parts of your site they do and do not have access to.

Robots.txt is a good way to protect your site’s privacy or to prevent search engines from indexing content that is not rank-worthy or ready for public consumption.

Also, if you don’t want your website to be accessed by other common web crawlers such as Applebot or AhrefsBot, you can prevent them from crawling your pages via your robots.txt file.
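For example, a minimal robots.txt that blocks AhrefsBot from the entire site while leaving all other crawlers unrestricted could look like this (the directives below are an illustrative sketch, not a recommendation for every site):

User-agent: AhrefsBot
Disallow: /

User-agent: *
Disallow: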

Where is my robots.txt File Located?

The robots.txt file is a plain text file placed in the root directory of a website. If you don’t yet have a robots.txt file, you will need to upload it to your site.

If the web crawler cannot find the robots.txt file at the root directory of your website, it will assume there is no file and proceed with crawling all of your web pages that are accessible via links. 

How you upload the file will depend on your website and server architecture. You may need to get in contact with your hosting provider to do so.

If you want our team at LinkGraph to configure or upload robots.txt for your website, order our Technical SEO Fundamentals package in the order builder. 

Why Should I Care About robots.txt?

The robots.txt file is considered a fundamental part of technical SEO best practice.

Why? Because search engines discover and understand our websites entirely through their crawlers. The robots.txt file is the best way to communicate to those crawlers directly.

Some of the primary benefits of robots.txt are the following: 

  • Improves crawling efficiency
  • Prevents less valuable pages from getting indexed (e.g. thank-you pages, confirmation pages, etc.)
  • Reduces duplicate content issues and any ranking penalties that result from them
  • Keeps lower-value content out of search results

How Does robots.txt Work?

When a search engine robot encounters a robots.txt file, it will read the file and obey the instructions. 

For example, if Googlebot comes across the following in a robots.txt:

User-agent: googlebot
Disallow: /confirmation-page/

Googlebot won’t be able to access that page to crawl or index it. It also won’t be able to access any of the other pages in that subdirectory, including:

  • /confirmation-page/meeting/
  • /confirmation-page/order/
  • /confirmation-page/demo/

If a URL is not specified in the robots.txt file, then the robot is free to crawl the page as it normally would.
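Google and Bing also support a more specific Allow directive, which can carve out an exception inside an otherwise disallowed subdirectory. A short sketch with illustrative paths:

User-agent: *
Disallow: /confirmation-page/
Allow: /confirmation-page/public-faq/

Here every URL under /confirmation-page/ is blocked except /confirmation-page/public-faq/, because the longer, more specific rule wins.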

Best Practices for robots.txt

Here are the most important things to keep in mind when implementing robots.txt:

  1. robots.txt is a text file, so it must be encoded in UTF-8 format
  2. robots.txt is case sensitive, and the file must be named “robots.txt”
  3. The robots.txt file must be placed at the root directory of your website
  4. It’s best practice to only have one robots.txt available on your (sub)domain
  5. You can only have one group of directives per user agent
  6. Be as specific as possible to avoid accidentally blocking access to entire areas of your website, for example by disallowing a whole subdirectory when you only meant to block a single page within it
  7. Don’t use the noindex directive in your robots.txt
  8. robots.txt is publicly available, so make sure your file doesn’t reveal to curious or malicious users the parts of your website that are confidential
  9. robots.txt is not a substitute for properly configuring robots meta tags on each individual web page (see the example just after this list)
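On the last two points: Google no longer supports a noindex rule inside robots.txt, so indexing control for an individual page belongs in a robots meta tag in that page’s <head>. A minimal example (the directive values are the standard ones; adjust them to your needs):

<meta name="robots" content="noindex, follow" />

This tells crawlers not to index the page while still following the links on it.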

Common Issues Related to robots.txt

When it comes to website crawlers, there are some common issues that arise when a site’s robots.txt file is not configured properly.

Here are some of the most common ones that occur and will be flagged by your SearchAtlas site audit report if they are present on your website.

1. robots.txt not present

This issue will be flagged if you do not have a robots.txt file or if it is not located in the correct place.

To resolve this issue, you will simply need to create a robots.txt and then add it to the root directory of your website.

2. robots.txt is present on a non-canonical domain variant

To follow robots.txt best practice, you should only have one robots.txt file for the (sub)domain where the file is hosted.

If you have a robots.txt located on a (sub)domain that is not the canonical variant, it will be flagged in the site auditor.

Non-canonical domain variants are versions of your domain that are considered duplicates of the master (canonical) version, such as the non-www or HTTP versions. If your canonical tags are properly configured, only the master version will be treated as the canonical domain, and that is where your robots.txt file should be located.

For example, let’s say your canonical variant is

  • https://www.website.com/

Your robots file should be located at:

  • https://www.website.com/robots.txt

In contrast, it should not be located at:

  • https://website.com/robots.txt
  • http://website.com/robots.txt

To resolve this issue, move your robots.txt to the canonical variant, or 301 redirect the non-canonical versions of the robots.txt URL to the canonical version.
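How you set up that redirect depends on your server. As one hedged illustration, assuming an nginx server and the placeholder hostnames used above, a catch-all redirect from the non-www variant could look like this (an equivalent HTTPS server block with certificates would also be needed):

server {
    listen 80;
    server_name website.com;
    # Send every request on the non-canonical host, including /robots.txt,
    # to the canonical https://www host
    return 301 https://www.website.com$request_uri;
}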

3. Invalid directives or syntax included in robots.txt

Including invalid robots directives or syntax can cause crawlers to still access the pages you don’t want them to access. 

If the site auditor identifies invalid directives in your robots.txt, it will show you a list of the specific directives that contain the errors.

Resolving this issue involves editing your robots.txt to include the proper directives and the proper formatting.
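For instance, two of the most common syntax mistakes are misspelled directives and missing colons. Shown against a corrected form (the path is illustrative):

# Invalid: misspelled directives and missing colons
User agent googlebot
Dissallow /private/

# Valid
User-agent: googlebot
Disallow: /private/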

4. robots.txt should reference an accessible sitemap

It is considered best practice to reference your XML sitemap at the bottom of your robots.txt file. This helps search engine bots easily locate your sitemap. 

If your XML sitemap is not referenced in your robots file, this will be flagged as an issue in the Site Auditor.

To resolve the issue, add a reference to your sitemap at the bottom of your txt file. 
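For example, a single Sitemap line at the end of the file is enough (the URL is a placeholder for your own sitemap location):

User-agent: *
Disallow:

Sitemap: https://www.website.com/sitemap.xml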

5. robots.txt should not include a crawl-delay directive

The crawl-delay directive instructs some search engines to slow down their crawling, which causes new content and content updates to be picked up later. 

This is undesired, as you want search engines to pick up on changes to your website as quickly as possible.

For this reason, the SearchAtlas site auditor will flag a robots.txt file that includes a crawl-delay directive.
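If your file contains a group like the one below (the user agent and value are illustrative), removing the Crawl-delay line resolves the flag:

User-agent: bingbot
Crawl-delay: 10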

Conclusion

A properly configured robots.txt file can be very impactful for your SEO. However, the opposite is also true: configure robots.txt incorrectly, and you can create huge problems for your SEO performance.

So if you’re unsure, it may be best to work with professionals to properly configure your robots.txt. Connect with one of our SEO professionals to learn more about our technical SEO services.

How to Whitelist the SearchAtlas Site Crawler on Cloudflare

If you want to run a site audit and monitor your site health using SearchAtlas, you may find that our site crawler gets blocked by your CDN.

Content Delivery Networks (CDNs) are a group of servers that help deliver website content faster. If your website relies on a CDN to increase the performance of your website, then it’s possible the CDN’s firewall will block our crawler from accessing your site.

Resolving this issue is fairly straightforward and requires whitelisting the SearchAtlas site crawler. This post will explain how to do so on Cloudflare, as it is the most popular CDN provider for webmasters.

If you are using another CDN, the process is similar, and can still be done by following the steps outlined below. 

Getting Started

SearchAtlas uses one primary IP address to monitor websites:

  • 35.243.150.225

You will need to whitelist this IP address in order to run successful SEO audits via SearchAtlas.

How to Whitelist SearchAtlas’s IP Address in Cloudflare

Follow these steps to proceed with whitelisting:

  1. Log in to your Cloudflare account
  2. Click the Firewall icon, then the Tools tab
  3. Add SearchAtlas’s crawl IP address under IP Access Rules
  4. Enter the IP address: 35.243.150.225
  5. Choose the action “Whitelist”
  6. Choose the domain that you want our crawler to have access to
  7. Click “Add”

After performing the above steps, SearchAtlas will have access to your website for monitoring.
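If you manage Cloudflare programmatically, the same rule can be created through Cloudflare’s v4 IP Access Rules API. The sketch below is a hedged example rather than official guidance; the zone ID and API token are placeholders, and you should verify the endpoint and payload against Cloudflare’s current API documentation:

curl -X POST "https://api.cloudflare.com/client/v4/zones/YOUR_ZONE_ID/firewall/access_rules/rules" \
  -H "Authorization: Bearer YOUR_API_TOKEN" \
  -H "Content-Type: application/json" \
  --data '{"mode":"whitelist","configuration":{"target":"ip","value":"35.243.150.225"},"notes":"Allow SearchAtlas crawler"}'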

For help and a better understanding of Cloudflare’s IP Access Rules, see Cloudflare’s support documentation.

Need help Whitelisting on Cloudflare?

If you are still unable to successfully run a site audit, please contact our support team between 9am-6pm (PST), M-F, via the chat feature in your dashboard. Or, book a software demo with someone from our team.

The Most Important HTTP Status Codes for SEO

Whenever a user types a web page url into their web browser and hits enter, they send a request to the web server to access that specific website. The web server responds with the requested page (plus any additional resources, like images or scripts, that the page needs), and the browser displays the page. The server also returns an HTTP status code with every response.

Most of the time, these HTTP status codes are not shown because the request was successful. However, when the server cannot access the requested resource, it will provide an explanation of why it wasn’t successful via a specific response status code.

This list of HTTP status codes will define the most common types of response codes you might see and those that may be impacting your SEO performance.

What are HTTP Status Codes?

The HTTP status code is a three-digit number that tells the browser what happened when it tried to connect to the server. HTTP status codes communicate to the web browser and its user whether or not their request was successful.

HTTP status codes are a big part of SEO because successful requests to the origin server make for a better experience for search engine crawlers and website visitors alike.

In contrast, response status codes that indicate errors or a missing target resource can signal to users, and Google, that the website owner is not doing the necessary maintenance of their website.

Types of Status Codes


There are five different series of status codes. All status codes are three digits. The beginning digit highlights the type of status code returned by the server.

  • 1xx: Provides Information
  • 2xx: Indicates Success
  • 3xx: Redirected page, meaning the page has been moved to another url
  • 4xx: Client error, meaning something is wrong with the requested web page
  • 5xx: Server error, meaning something occurred with the server’s connection
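To see which code a given url returns, you can request just the response headers from the command line (assuming curl is installed; the url is a placeholder):

curl -I https://www.website.com/some-page/

The first line of the output, for example “HTTP/1.1 200 OK”, shows the status code the server returned.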

Most Common HTTP Status Codes

There are 60+ possible status codes, but some of them are more common than others. Some are also especially important when thinking about search engine crawlers and what happens when they follow links to the various urls on our websites.

200: Success

Status codes in the 200 series are what you are aiming for. They communicate that the request was successful and the server returned the requested resource. 2xx codes indicate that the server is working properly and that the visitor’s browser and your website are connecting as they should.

Whenever a page does not return a 200 status code, the SearchAtlas site auditor will flag it in your report with the following message:

  • Status code not 200

301: Permanent Redirect

Arguably one of the most important status codes for SEO purposes, 301 redirects communicate that a web page has been permanently moved to a new location or a new url. When a user enters the url in their browser, or clicks on a link with the old url, they will be redirected to the new url of the page.

301 Redirects, when used properly, can help improve your SEO. They ensure that you do not lose link equity when moving or updating content on your website. For this reason, the SearchAtlas Site Auditor does flag issues related to 301 redirects when crawling and analyzing your site.

Some issues related to 301s that you might see highlighted in your issues report include:

  • 301 Does Not Redirect to HTTPS: 301 redirects should take users to the HTTPS version of a web page, as it provides a safer browser experience for users.
  • Redirect url not lowercase: Redirect urls should be lowercase so search engine crawlers do not mistake the new page as duplicate content or a duplicate version of the page
  • Internal links with 301 redirects: Google prefers that internal links point directly to the final url rather than passing through a 301 redirect, so webmasters should update their links with the new urls of the relocated pages.
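If you need to set up or fix a permanent redirect yourself, the method depends on your server or CMS. As one hedged illustration, on an Apache server a rule in the .htaccess file can look like this (both urls are placeholders):

# Permanently redirect the old page to its new location
Redirect 301 /old-page/ https://www.website.com/new-page/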

404: Not Found

Status codes in the 400 series are generally used when the client has made a request that the server can’t fulfill. 


For example, the 400 status code is used when the client sends a malformed request, and the 404 status code is used when the client requests a resource that doesn’t exist. The 401 status code is used when the client doesn’t have the appropriate authentication credentials. The 408 status code is used when the client takes longer to send its request than the server is willing to wait.

404s are not only bad for the user experience of your website, they are particularly bad for your SEO performance. If search engine crawlers are being repeatedly sent to unavailable or dead pages, Google is less likely to see your website as providing valuable content or a high quality page experience to users.

For this reason, the following 404 status code errors will be flagged in your site auditor report:

  • Url gives soft 404

What Causes a 404 Response Code?

Here are some of the potential reasons why a url might be 404ing and how to resolve the issue:

  • Deleted/Moved page: The page’s content may have been deleted or moved, causing a broken link. To fix, adding a 301 redirect would send the user and search engine crawlers to the new version of the page.
  • Incorrect URL: The url was incorrectly typed into the browser’s address bar or the wrong url was added to a link. Double check that your links are using the right urls.
  • Caching problems: The browser may cache the 404 error page instead of the actual content. Therefore, you keep seeing the error even when the website works for everyone else.
  • Missing Asset: If there is a missing asset, such as an image, CSS, or JavaScript file, it can generate a 404 error. The missing asset needs to be updated or replaced.

500: Internal Server Error

Status codes in the 500 series are general error messages. They are used when the server encountered an error while processing the request. These errors can often feel a bit like a mystery.

For example, the 500 status code is used when the server hits an unexpected condition that prevents it from fulfilling the request. The 501 status code is used when the server doesn’t support the functionality required to fulfill the request. The 502 status code is used when a server acting as a gateway receives an invalid response from the server behind it, and the 503 status code is used when the server can’t process the request because it’s overloaded or down for maintenance.

If your web page is returning a 500 status code error, try the following fixes:

  • Refresh your browser: This is the best place to start. A second request to the server may produce a successful HTTP status code.
  • Delete web browser cookies: Doing this may help reload the web page.
  • Deactivate a plugin: Especially if the 500 status code appeared shortly after you installed a plugin. It is possible that the plugin conflicts with other software, or that a software update made the system incompatible.
  • Come back later: It’s possible that future requests at a later time will be successful.

How Do You Know What the HTTP Status Code Is?

It’s important to investigate web page urls on your website that are producing an invalid response. Why? Because they can prevent users from arriving at the requested resource.

Resolving them can mean better keyword rankings and fewer site visitors bouncing away from your website.

There are two primary ways that you can check the response codes of your web pages.

Use your Google Search Console Account

In your GSC account, navigate to Index > Pages.

You’ll find a display summarizing various errors related to indexing. Messages about 404s or 500 errors will appear in this list.

Click on the error to then analyze the impacted pages more closely.

SearchAtlas Site Auditor Report

The SearchAtlas Site Auditor will check the HTTP response codes of your webpages. It will also flag any issues it identifies in relationship to the status code.

After running your site audit, navigate to the Issues tab in the Site Auditor dashboard. 

Click on the “Page URL” category. 

Look for any error messages mentioning HTTP status codes, and then click, “See Affected Pages.”

You’ll see a complete list of all of the pages on your website that are not returning 200 status codes.

Hand this list to your web developer to resolve the issue, or connect with one of our SEO experts to determine your next steps for resolving the issues.

Conclusion

Now that you understand the most important HTTP status codes for SEO, you can hopefully resolve any errors on your web pages.

But if you are still not quite sure why your web page urls are returning specific HTTP status codes, you may want to reach out to a technical SEO agency to see if they can help you resolve the issue. Our team at LinkGraph is here to help!

 

 

SEO for PDFs: Get your PDFs Ranking in the SERPs

There are times when a PDF file is the type of content that brings the most value to your audience. And just like with your traditional HTML pages, SEO for PDFs can help them earn keyword rankings so your target audience can discover them through organic search.

Although PDFs are not always the best format for SEO, Google does index them, and sometimes ranks them, meaning that your executive reports, white papers, survey results, and other PDF content should follow SEO best practices and be optimized for search.

How Does Google See PDF Files?

PDF files are treated like regular pages by search engine crawlers. Google converts PDFs into HTML, and then indexes the HTML version of the page.

In the SERPs, the PDF tag will be visible next to the title tag. This lets the user know that they will be directed to an indexed version of a PDF page rather than a traditional web page.

How to Optimize PDFs for SEO

Because Google does treat PDF files like regular web pages, that means all of the same on-page SEO best practices still apply.

Have an SEO-Friendly File Name

Just like Google looks to file names of your images to understand their relevance to your web page content, your PDF file name should communicate to Google what your content is about. 

Best practices for optimized file names include:

  • Short and Sweet
  • Includes target keyword
  • Accurate description of PDF content
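For example, a white paper reporting the results of a (hypothetical) 2022 local SEO survey might be named:

local-seo-survey-results-2022.pdf

rather than something like final_report_v3.pdf, which tells Google nothing about the content.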

Optimize your PDF Title and Meta Description for your Target Keywords

SEO Meta Tags like the title and meta description of your PDF will be visible to users in the SERPs. They will be used by Google to understand the primary topic of your PDF.

Including your target keywords in these elements of your PDF, ideally keywords with Keyword Difficulty scores that are achievable for your website, will help improve your ranking potential.

If you are using Adobe Acrobat Pro, you can edit the title of the PDF via:

  • File > Properties
  • Edit your title in the Title Field

Similarly, you can also edit your meta description by clicking:

  • File > Properties
  • Click Additional Metadata
  • Edit your meta description in the Description field

Designate Headings in your PDF Documents

Just like you use heading tags to help users navigate your web pages (and search engines understand the topical depth of your content), PDFs should also leverage headings.

Most likely, your white paper, report, or PDF document has some natural structure separated by headings. Make sure that you take the time to specify h1-h6 tags in Adobe Acrobat.

  • Navigate to the Tags icon on the left sidebar
  • Click on the text you want to specify as an h1-h6
  • Right click on the tag > Properties
  • Select the relevant heading from the Type dropdown list

Include Internal Links to your Website

Having internal links will help direct users to other relevant pages on your website. 

They also signal to Google the range of content that your website offers and what other topic areas you cover.

Google will follow the links in your PDFs, meaning you can use them to spread link equity to other pages.

Don’t Save PDFs as Images

Google is going to struggle to crawl and render the content of your PDF if it is saved as an image file. It also makes it more difficult for users if they want to highlight text.

PDF Issues Flagged in the SearchAtlas Site Auditor

The SearchAtlas site auditor will flag an issue if a web page is linking to a .doc file instead of a .pdf.

.doc and .docx are not seen as SEO best practices because they will not be included in Google’s index, meaning missed opportunities to get your content in front of more audiences.

And because .doc files are not as universally compatible as .pdf files, using PDFs instead provides a better experience for website visitors.

Should I Be Using PDFs?

In general, a web page is more likely to get in front of more audiences because of how easily it can be understood and indexed by search engines.

PDFs are generally seen as not great for SEO. But when you do have PDF content, you should make the most of it!

So the short answer is don’t make PDFs a big part of your SEO content strategy. But for content like annual reports and white papers that are in PDF form, optimize them!

 

Open Graph Tags Implementation & Best Practices

Open Graph meta tags are an important element that many websites forget to include on their web pages. These tags are important for ensuring that your content is appealing and clickable when shared on popular social media sites.

Here is a guide to what open graph tags are, why they matter, and how to implement them across your web pages.

What Are Open Graph Tags?

Open Graph tags are a set of HTML tags that help you control how your content is displayed on social media websites.

When someone shares one of your web pages on a social network like Facebook or LinkedIn, open graph communicates how to display and format the shared content.   

On the frontend, open graph tags make your content appear with the title, image, and display that you prefer, like this:

[Image: a Facebook post that uses Open Graph meta tags to display a title, image, and description]

But without Open Graph meta tags on your web pages, your content on social media sites may look like this:

[Image: a Facebook post shared without an Open Graph image tag, showing no preview image]

Which post would you be more likely to click on?

Are Open Graph Tags Important?

Open Graph is important because it allows you to control how your content appears when it is shared on social media.

Open Graph tags allow platforms to pull in specific details about the page and its contents to create a richer, more engaging social media post. They work very similarly to schema.org markup.

Open Graph is a great way to ensure that your content looks its best, displays your branding, and projects your content as valuable whenever it is shared online.

Are Open Graph Tags a Ranking Factor?

Open Graph is not a ranking factor that Google relies on when ranking web pages. But these tags can still indirectly impact your SEO.

How? Because social media is a great avenue for sharing content to grow brand visibility and site authority. If a webmaster sees your content on a social media site and enjoys it, they may choose to link to it on their own web pages, giving you a backlink and valuable link equity.

But if your content doesn’t utilize open graph meta tags, the content looks less appealing whenever shared on social networks or other popular sites, making users less likely to click on it, and webmasters less likely to link to it.

Who Uses OG Tags?

Open Graph Tags are used on major social media platforms, but some of the most important are:

  • Facebook
  • LinkedIn
  • Google+
  • WhatsApp
  • Slack

Twitter has its own version of Open Graph called Twitter Cards. To learn more, check out our post on Twitter Cards best practices.

Types of Open Graph

There are a number of different tags that you should be implementing on your web page.

If you do not have these open graph tags present on your web pages, they will be flagged in your SearchAtlas Site Audit Report as “Missing”.

og:title

The title of the webpage is important for helping users understand the purpose and focus of your content. In HTML, the og:title tag looks like this:

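A minimal example, with a placeholder title:

<meta property="og:title" content="What is a robots.txt File and Common Issues With Implementation" />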

Best practices for og:titles include the following:

  • Should be present on all shareable web pages on your website
  • Character count should be between 40-60
  • Focused on accuracy, brevity, and clickability

og:description

Similar to a meta description for SEO, the og:description should offer a brief description of the web page’s content to give the user more reason to click.

In HTML, the og:description looks like this:

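A minimal example, with placeholder text:

<meta property="og:description" content="Learn what a robots.txt file is, why it matters for SEO, and how to fix the most common implementation issues." />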

Best practices for og:description include:

  • Should be present on all shareable web pages on your website
  • Character count should be between 120-160
  • Accompanies the title to make a clickable, sharable piece of content

og:image

The og:image is arguably one of the most important open graph tags because of the visual nature of social media platforms.

The og:image tag includes the URL of an image that represents the content of the webpage. In HTML, it will look like this:

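A minimal example, with a placeholder image URL:

<meta property="og:image" content="https://www.website.com/images/robots-txt-guide.png" />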

Ideally, you should also use the og:image:width and og:image:height tags to let the social media websites know the size of your image.

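For example, for a 1200 x 630 pixel image (the dimensions here are placeholders):

<meta property="og:image:width" content="1200" />
<meta property="og:image:height" content="630" />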

Best practices for og:image open graph include:

  • Should be present on all shareable web pages on your website 
  • Uses images with a 1.91:1 ratio for the best clarity on desktop and mobile
  • Also includes og:image:width and og:image:height to ensure that the image loads properly when it is shared

og:url

The og:url tag tells social media websites the url of the webpage. It looks like this in HTML:

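A minimal example, with a placeholder URL:

<meta property="og:url" content="https://www.website.com/blog/what-is-robots-txt-file/" />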

Best practices for og:url include the following:

  • Follows SEO-friendly url best practices
  • Points to the canonical url, especially if there are multiple versions of the page

og:type

The og:type tag helps social media sites understand the type of webpage (e.g. “article”, “blog”, “website”, “video”, etc.). In HTML, it looks like this:

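A minimal example for a blog post:

<meta property="og:type" content="article" />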

Best practices for og:type include the following:

  • Use “article” for blogs and “website” for all other pages
  • Only define one type per page

og:locale

For international websites with content in multiple languages, the og:locale tag indicates the language of the web page content. In HTML, it looks like this:

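A minimal example for a French-language page (the locale value pairs a language code with a territory code):

<meta property="og:locale" content="fr_FR" />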

Best practices for og:locale are pretty simple:

  • Use the appropriate country code (Here’s a complete list of HTML country codes)
  • Only use for pages that are not written in English

Common Mistakes with Open Graph

There are a few common mistakes that webmasters make with Open Graph tags that will be flagged in your Site Auditor report.

Missing Open Graph Tags

The most common mistake that webmasters make is they simply do not include open graph tags on their web pages. 

If open graph is missing from a web page, you will see this issue displayed in your SearchAtlas site audit report.

[Screenshot: the missing Open Graph tag warning in the SearchAtlas site auditor]

This is an easy fix, and simply involves adding the missing open graph tag into the HTML header of your web page (according to the best practices listed above).

Improper Character Counts

For og:title and og:description, your character counts will be flagged if they do not follow best practices.

[Screenshot: an Open Graph character count issue flagged in SearchAtlas]

To resolve this issue, you will need to either shorten or lengthen the character count of your open graph tags. 

This can be done by editing the HTML of your individual web pages.

How to Implement Open Graph on your Web Pages

There are two primary ways that you can add open graph tags to your web pages.

The first is utilizing a popular plugin. For WordPress websites, the one most commonly used by webmasters is Yoast SEO.

[Screenshot: the Yoast SEO plugin]

Other popular CMS platforms like Wix, Shopify, and others have plugins that make adding Open Graph tags simple.

You can also add Open Graph tags manually by editing your web page HTML code. Open Graph tags should go in the <head> section of each web page.

[Screenshot: the HTML text editor in WordPress]

To ensure you have the required fields, you can use a tool like an open graph generator.

Conclusion

If you need assistance adding open graph to your website, our team of web development experts are here to help. Simply request a proposal from our team.

You can also run a site audit yourself after registering for a trial of our SEO software.
