Throughout this case study, you will learn the techniques that were used to increase transactions on an e-commerce website by over 90%. This was done by building a custom, but replicable, strategy for a long-standing client of two years. You will soon learn the technical, on-site, and backlink approach that allowed this e-commerce client to grow their traffic by 48% year on year. That traffic growth saw transactions increase by 93.20%, which generated an additional $49k for the year for the client, or a 39.45% increase in overall revenue (from $123.6k up to $174.5k).

The Challenge

The client is a niche specialist in the confectionery industry, offering high-end chocolates to customers in the USA and around the world. The kind of chocolate you eat until you explode. They specialize in both wholesale and retail chocolate sales and want to attract professional clients from the food industry as well as "off the street" buyers. Building relationships, authority, and a brand following are very important in this business.

The client approached The Search Initiative two years ago looking to increase conversions, develop a solid link building strategy, and get an in-depth, on-site SEO audit to improve their traffic metrics. The following is a walk-through of the steps you can take as an e-commerce site manager to achieve similar gains to our favorite chocolate client.

Perform a Technical SEO Audit

Crawl Management

One of the more common issues faced by e-commerce sites is crawl management. If Google crawls areas of the site that have no use to the bot or to users, it can be the result of faceted navigation, query strings, or sometimes Google's temperamental flux. Crawling such pages is a waste of Google's time, as they are generally very similar to, or duplicates of, an original page and offer Google nothing new.

Since Google only has a finite amount of time to spend on a website before it bounces off, you want to control that time as much as possible. You need Google to spend its time only on pages that have value. That way, valuable pages are crawled more often and new changes on the site are picked up more quickly.

What's even better: Google tells you what its algorithm "thinks" of your pages. How? In the Search Console Coverage report. One of the areas especially worth inspecting with care is Coverage report > Excluded > Crawled but not indexed.

When reviewing Search Console, you should be looking for URL patterns. In the "Crawled but not indexed" section of Google Search Console for our client's site, we found many random query-string URLs that Google recognized but wasn't indexing. In Google's eyes, these URLs had no value. After manually reviewing them, we discovered that Google was right.

To stop the search engine spending more time on these URLs and wasting its crawl budget, the easiest approach is to use robots.txt. The following directives were included in the robots.txt file:

User-agent: *
Disallow: /rss.php*
Disallow: /*?_bc_fsnf=1*
Disallow: /*&_bc_fsnf=1*

This was enough to take care of it. Bear in mind that when you clean up the index using robots.txt, one part of the Search Console Coverage report will start going up: Blocked by robots.txt. This is normal. Just make sure to review the URLs reported there every now and again, ensuring that only the pages you meant to block are showing up. If you suddenly see a big spike, or URLs you did not want blocked, it means either you made a mistake or Googlebot crawled its way somewhere you did not know about.
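If you want to sanity-check rules like these before deploying them, you can test them against a list of URLs pulled from a crawl export or the Coverage report. Below is a minimal Python sketch that treats each Disallow pattern as a simple wildcard rule (with * matching any sequence of characters). It is a simplified matcher, not a full robots.txt parser (no Allow precedence, no $ anchors), and the test URLs are made up for illustration.

```python
import re

# Disallow patterns from the robots.txt above ('*' matches any sequence of characters)
DISALLOW_PATTERNS = [
    "/rss.php*",
    "/*?_bc_fsnf=1*",
    "/*&_bc_fsnf=1*",
]

def pattern_to_regex(pattern: str) -> re.Pattern:
    """Convert a robots.txt path pattern into a regex anchored at the start of the path."""
    return re.compile("^" + re.escape(pattern).replace(r"\*", ".*"))

RULES = [pattern_to_regex(p) for p in DISALLOW_PATTERNS]

def is_blocked(path_and_query: str) -> bool:
    """Return True if any Disallow rule matches the URL path plus query string."""
    return any(rule.search(path_and_query) for rule in RULES)

# Hypothetical URLs, e.g. taken from a crawler export or Search Console
test_urls = [
    "/rss.php?type=rss",
    "/chocolate-boxes/?_bc_fsnf=1&brand=123",  # faceted navigation query string
    "/chocolate-boxes/",                       # a real category page - must stay crawlable
]

for url in test_urls:
    print(f"{url} -> {'BLOCKED' if is_blocked(url) else 'allowed'}")
```

Running a check like this against your money pages before pushing the new robots.txt is a cheap way to make sure you are only blocking the junk.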
Index Management

Index management involves removing pages that contribute no value to the user from Google's index of your site. Google's index of a site is the list of all the pages of that site it could return to a user in the Google SERPs.

Unlike crawl management, pages that should not be in the index are not always pages that present no value to Google. For example, "tag" pages are useful for internally linking articles or products, and therefore have value in that they can help Google understand the relationship between pages. At the same time, though, these are not the kind of pages you want to see in the SERPs, and by having them indexed, Google will crawl them more regularly. Consequently, 'bloating' the index in this way holds your site back, as the search engine spends its limited resources – its crawl budget – on pages that don't convert or are naturally thin in content.

The client had the site set up in such a way that internal search results and tag pages were being indexed. These provided no value to a user whatsoever, nor would they effectively contribute to better rankings from the search engine's perspective. The most common offenders are exactly these kinds of pages: internal search results, tag pages, and parameter-driven URLs. The tricky part is that you have to identify all the URL parameters and page types that have no value for the SERPs before you can noindex them (a quick verification sketch follows the broken-links discussion below).

As a quick note, it is important to understand that there are cases where index management and crawl management fall under the same umbrella. For example, Google may be crawling non-value query strings and indexing them at the same time. As a result, this is both an indexation issue and a crawling issue. Double the fun.

Broken Links

Broken links are a troublesome issue that needs to be resolved if you want a well-oiled website with authority flowing freely across your important pages through PageRank. Broken links prevent users from navigating the site easily and effectively, and they can mean users miss the opportunity to find their way to valuable e-commerce pages.

A broken page, or 404 page, is in essence a page that returns an error because the URL does not exist or no longer exists. It is commonly caused by old pages being deleted while internal links from within your site still point at them. The client had 404 errors in abundance, and many internal links were broken – the result of changing their site in the past without updating the link structure (or doing a proper URL mapping).

To find and resolve these, you need to crawl the website. Any popular crawler like Sitebulb, Ahrefs, or Screaming Frog will do the trick. Here's how you can do it using Sitebulb: under Link Explorer > Internal Links > Not Found you can identify where the internal links to these 404 URLs are. After this, you should go through these URLs one by one and remove the broken links manually. Where possible, replace the links pointing at non-existent pages with a link to a relevant, working page. This is particularly beneficial if you are replacing a link from an old, no-longer-existing product page with a link to a new, functioning product page.

You may need to fix hundreds of these broken links using this manual technique. All this effort is to ensure no link equity gets lost between the pages you want to rank, especially the money-making pages. Yes, it's mundane. Yes, it's necessary. Yes, in most cases, it's worth it.
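If you don't have a crawler licence handy, the same check can be scripted. The sketch below is a simplified stand-in for Sitebulb's Not Found report: it assumes the third-party requests package is installed, and the START_URL domain is a made-up placeholder. It crawls internal links and reports any that return a 404, along with the page that links to them.

```python
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse

import requests  # third-party: pip install requests

START_URL = "https://www.example-chocolate-shop.com/"  # hypothetical domain
DOMAIN = urlparse(START_URL).netloc

class LinkExtractor(HTMLParser):
    """Collects href values from <a> tags on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        href = dict(attrs).get("href")
        if tag == "a" and href:
            self.links.append(href)

def find_broken_links(start_url, max_pages=500):
    status = {}    # url -> HTTP status code of that URL
    sources = {}   # url -> set of pages that link to it
    queue = deque([start_url])

    while queue and len(status) < max_pages:
        page = queue.popleft()
        if page in status:
            continue
        try:
            resp = requests.get(page, timeout=10)
        except requests.RequestException:
            status[page] = None
            continue
        status[page] = resp.status_code

        if "text/html" not in resp.headers.get("Content-Type", ""):
            continue

        parser = LinkExtractor()
        parser.feed(resp.text)
        for href in parser.links:
            url = urljoin(page, href).split("#")[0]
            if urlparse(url).netloc != DOMAIN:
                continue  # only audit internal links
            sources.setdefault(url, set()).add(page)
            if url not in status:
                queue.append(url)

    # Report every internal link whose target came back as a 404
    for url, code in status.items():
        if code == 404:
            for source in sources.get(url, []):
                print(f"{source} links to missing page {url}")

if __name__ == "__main__":
    find_broken_links(START_URL)
```

The output gives you the same two pieces of information you need from a crawler report: the broken target URL and the source page where the link has to be removed or replaced.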
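Circling back to the index-management step above: once you have decided which page types (tag pages, internal search results, and so on) should carry a noindex, it is worth verifying that the directive is actually being served. Here is a minimal check, again assuming requests is available and using made-up URLs for the page types discussed earlier; the regex is a rough check that assumes the name attribute comes before content.

```python
import re

import requests  # third-party: pip install requests

# Hypothetical examples of the page types discussed above
PAGES_THAT_SHOULD_BE_NOINDEXED = [
    "https://www.example-chocolate-shop.com/search?q=truffles",   # internal search results
    "https://www.example-chocolate-shop.com/tag/dark-chocolate/", # tag page
]

# Rough pattern for <meta name="robots" content="..."> (assumes name precedes content)
META_ROBOTS = re.compile(
    r'<meta[^>]+name=["\']robots["\'][^>]+content=["\']([^"\']+)["\']',
    re.IGNORECASE,
)

for url in PAGES_THAT_SHOULD_BE_NOINDEXED:
    resp = requests.get(url, timeout=10)
    # noindex can be set in the HTML head or via the X-Robots-Tag response header
    header = resp.headers.get("X-Robots-Tag", "")
    match = META_ROBOTS.search(resp.text)
    meta = match.group(1) if match else ""
    noindexed = "noindex" in header.lower() or "noindex" in meta.lower()
    print(f"{url}: {'noindex OK' if noindexed else 'still indexable!'}")
```

A spot check like this catches the common mistake of noindexing a template in the CMS but forgetting the parameterized variants of the same pages.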
Internal 301 Redirects

In addition to finding broken links, crawler tools are also great at picking up internal redirects. These are internal links that point at a redirected URL, causing a hop through an intermediate URL instead of going directly to the destination page, which is the optimal route. The pattern looks something like this: following the red arrows, the link points from the source to a page that is redirected with a 301 HTTP response code, and only then, finally, lands on the correct page (returning a 200 OK code). In short: not good. Following the green arrow, the link points from the source directly to the correct page (200 OK), with no interim "hop" by way of the redirected page (301). In short: good.

Don't get me wrong – one internal redirect is normally not an issue in itself. Sites change, things get moved somewhere else. It's natural. It becomes a problem when a site has many internal redirects, because this starts to impact the crawl budget of the site: Google spends less time on core content pages that actually exist (200) and more time trying to navigate the site through a thicket of redirected pages (301). Similar to solving broken links, you have to run a crawl and go through the identified links manually, replacing them with the correct, final page URL (a short sketch for spot-checking individual links follows at the end of this section).

Page Speed and Content Delivery Optimization

I cannot stress enough how important speed optimization is. In this day and age, it's a non-negotiable must for a site to be responsive to users. A slow site that takes time to load, in most cases, results in users bouncing off the site, which is not only a loss in traffic but also a loss in potential sales. And guess what…
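Returning to the internal-redirect cleanup above: a crawler will list these hops for you, but you can also spot-check an individual internal link with a few lines of Python. The sketch below again assumes requests is installed and uses a made-up URL; it prints every 301/302 hop a link takes before reaching its final response.

```python
import requests  # third-party: pip install requests

def show_redirect_chain(url):
    """Print each hop an internal link takes before reaching its final destination."""
    resp = requests.get(url, timeout=10, allow_redirects=True)
    for hop in resp.history:  # each intermediate 3xx response, in order
        print(f"{hop.status_code}  {hop.url}")
    print(f"{resp.status_code}  {resp.url}  (final)")
    if resp.history:
        print("-> update the internal link to point straight at the final URL")

# Hypothetical old product URL that now 301s to a new one
show_redirect_chain("https://www.example-chocolate-shop.com/old-product-page/")
```

Any link that prints more than one line before "(final)" is a candidate for being updated to the destination URL directly.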