Brand new keywords sound super tricky to find — except for a ton of easy ones that come around every January: simply adding the year to whatever keyword you’re targeting. People can start getting traffic from “2020” keywords long before they show up with any kind of search volume in typical keyword-research tools, since their data lags. (Hat tip to Glen Allsopp, who I got that idea from.)
Your article reached me at just the right time. I’ve been working on getting back to blogging and have been at it for almost a month now. I’ve been fixing SEO-related stuff on my blog, and after reading this article (which, by the way, is way too long for one sitting) I’m kind of confused. I’m looking at bloggers like Darren Rowse, Brian Clark, and so many others who use their blogs as a platform to educate their readers more than to chase search rankings (though I’m sure they think about those too).

Awesome blog post, but I did not understand the logic behind ghost posts. Could you please clarify? Is the idea that because the new post was published on the fourth page rather than at the top of the feed, Google would give it higher rankings? And what is the connection between SEO, promoting content with Facebook ads, and getting traffic? Does getting social traffic from paid social ads improve organic rankings?
In my experience, about 65% of my traffic comes from search engines, and the rest comes from social sites, referrals, and direct visits. Connecting with bloggers in a similar niche is one of the best ways to get traffic: visiting relevant sites within your micro niche brings direct, quality traffic back to you, which in turn can affect keyword rankings and PageRank in line with Google’s guidelines. To earn higher search rankings, you can’t focus only on SEO; other factors matter for drawing readers’ attention online. Thanks for this page, it will help me a lot, and other newbies too…
Think about the words that a user might search for to find a piece of your content. Users who know a lot about the topic might use different keywords in their search queries than someone who is new to the topic. For example, a long-time football fan might search for [fifa], an acronym for the Fédération Internationale de Football Association, while a new fan might use a more general query like [football playoffs]. Anticipating these differences in search behavior and accounting for them while writing your content (using a good mix of keyword phrases) could produce positive results. Google Ads provides a handy Keyword Planner that helps you discover new keyword variations and see the approximate search volume for each keyword. Also, Google Search Console provides you with the top search queries your site appears for and the ones that led the most users to your site in the Performance Report.
Consider your competition. Look at what your competitors are doing and how they are performing in their search marketing before you decide how you can best compete with them. Research what search terms they rank organically for. Consider if you can execute a plan to top their SERP placements. Also, look at what paid terms they are using to drive traffic to their own sites. As you perform this research, look for gaps that you can fill and areas where you will be unable to compete in both paid and organic search.
Brian, I’m going through Step 3, which refers to keeping one version of the website. I found a very good free tool (https://varvy.com/tools/redirects/) to recommend. It checks the redirect chain and shows you the number of hops; more hops mean more delay. For example, if I use your manual method to check https://uprenew.com, all looks good. However, when I check with the tool, I see there is one unnecessary hop, which I can now fix. Hope this helps. : )
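If you prefer to check redirect hops yourself rather than rely on a third-party tool, here is a minimal sketch in Python. It assumes the third-party requests library is installed and uses https://example.com as a placeholder for whatever page you want to audit; it is an illustration, not the method described in the article.

    import requests  # third-party: pip install requests

    def count_redirect_hops(url):
        """Follow redirects for a URL and report each hop along the way."""
        response = requests.get(url, allow_redirects=True, timeout=10)
        # response.history holds one Response object per intermediate redirect.
        for hop_number, hop in enumerate(response.history, start=1):
            print(f"Hop {hop_number}: {hop.status_code} {hop.url}")
        print(f"Final: {response.status_code} {response.url} "
              f"({len(response.history)} redirect hop(s))")
        return len(response.history)

    if __name__ == "__main__":
        # Placeholder URL; swap in the page you want to check.
        count_redirect_hops("https://example.com")

Zero hops printed means the URL resolves directly; each extra hop is a round trip you can usually eliminate by linking or redirecting straight to the final URL.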
There are other parts of SEO you should pay attention to after your audit to make sure you stay competitive. After all, the technical foundation isn't the end of the road for SEO success. It's important to pay attention to your competition's SEO activity, keep an eye on the newest search engine best practices, and maintain local SEO best practices if your business depends on customers visiting a physical address. All of these are elements of a successful SEO strategy and should complement your audit and ongoing SEO maintenance.
Fill this bucket by building a fan base. Build a social network, get people to link to you, get people to share your t-shirt pages on their social networks saying ‘I want this!’, get people to comment, leave testimonials, and post pictures of themselves wearing or using the product. Create a fan base and then rally them to link to you and talk about you. That’s how you prove to Google that you are trustworthy and authoritative.
Hey Sharon, great post! Re. dwell time – I’ve read conflicting opinions: some say that Google DOES consider it an ‘important’ ranking signal, and others say it doesn’t, because dwell time can sometimes be a misleading indicator of content quality. For example, when a user searches for something specific and finds the answer immediately on the recommended page (meaning the content is actually spot on), they return to the SERPs very quickly. I have been unable to locate any definitive statements (written or spoken) from anyone at Google suggesting that dwell time IS still a factor in ranking considerations, but it makes sense (to me, anyway) that it should be. Do you have any ‘proof’ one way or the other as to whether Google definitely considers dwell time or not?

Site. Migration. No two words elicit more fear, joy, or excitement in a digital marketer. When the idea was shared three years ago, the company was excited. They dreamed of new features and efficiency. But as SEOs, we knew better. We knew there would be midnight strategy sessions with IT, more UAT environments than we could track, and deadlines, requirements, and compromises forged through hallway chats. ... The result was a stable transition with minimal dips in traffic. What we didn't know, however, was the amount of cross-functional coordination required to pull it off. Learn more in this video!

To avoid undesirable content in the search indexes, webmasters can instruct spiders not to crawl certain files or directories through the standard robots.txt file in the root directory of the domain. Additionally, a page can be explicitly excluded from a search engine's index by using a meta tag specific to robots (usually <meta name="robots" content="noindex">). When a search engine visits a site, the robots.txt located in the root directory is the first file crawled. The robots.txt file is then parsed, and it instructs the robot as to which pages are not to be crawled. Because a search engine crawler may keep a cached copy of this file, it may on occasion crawl pages a webmaster does not wish crawled. Pages typically prevented from being crawled include login-specific pages such as shopping carts and user-specific content such as search results from internal searches. In March 2007, Google warned webmasters that they should prevent indexing of internal search results because those pages are considered search spam.[47]
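To make the mechanics concrete, here is a minimal sketch using Python's standard urllib.robotparser module, which implements the same parsing logic a well-behaved crawler applies. The robots.txt rules and URLs are illustrative placeholders, not taken from any real site.

    import urllib.robotparser

    # Example robots.txt rules a site might serve from its root directory.
    robots_txt = """
    User-agent: *
    Disallow: /cart/
    Disallow: /search/
    """.strip().splitlines()

    parser = urllib.robotparser.RobotFileParser()
    parser.parse(robots_txt)  # a crawler would normally fetch and parse the live file

    # A well-behaved crawler checks each URL against the rules before requesting it.
    print(parser.can_fetch("*", "https://example.com/cart/checkout"))   # False: disallowed
    print(parser.can_fetch("*", "https://example.com/products/shirt"))  # True: crawlable

    # To keep an individual page out of the index entirely, the page itself can
    # include a robots meta tag such as: <meta name="robots" content="noindex">

Note the division of labor: robots.txt controls what gets crawled, while the noindex meta tag controls what gets indexed once a page is crawled.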


Search engine optimization (SEO) is often about making small modifications to parts of your website. When viewed individually, these changes might seem like incremental improvements, but when combined with other optimizations, they could have a noticeable impact on your site's user experience and performance in organic search results. You're likely already familiar with many of the topics in this guide, because they're essential ingredients for any web page, but you may not be making the most out of them.

Robots.txt is not an appropriate or effective way of blocking sensitive or confidential material. It only instructs well-behaved crawlers that the pages are not for them, but it does not prevent your server from delivering those pages to a browser that requests them. One reason is that search engines could still reference the URLs you block (showing just the URL, no title or snippet) if there happen to be links to those URLs somewhere on the Internet (like referrer logs). Also, non-compliant or rogue search engines that don't acknowledge the Robots Exclusion Standard could disobey the instructions of your robots.txt. Finally, a curious user could examine the directories or subdirectories in your robots.txt file and guess the URL of the content that you don't want seen.
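As an illustration of that last point, here is a minimal sketch (again assuming the third-party requests library and using example.com plus a hypothetical /private-reports/ path as placeholders) showing that the rules file itself is world-readable and that a "disallowed" page is still served to anyone who requests it directly.

    import requests  # third-party: pip install requests

    site = "https://example.com"  # placeholder domain

    # Anyone can read the robots.txt file and see exactly which paths you
    # were hoping to keep quiet.
    rules = requests.get(f"{site}/robots.txt", timeout=10)
    print(rules.text)  # might reveal, e.g., "Disallow: /private-reports/"

    # A direct request for a disallowed path is still answered by the server;
    # robots.txt never blocks delivery, it only asks polite crawlers to stay away.
    page = requests.get(f"{site}/private-reports/", timeout=10)
    print(page.status_code)  # likely 200 unless the server enforces real access control

Genuinely sensitive content needs real access control on the server (authentication or authorization), not a robots.txt entry.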
Consider the length of your typical customer buying cycle. If your products and services have a short customer buying cycle, meaning your customers know what they want, search for it, and buy it, you may benefit from SEM ads that put your product right where customers will see it. Longer buying cycles, where customers research and compare for weeks or months, may not perform as well with SEM, as there isn’t an immediate buy after seeing one ad.
If this sounds good to you, feel free to fill out our discovery form. This will give us a chance to analyze your site and put together a game plan to rank you higher. We promise to get in touch within 24–48 hours. Once we go through our analysis, we will set up a meeting where we can show you how you can dominate your market online and stay there.
Early versions of search algorithms relied on webmaster-provided information such as the keyword meta tag or index files in engines like ALIWEB. Meta tags provide a guide to each page's content. Using metadata to index pages was found to be less than reliable, however, because the webmaster's choice of keywords in the meta tag could potentially be an inaccurate representation of the site's actual content. Inaccurate, incomplete, and inconsistent data in meta tags could and did cause pages to rank for irrelevant searches.[10] Web content providers also manipulated some attributes within the HTML source of a page in an attempt to rank well in search engines.[11] By 1997, search engine designers recognized that webmasters were making efforts to rank well in their search engine, and that some webmasters were even manipulating their rankings in search results by stuffing pages with excessive or irrelevant keywords. Early search engines, such as AltaVista and Infoseek, adjusted their algorithms to prevent webmasters from manipulating rankings.[12]