Posts

How the Web is Won – Real-Life Tips for Getting Highly Ranked by Google

Lawrence Strauss

Google has by far the most comprehensive data on the web.  Its business depends almost entirely on people using its search engine, so it has an enormous interest in keeping searchers satisfied and in making sure that neither an upstart nor Microsoft overtakes it (as Google itself overtook AltaVista).  And, given its wealth, it purchases the best talent to constantly improve its search function, continually fulfilling the promise of artificial intelligence while acting less like a malleable machine1.

And if you want sales on the Internet, given the market share we surfers have granted it, there’s no avoiding Google.

So how can you get your site highly ranked by Google?  The answer: SEO (Search Engine Optimization), a practice that developed in the wake of the explosive growth of the web.

Twenty years ago there was no “Optimization”; you were trying to appeal to the fledgling Yahoo’s employees, who could still manage to look at and review every site.  But when machines supplanted people because of the volume of web pages, the software could be fooled with myriad techniques, including the popular and persistent keywords meta tag.  (Please see Search Engine madness by Lawrence Strauss in the April 2016 edition of Bryley Information and Tips.)

Because its business is built almost exclusively on search results, Google got much better at understanding site owners, seeing through their desire to be ranked first and the techniques they use to get there.  So what’s come about is a return to the only really timeless technique, memorably expressed by Phil Frost: the Golden Rule of SEO is to create the web page you would want to find if you were searching.

But first, a diversion into much less poetic territory: like life itself, Google isn’t fair.

Big business breaks the rules all the time, and Google rewards it with the best rankings.  Take, for example, the Microsoft-founded travel site Expedia: Expedia was penalized (i.e., knocked down in some search results) by Google for violating its rules about manufacturing in-bound links.  (In-bound links, or links on other sites pointing to Expedia, are meant to be understood by Google’s PageRank as independent votes that boost Expedia’s credibility.)

Why does Google, if it’s interested in serving searchers with good information, reward a site like that?  It has been suggested that the reason is this: if Expedia were missing from search results where people would expect to find it, people would doubt whether Google search was working correctly.

So, small businesses are being made to adhere to standards that big businesses can ignore.

If, according to Google, nobody much would notice if your business were missing from the results, and you violate Google’s Quality Guidelines, Google can algorithmically exact a penalty on your site.  These penalties (assessed against over 200 criteria) are not so easy to clear up.  On the bright side, one of the best things you can do for your business in Google search, and for your business in the world, is to build it (or, to use the buzz-word, build its “brand”) so that it cannot be ignored.  Then it will slough off Google’s Guidelines as if it were launched by Bill Gates.

What exactly is a “Brand”?

I just saw a memorable branding of the bad guys with a red-hot metal bat-symbol in the Batman v Superman movie.  The word comes to business from livestock hide-marking, and it’s because of this connection that business-people understandably focus on the logo.  David Ogilvy taught advertising agencies in the 1980s an already decades-old chant: “If your client groans and sighs, make his logo twice the size.”  Ogilvy was grumpy about it, because a symbol is really an almost inconsequential part of doing business.  And, as concerns our topic, symbols are unreadable, and so useless, to Google search, demonstrating that there is much more to the idea of “brand” when it comes to getting a good Google rank.

It’s hard to argue against market dominance being a factor in having a business that is acknowledged with a top position by Google.  But there are great brands in every industry that win the rankings and sales appropriate to their business size and model.  Bear this in mind when thinking about building your brand: establish its role in the market (sometimes called its “mission” or “vision”) and its values (the means by which the business will fulfill its role).

The more consistently these ideas are articulated, verbally and non-verbally, and, most important, repeatedly put into practice, the more clearly the meaning of the brand will be revealed.  And how will it be revealed?  Ever heard the expression that a business has the customers it deserves?  Well, the meaning of your brand will end up being reflected back to the business in the form of recognition.  And recognition can take many forms:

  • Conversations on social media,
  • Reviews on Facebook pages and other websites,
  • Awards from trade associations, links from industry peers,
  • Citations in industry publications, and so on.

(It’s also not a bad idea to get the ball rolling by occasionally asking industry colleagues and customers to discuss, via social media, a page you’ve added to your site.)  The main benefit of all this is that your organization gains in reputation and therefore credibility (and, along this path, a good Google rank), and therefore sales (or, for a nonprofit, another form of fulfillment of its role).

They Call Me the Seeker

Most of the web has been built ignoring data about how people search.  And in many cases that’s as it should be: for instance, if you have a specific story to tell, if you are building a page or site for a specific community that is being directed to the site in other ways (this newsletter, for example, is created to help the Bryley community), or if you have research to publish.

But if you want your site to be found by strangers among the billions of web pages, consider how people are using the web, which leads back to the idea of getting a good ranking by thinking like a searcher.  Start by asking yourself:

  • In my field what are the questions that Internet searchers are asking?
  • What is the motivation behind the searchers’ questions:
    • Are they looking for free advice only?
    • Are they looking to see who is an expert that they can hire?
    • Are they looking to connect with people with similar interests?
  • How are the searchers asking those questions:
    • What are the popular resources for those kinds of queries?
    • Why do you think those sites are popular?
    • What words are people using to search?

If you’re not already, get familiar with Google AdWords’ Keyword Planner.  For the last couple of years you’ve needed to sign up for an AdWords account to access it, and it is designed for Google’s paid search-results program, but its data is derived from Google searches, and so it is helpful in understanding what’s being searched for.

One of the best ways to use the Keyword Planner is to enter a top-Google-ranking competitor’s site in the field revealed after you select Search for new keywords using a phrase, website or category.  Google usually does a great job of parsing the site, giving few irrelevant returns, but it gives a lot of them.  These can be filtered in the menu to the left by entering, for example, a minimum of 1,000 searches/month and a minimum suggested bid of $1.50, as recommended by Dan Shure on the AdvancedWebRanking blog.  This will filter out the keywords that no one searches for and reveal the few that businesses value.

Once you have a keyword or keyword phrase around which you’re planning to build your page, now what?  How do you avoid having a page that, like Expedia, violates Google’s Quality Guidelines, yet still attract the interest of Google?

Short answer: put the keyword or keyword phrase in the title tag of your page.  Also include it in the “description” meta tag.  This meta tag is what Google will likely use as the search-return description under the link to your page.  It is there to be seen not by Google but by a prospective visitor, so the description should contain the keyword or a supporting idea, be plain English, and be compelling enough to invite a click – after all, it is these people that we’re really interested in, not Google.
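To make the short answer concrete, here is a minimal sketch of what that looks like in a page’s HTML head (the business name and keyword phrase are invented for illustration):

```html
<head>
  <!-- The keyword phrase goes in the title tag, which search engines weigh heavily -->
  <title>Managed IT Services for Small Business | Example Co.</title>
  <!-- The description meta tag is written for the searcher, not the engine:
       plain English, contains the keyword, compelling enough to invite a click -->
  <meta name="description"
        content="Example Co. delivers managed IT services to small businesses: flat-rate plans, proactive monitoring and local support.">
</head>
```

As a rule of thumb, a description of roughly 150–160 characters tends to display in the search results without being truncated.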

Long answer: as Bill Gates said years ago, “Content is king.”  And thanks to Google’s synonym support, the content of the page should not be redundant, but should reflect the variety of terminology used to explain a given subject.  As Matt Cutts, head of Google’s Webspam team, advised: “For example, if you’re talking about a USB drive, some people might call it a flash drive or a thumb drive.”  And understand that authoritative content, as Google prefers, means at least 500 words on your subject per page.

Bear in mind the terms that people will type and think about synonyms that can fit naturally into your content.  Don’t stuff an article with keywords or make it awkward; rather, incorporate different ways of talking about a subject in a natural way.

In May 2015, Google announced that the share of search on mobile had outstripped search on PCs for the first time.  With the announcement came suggestions for organizations to make their sites able to deliver information in the moment it is wanted; in other words, make sure your site’s load time is speedy.  To help test and improve this factor in Google rankings, Google created the PageSpeed Tools.  (To test your page-load speed, enter a web address in the field and click “Analyze”.)  The PageSpeed Score ranges from 0 to 100 points: a higher score is better, and a score of 85 or above indicates that the page is performing well.

While not a wholly comprehensive accounting of what it takes to be in the first rank of a Google search, the suggestions covered here are consistently the biggest factors: being the kind of organization whose content people want to link to, creating content that answers what searchers are seeking, and making sure your pages load fast.

Funnily enough, these answers are not too different from those of twenty years ago, when Dave and Jerry at Yahoo! were linking to pages manually.  But unlike then, no two searchers see the same results.

Search history is recorded and weighted in the results (unless a searcher opts out), as are social connections.  Couple these with a searcher’s physical location, and you get the personalized results shown by Google and Bing.

For the web developer, strategically not much has changed, except the weight is perhaps more strongly on reputation.  But if you’re developing a site, how do you see the results without being affected by these filters and get a truer sense of how your site is faring?  You’ll have to use your browser’s private mode: in Chrome, an Incognito Window; in Firefox, Safari and Internet Explorer, Private Browsing.

1 Google notes to support these statements: $500B market capitalization.

Search Engine madness

Lawrence Strauss, Strauss and Strauss

A long time ago in the Information Age, there was Yahoo!. Yahoo! was the work of Jerry Yang and David Filo, grad students at Stanford, and was a guide to the soon-to-be-bursting-out World Wide Web. Here is a snapshot of an early version of Yahoo!, when there were about 200,000 websites (now there are around a billion).

Yahoo! was the work of people, who spent their time looking for interesting sites on the Web and, when they found something of value, the discovered site would make the Yahoo! list, sometimes with a brief, opinionated review of what to expect on a visit. And an opinionated review is what netizens sought to deal with the voluminous web: What do the people at Yahoo! think is a good resource for any given subject?

But when the sites and the pages ballooned in the mid-’90s, the growth begged for developers to write software-based means to reveal the Web’s contents in a helpful way.  And the engineers adapted database-sorting software to the task, authoring Lycos, Overture, Excite and AltaVista.  AOL was the most popular way to access the Internet at the time, and so it was imitated, generating what became known as “portals”: each of those software search engines, one by one, tried to follow AOL’s model and create a content-rich site so that visitors would theoretically never have to leave1.

Google, also developed by students at Stanford, Larry Page and Sergey Brin, took a different approach.  Google emerged from this trend of bloated interfaces as a bare-bones search engine.  Google also incorporated a different technology, PageRank.  PageRank aided in prioritizing search results not just on the basis of a page’s content, but also on the basis of how often it is linked to by other web pages.  The thinking behind this was that a good resource will be highly valued by others, and so these others will naturally want to link to it on their own web pages.  Google uses a combination of methods to arrive at its results for a given search.  And Google, so confident it would lead visitors to the right answer, included an “I’m Feeling Lucky” button to take a visitor directly to the top item on the search-results page.

Google’s technology and approach left the others in the dust … and now we are in an age in which Google is nearly the only major search engine left.  And Yahoo!, while still extant, is being sold for parts by CEO Marissa Mayer.

Today, it’s estimated that 80% of the time that we search the web, we Google.

(See comScore’s comScore Releases February 2016 US Desktop Search Engine Rankings and Search Engine Land’s Who’s Really Winning The Search War? by Eli Schwartz, 10/24/2014.)  The other options include Microsoft’s successfully relaunched Live Search, now known as Bing (and, on Yahoo’s site, branded as Yahoo! search), which handles around 20% of Web searches.  And there are lesser-known search engines like DuckDuckGo (although growing because of its privacy aims, it’s mostly a Bing-derived search2 and represents less than 1% of searches) and similar, even less frequently used privacy-protected searches derived from Google, such as ixquick.com.

The Business of Searching for Business

Although the web was popularly dubbed Web 2.0 around ten years ago and 3.0 more recently3, people still use it to do most of the same things as in the ’90s.  And 70% of the time, we start with a web search (per the 2014 research of 310 million web visits by the web content-creating company Conductor, in the Nathan Safran article Organic Search is Actually Responsible for 64% of Your Web Traffic).  So search is important to businesses that want to use the web to get searchers to consider their services.

And not only is the top position potentially lucky for the Google searcher: according to a study by the ad network Chitika, the top position on the search-results page is clicked 33% of the time (from the article No. 1 Position in Google Gets 33% of Search Traffic by Jessica Lee).  So, no wonder there is an industry, SEO (Search Engine Optimization), devoted to trying to get pages into that top position.

As a result of the desire for the top position, there is an ongoing cat and mouse game between makers of web pages (or their SEO contractors) and search engines. The makers are the cats who want to catch that elusive mouse of top-of-page placement when someone searches using the ideas that connect to their service.

One of the first examples of this game was the infamous meta name=“keywords” tag.  Created by the World Wide Web Consortium (W3C) in the ’90s out of a desire to get useful indexing information to the search engines, the keywords meta tag could contain a list of words that would help a search engine’s software robot have ready access to the important ideas on a given page4.  The only problem was how quickly web-page writers tried to stuff (aka spam) the keywords tag with words they thought would make a page rise to the top of the pack of search results (and I’ve seen some ridiculous things, like porn words placed by an “SEO expert” in the keywords meta tag of a retailer).
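For illustration, here is what the tag looked like, both as intended and as stuffed (a hypothetical example; the terms are invented):

```html
<!-- As the W3C intended: a short list of terms summarizing the page -->
<meta name="keywords" content="garden tools, pruning shears, trowels">

<!-- As stuffed by page writers: repetition and irrelevant crowd-pleasing
     terms meant to fool the search engines' robots -->
<meta name="keywords" content="garden tools, garden tools, garden tools,
  cheap, free, best, celebrity, garden tools">
```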

In 2002, AltaVista’s John Glick said, “In the past we have indexed the Meta keywords tag but have found that the high incidence of keyword repetition and spam made it an unreliable indication of site content and quality.”  (See the Search Engine Watch article Death of a Meta Tag by Danny Sullivan, 9/30/2002.)  And AltaVista was one of the last to support the keywords tag.

And this game goes on today; only the venue changes.  Google just announced that it is delisting or downgrading sites that have outbound links it considers illegitimate (links intended to boost the PageRank of the pages being linked to).  In the current case, bloggers were linking to sites in exchange for gifts.  Google discovered the pattern of behavior and exacted penalties on the offending bloggers’ sites.  (See the Search Engine Land article Google’s manual action penalty this weekend was over free product reviews by Barry Schwartz, 4/12/2016.)

Google is our (mostly) sole arbiter of the content of the voluminous web, which we access by its rankings of importance (aka software-derived opinionated review).  And an opinionated review is what netizens seek in order to deal with the voluminous web: What does the Google engine think is a good resource for any given subject?  Which of course sounds a lot like trying to appeal to David and Jerry’s Yahoo!: fundamentally, the rules that applied to catching Yahoo’s favor are the rules that apply to winning Google’s highest ranks.

Next installment: How the Web is Won.

Notes

1Keeping visitors was valuable in two ways.  In lieu of a truer model, a site’s “eyeball” count was a measure by which too many web-based companies’ valuations went stratospheric.  Also, ad revenues were based on the traditional media-derived model of cost per impression.

2DuckDuckGo’s search is not identical to Bing in the way Yahoo’s is, as of this writing. DuckDuckGo, per its own site, claims to have its own web robot collecting information on web pages and also aggregates information from disparate sources, chiefly Bing, and uses a proprietary method to weigh the importance of information from all the sources.

3Web 2.0 was meant to indicate increased content coming from web users (e.g. blogs and YouTube channels).  Web 3.0 is Web inventor Tim Berners-Lee’s proposal to extend the web’s HTML to include access to additional code and computer languages, so that computers can process the data in the HTML; it is designed so that both humans and machines can make use of the content in a way native to each.  (See the W3C standards on the Semantic Web.)

4Meta tags, or metatags, are mostly hidden HTML content; their functions include page refresh and page-content description.

Recommended Further Reading:

  • The Search: How Google and Its Rivals Rewrote the Rules of Business and Transformed Our Culture by John Battelle.
  • Googled: The End of the World As We Know It by Ken Auletta.