Google has published a new research paper that describes a dramatically different way to rank web pages. The researchers claim significant improvements over deep neural network algorithms that calculate relevance.

The paper describes a method of ranking web pages called Groupwise Scoring Functions.

Without confirmation from Google, we cannot know for certain whether it is in use. But because the researchers claim significant improvements, it is not far-fetched to consider that Google may already be using this algorithm.

Does Google Use Published Algorithms?

Google has stated in the past that “Google research papers in general shouldn’t be assumed to be something that’s actually happening in search.”

Google rarely confirms which algorithms described in patents or research papers are in use. That is the case with this algorithm.

Is this Algorithm Part of the March 2019 Core Update?

This research paper shows how Google is focusing on understanding search queries and understanding what web pages are about. This is typical of recent Google research.

Google has recently introduced a broad core update that is reported to be among the biggest in years. Is this algorithm a part of that change? We don’t know and we will likely never know. Google rarely discusses specific algorithms.

It’s possible that something like this could be one part of a multi-part update to Google’s search ranking algorithm. It is not clear, but it is unlikely to be the only one. The March 2019 Core Ranking Algorithm consists of a series of improvements.

Why this Algorithm is Important

The research paper begins by noting that current machine learning algorithms assign labels and values to web pages individually, each web page in isolation from the others. The algorithms then compare those scores to determine which web page is most relevant.

Here’s how the research paper describes how current algorithms work:

“While in a classification or a regression setting a label or a value is assigned to each individual document, in a ranking setting we determine the relevance ordering of the entire input document list.”

The research paper then proposes that considering a property shared across the list of relevant web pages, such as their age, can give a clue as to what users want. So instead of scoring each web page in isolation and then comparing the scores, the ranking algorithm can first review the whole list, better understand what a user wants, and choose a better web page.

This is how the research paper describes the new algorithm:

“The majority of the existing learning-to-rank algorithms model such relativity at the loss level using pairwise or listwise loss functions. However, they are restricted to pointwise scoring functions, i.e., the relevance score of a document is computed based on the document itself, regardless of the other documents in the list.

…the relevance score of a document to a query is computed independently of the other documents in the list. This setting could be less optimal for ranking problems for multiple reasons.”
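To make the quote concrete, here is a minimal Python sketch (not Google’s implementation; the feature names, weights, and margin are all made up for illustration) of what “pairwise loss over pointwise scores” means: the training loss compares two documents, but each score is still computed from one document alone.

```python
def pointwise_score(doc):
    # A pointwise scoring function: the score depends only on this
    # document's own features, never on the other candidates.
    return 2.0 * doc["text_match"] + 0.5 * doc["link_score"]

def pairwise_hinge_loss(doc_pos, doc_neg, margin=1.0):
    # The "relativity" lives only in the loss: during training we ask
    # that the relevant document outscore the non-relevant one by at
    # least `margin`. The scores themselves remain pointwise.
    return max(0.0, margin - (pointwise_score(doc_pos) - pointwise_score(doc_neg)))

relevant = {"text_match": 0.9, "link_score": 0.8}
irrelevant = {"text_match": 0.3, "link_score": 0.2}
print(pairwise_hinge_loss(relevant, irrelevant))  # -> 0.0 (pair already ordered correctly)
```

This is exactly the restriction the researchers point out: even though the loss is relative, at scoring time each document is still evaluated without looking at its neighbors in the list.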

Cross-document Comparison

The research paper then shows how the current method of ranking web pages is missing an opportunity to improve the relevance of search results.

This is the example the research paper uses to illustrate the problem and the solution:

“Consider a search scenario where a user is searching for a name of a musical artist. If all the results returned by the query (e.g., calvin harris) are recent, the user may be interested in the latest news or tour information.

If, on the other hand, most of the query results are older (e.g., frank sinatra), it is more likely that the user wants to learn about artist discography or biography. Thus, the relevance of each document depends on the distribution of the whole list.”

In this example, the age of the web pages that are relevant to the search query can help to refine which answer is the best answer.
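The idea can be sketched in a few lines of Python. This is an illustrative toy, not Google’s actual Groupwise Scoring Function: the features, weights, and threshold are invented for the example. It shows how a list-wide property (average freshness, standing in for the age distribution in the calvin harris / frank sinatra example) can change which result ranks first.

```python
def pointwise_scores(docs):
    # Fixed weights: each score uses only the document's own features.
    return [0.8 * d["text_match"] + 0.2 * d["freshness"] for d in docs]

def groupwise_scores(docs):
    # The average freshness of the whole candidate list decides how
    # much freshness matters for every document in that list.
    avg_freshness = sum(d["freshness"] for d in docs) / len(docs)
    weight = 0.6 if avg_freshness > 0.5 else 0.1
    return [(1 - weight) * d["text_match"] + weight * d["freshness"]
            for d in docs]

results = [
    {"text_match": 0.9, "freshness": 0.2},   # strong match, older page
    {"text_match": 0.6, "freshness": 0.9},   # recent news page
    {"text_match": 0.5, "freshness": 0.85},  # recent tour-dates page
]
print(pointwise_scores(results))  # older strong match ranks first
print(groupwise_scores(results))  # fresh list -> recent news ranks first
```

With pointwise scoring the older, text-heavy page wins every time. The groupwise version notices that most candidates are fresh, infers (as in the paper’s example) that the query is probably about something current, and promotes the recent page instead.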

Modeling Human Behavior for Better Accuracy

The research paper then notes that search engine users tend to compare search results relative to other web pages. They then suggest that a ranking model that does the same thing is more accurate.

“…user interaction with search results shows strong comparison patterns. Prior research suggests that preference judgments by comparing a pair of documents are faster to obtain, and are more consistent than the absolute ratings.”

“Also, better predictive capability is achieved when user actions are modeled in a relative fashion… These indicate that users compare the clicked document to its surrounding documents prior to a click, and a ranking model that uses the direct comparison mechanism can be more effective as it mimics the user behavior more faithfully.”

The New Algorithm Works

When considering algorithm research, it’s important to note whether the researchers stated that it improved and advanced the state of the art.

Some research papers note that the improvements are minimal and that the cost of achieving those gains is significant (time and hardware). We consider less successful research not to be a good candidate for inclusion in Google’s search algorithms.

When a research paper reports significant improvements coupled with minimal cost, then in our opinion these kinds of algorithms have a higher likelihood of being included in Google’s algorithms.

The researchers concluded that this new method improves Deep Neural Network and tree-based models. In other words, this is useful. Google never says if an algorithm is used or how it is used. But knowing that an algorithm provides significant improvements and can scale improves the likelihood that the algorithm may be used by Google, if not currently then at some point in the future.

This is the value of knowing about information retrieval research: you can know what is possible. If an approach has not even been studied, that is a strong clue that a theory claiming Google uses it is unlikely to be true.

For example, correlation studies caused the SEO community to believe that Facebook likes were a ranking factor. But if those SEOs had bothered to read research papers they would have known that such a thing was highly unlikely.

In this case, the researchers state that this method is highly successful. In the following quote, please note that DNN means Deep Neural Networks. GSF means Groupwise Scoring Function.

Here is the conclusion:

“Experimental results show that GSFs significantly benefit several state-of-the-art DNN and tree-based models…”

How this Can Help Your SEO

Ranking in Google is increasingly less about traditional ranking factors. Twenty-year-old ranking factors like anchor text, heading tags, and links are decreasing in importance.

This research paper shows how considering commonalities between relevant pages may provide clues to what users want. Even if Google isn’t using this algorithm to rank web pages, the concept is still useful to you.

Knowing what users want can help you better understand their information needs and create web pages that better meet those needs.

And that may increase your ability to rank. Chase the carrot, not the stick.

Read the research paper here:
Learning Groupwise Scoring Functions Using Deep Neural Networks (PDF)

Google’s March 2019 Core Update

It appears Google heard about the confusion over the naming of the broad core algorithm update from March 12 and decided to clear things up. Google said on Twitter, “Our name for this update is March 2019 Core Update.”

Why did Google name it? Google doesn’t often give names to updates, but in this case, Google said, “We think this helps avoid confusion; it tells you the type of update it was and when it happened.”

Can Google change the names of updates? Yes, they can, and they have done so in the past. The original name we had for the Panda update was actually the Farmer update. Google didn’t like the name Farmer update and renamed it the Panda update, which was based on the lead engineer’s last name.

So yes, Google has renamed updates in the past and they’ve stuck.

The tweet: here is the tweet from Google, sent earlier this morning, with the new name:

“We understand it can be useful to some for updates to have names. … Our name for this update is ‘March 2019 Core Update.’”

Will it stick? Will the new name stick or will people still call it the Florida 2 update? It is hard to say but I suspect people will go with calling it what Google wants it to be called, the “March 2019 Core Update.”

How the Google Update of March 2019 Impacted Websites

Analysis shows that Google continues to fine-tune its search results for queries related to particularly sensitive topics. The March 2019 Google Core Update has had a large impact on websites.

While some websites have benefited from the update, others have lost visibility and seen their rankings decline.

Google now values trust and branding

One clear trend resulting from this update seems to be Google favoring websites that can provide a higher level of trust, particularly when users are searching for sensitive keywords. The main beneficiaries of this focus are websites with a strong brand profile and a broad topical focus. On the flip side, niche websites dealing with these topics have seen their rankings fall.

User signals grow in importance

One key takeaway from this update is that Google’s algorithm has increased its weighting of user signals when calculating rankings. The results show that domains that improved their SEO Visibility following the Google Core Update have higher values for time on site and page views per visit, and lower bounce rates, than their online competitors.

 

            Time On Site    Pages per Visit    Bounce Rate
Winners     2:29            2.7                58%
Losers      1:58            2.4                65%

Just look at the contrast between the average time on site for the winners and the losers. The update winners have an average time on site of 2:29 minutes – that’s 26 percent more than the update losers, whose average time on site is 1:58 minutes. Similar differences can be found when analyzing pages per visit and bounce rate.
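The 26 percent figure is easy to verify with a quick calculation, converting the times to seconds first:

```python
# Quick arithmetic check of the time-on-site comparison above.
winners_tos = 2 * 60 + 29   # 2:29 -> 149 seconds
losers_tos = 1 * 60 + 58    # 1:58 -> 118 seconds

pct_more = (winners_tos - losers_tos) / losers_tos * 100
print(round(pct_more))  # -> 26
```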

These findings are of great interest because these user signals are some of the “hardest” ranking factors. If users spend longer on a domain, open more pages per visit and bounce back to the Google search results less often, then the page must be doing something right. However, optimizing for these three metrics isn’t the answer for every single search query. Some searches – and the search intents behind them – will be fully satisfied by a short visit to a website (though here there is an ever-increasing danger that Google is already, or will soon be, able to answer such queries itself).

This analysis is based on the top 100 winners and losers of the Google update. Domains have been excluded where user data could not be provided, where there has been a domain migration, or where the weekly SEO Visibility change was under 10 percent.

3 ways to win from Google Updates

With its first official update of 2019, Google has made another major alteration to the workings of its algorithm. But how can website owners deal with losses or how can they protect themselves from being negatively impacted by any future updates?

1. Build brand & trust

One of the key investments for website owners is building a brand and expertise in a topic segment so that users and Google trust the site. This also creates long-term user relationships and makes the website less dependent on individual Google updates: more direct traffic, less dependency on organic search.

“Branding and user experience are becoming increasingly important for Google. Websites that aren’t positioned at the end of a transactional user journey have to offer more than just good, holistic content and crawler-friendly technical infrastructure.” Malte Landwehr, VP Product, Searchmetrics

 

The Google search results are no different from any other place where users interact with brands – people are more likely to click on a result if they recognize a website than if they don’t. This creates improved user signals and can have a long-term positive impact on rankings, traffic and conversions. In this regard, it is possible for a website to establish itself as a well-known brand authority for subjects around a particular topic. Niche pages that are only dedicated to one keyword cluster are likely to struggle to compete long-term with larger, better-recognized brands.

2. Follow Google’s Quality Rater Guidelines

Most readers of this post are probably already familiar with Google’s Quality Rater Guidelines. Google publishes these as recommendations for testers who are tasked with manually evaluating the quality of websites appearing in the search results. But, hand on heart, how many of you have actually read and memorized all 164 pages of the document? It is worth doing, because this is where Google has clearly formulated the kinds of search results it considers desirable, making the Quality Rater Guidelines a valuable source of tips for website owners.

3. Optimize for user intent

Does my page meet user expectations? Do I provide precisely the answers that users are looking for? To ensure that the answer to both of these questions is a definitive “yes”, you need to do more than optimize for one or two main keywords. However, covering a topic holistically won’t always be successful either. Theoretically, every topic and every search query demands its own kind of answer. The expectations users have when searching for a particular topic, the related questions that users are asking, and the intent (navigational, informational or transactional) motivating the search query are best analyzed with the aid of a dedicated software solution.
