A recently discovered Google exploit has revealed some surprising insights into the consensus scoring, query classification, and site quality scores used by Google's ranking system.
Mark Williams-Cook, an expert in SEO and Google Ads management services, discovered an exploit revealing that Google uses more than 2,000 properties to classify websites and queries, along with specific classifications such as query types and consensus scoring.
Why Should We Care?
The unveiling of this vulnerability has given us a clearer view of how Google's search ranking system actually works. Companies offering paid marketing solutions, such as social media management and Google Ads campaign management services, can capitalize on these insights to deliver better results than before.
Last year's massive Content API Warehouse leak already gave us a lot of information about Google Search. Now we have fresh insights into query classification, scoring, site quality scores, and more.
Consensus Scoring
According to the findings, consensus scoring is a process Google uses to count the number of passages in a piece of content that agree with, contradict, or remain neutral toward the "general consensus". Based on this, Google generates a consensus score that can affect your rankings for a specific query, especially debunking queries (e.g., "Is the Earth flat?").
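To make the idea concrete, here is a minimal sketch of passage-level consensus counting. The labels, equal weighting, and function name are assumptions for illustration; the leak does not expose Google's actual formula.

```python
# Illustrative sketch only: a toy consensus score built from labeled passages.
# The label set and weighting are assumptions, not Google's actual method.

def consensus_score(passage_labels: list[str]) -> float:
    """Score content from -1 (fully contradicts consensus) to 1 (fully agrees).

    Each passage is pre-labeled as 'agrees', 'contradicts', or 'neutral'
    relative to the general consensus on the query topic.
    """
    if not passage_labels:
        return 0.0
    weights = {"agrees": 1, "contradicts": -1, "neutral": 0}
    total = sum(weights[label] for label in passage_labels)
    return total / len(passage_labels)

# Example: a page answering "Is the Earth flat?" with mostly
# consensus-aligned passages.
labels = ["agrees", "agrees", "neutral", "contradicts", "agrees"]
print(consensus_score(labels))  # 0.6 -> leans toward the consensus
```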
Query Classification
The exploit revealed that Google classifies almost every query into one of eight "refined query semantic classes", listed below (a toy classifier sketch follows the list):
- Definition
- Short fact
- Reason
- Bool (short for Boolean – yes/no questions)
- Instruction
- Consequence
- Comparison
- Other
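As a rough illustration of how queries might map onto these eight classes, here is a naive keyword heuristic. The patterns are invented for demonstration; Google's real classifier is not public and is certainly far more sophisticated.

```python
# Illustrative sketch only: keyword heuristics for the eight
# "refined query semantic classes" named in the leak.
import re

# Patterns are checked in order; the first match wins.
CLASS_PATTERNS = [
    ("bool", r"^(is|are|can|does|do|did|will|was|were)\b"),
    ("definition", r"^(what is|what are|define|meaning of)\b"),
    ("reason", r"^(why|what causes)\b"),
    ("instruction", r"^(how to|how do i|steps to)\b"),
    ("comparison", r"\b(vs|versus|compared to|difference between)\b"),
    ("consequence", r"^(what happens|what if)\b"),
    ("short fact", r"^(when|where|who|how many|how much|how old)\b"),
]

def classify_query(query: str) -> str:
    q = query.lower().strip()
    for label, pattern in CLASS_PATTERNS:
        if re.search(pattern, q):
            return label
    return "other"  # fallback class, as in the leaked taxonomy

for q in ["is the earth flat", "how to bake bread",
          "iphone vs android", "cheap flights"]:
    print(q, "->", classify_query(q))
# is the earth flat -> bool
# how to bake bread -> instruction
# iphone vs android -> comparison
# cheap flights -> other
```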
Google adjusts its algorithms for specific query types according to these classifications. For example, Google has been applying different ranking weights to Your Money or Your Life (YMYL) queries since 2019.
Site Quality Scores
According to Williams-Cook, Google relies heavily on site quality scores when building search results, and it has patented algorithms both for calculating site quality scores and for predicting them. Agencies that offer digital marketing services for startups should focus on improving site quality score to maintain and improve rankings.
The site quality score is calculated at the subdomain level and depends on:
- Anchor text relevance around the web.
- User interactions (for example, clicks).
- Brand visibility (for example, branded searches, or searches that mention the brand’s name).
Google scores site quality on a scale of 0-1. A website that fails to reach a minimum score of 0.4 on this scale is ineligible to appear in search features such as People Also Ask and Featured Snippets.
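Here is a minimal sketch of how those three signal families could combine into a 0-1 score with the 0.4 eligibility cutoff the leak describes. The equal weights, signal normalization, and function names are assumptions for demonstration; the real scoring function is not public.

```python
# Illustrative sketch only: blending the three signal families the leak
# associates with site quality, plus the ~0.4 eligibility threshold.

def site_quality_score(anchor_relevance: float,
                       user_interaction: float,
                       brand_visibility: float) -> float:
    """Combine normalized signals (each in 0-1) into a 0-1 quality score."""
    # Equal weighting is an assumption; Google's actual weights are unknown.
    return (anchor_relevance + user_interaction + brand_visibility) / 3

def eligible_for_features(score: float, threshold: float = 0.4) -> bool:
    """Per the leak, sites below ~0.4 miss features like Featured Snippets."""
    return score >= threshold

score = site_quality_score(anchor_relevance=0.5,
                           user_interaction=0.3,
                           brand_visibility=0.2)
print(round(score, 2), eligible_for_features(score))  # 0.33 False
```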
Click Probability
According to the exploit, Google mostly relies on a "click probability" for every organic search result rather than using click-through rates directly in rankings.
On this, Williams-Cook said: "And so it appears that Google does factor in how likely it thinks someone is going to be to click on your result. This would change if we modify the page title. They have a tool that can give you hints about this in the Google Ads Planner because it will tell you an estimated click-through rate there."
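One way to see why a position-adjusted click probability differs from raw CTR is to compare a result's observed CTR with what its position alone would predict. The baseline CTR figures below are invented for demonstration, not Google data, and this ratio is only a conceptual stand-in for whatever Google actually computes.

```python
# Illustrative sketch only: position-adjusted click performance.

# Assumed baseline click-through rates for positions 1-5 (invented numbers).
EXPECTED_CTR_BY_POSITION = {1: 0.28, 2: 0.15, 3: 0.10, 4: 0.07, 5: 0.05}

def click_probability_ratio(observed_ctr: float, position: int) -> float:
    """Compare a result's observed CTR with what its position alone predicts.

    A ratio above 1.0 means the result attracts more clicks than expected,
    e.g. because its title is unusually compelling.
    """
    expected = EXPECTED_CTR_BY_POSITION[position]
    return observed_ctr / expected

# A result in position 3 getting a 14% CTR outperforms the 10% baseline.
print(round(click_probability_ratio(observed_ctr=0.14, position=3), 2))  # 1.4
```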
About the Data
Williams-Cook and his team of digital marketing, SEO, and social media ads management experts analyzed more than 90 million Google Search queries, amounting to roughly 2 terabytes of data. For responsibly disclosing the endpoint vulnerability, Google paid Williams-Cook and his team a bug bounty of $13,337.