English Google Webmaster Central office-hours hangout

JOHN MUELLER: Welcome everyone to today's Google Webmaster
Central office-hours hangout. My name is John Mueller. I’m a webmaster trends analyst
here at Google in Switzerland. And part of what we do is talk
with webmasters and publishers about websites and web search
issues that they might have. Looks like we have a nice crowd
again today– lots of questions that were submitted as well. As always, I’ll give a chance
to any of the new faces here in the Hangout to
ask the first questions. LYLE ROMER: If nobody new wants
to ask a question, I got one. JOHN MUELLER: All right. LYLE ROMER: Basically, I
just had a question regarding the mobile friendly test. I noticed something
as we strive to make sure we’re doing
everything properly and have all of our
technical ducks in a row. The mobile friendly test seems
to– whenever we test a page– like tomorrow I can do it. Test one page, it’ll fail
the test the first time. Then if I just go and test
another page on our site, it will pass. And then if I go back to the
first page, that one will pass. And this is pretty repeatable. So we're a little concerned that it might be something that we're doing wrong
or possibly there’s a bug in the test. JOHN MUELLER: I’m not aware
of any bug in the test. So that leaves the
other option open. I don’t know. It probably depends a bit on
what you see in the screenshot there. What I could imagine
is that maybe we’re having trouble fetching some
of the JavaScript or CSS files from your server. Maybe something is timing
out there sometimes. And that causes us to be
able to fetch the page but not like all of the styling
and JavaScript information. So we might not be able
to pass that through. And these are files that we
usually can cache on our side. So once we’ve been
able to fetch them, we can probably just reuse the
previously fetched version. So that might be something
that you’re seeing there. Where if, for example,
we kind of timeout on some of these
things– you should see that in Search
Console as well– then that’s something that
might be happening there. With regards to timing
out, you could double check your server to make
sure that it’s really not an issue on your
server– that it’s really kind of more like a timing
issue there that Google just didn’t have enough bandwidth
available for your site at the moment. You can make sure
that it’s responding as quickly as possible
to these requests– that the pages have a minimal number of embedded JavaScript and CSS files, so that we don't have to do
hundreds of requests just to render one page. So those are kind of
the standard things I’d watch out for. I wouldn’t see this
as a critical issue. But I would see it
as something where– if you have a bit of time,
it’s probably worth digging in to figure out what’s
happening there. Because maybe users see this
kind of timing or slowness as well. LYLE ROMER: OK, yeah, because
definitely when it fails, the screenshot doesn’t
show the mobile site. But yet, in Webmaster
Tools, whenever we do the fetch as
Google as mobile, that always shows the
mobile site properly. So it seems to only
be an issue when we run the mobile friendly test. JOHN MUELLER: I don’t know if
in this case it would work. But sometimes you’ll see
individual resources that weren’t able to be fetched. And you can kind of
like click on that expand icon to get a
list of those resources. And you could
probably double check there to see if there is
anything that’s continuously showing up there in that test. LYLE ROMER: OK, thank you, I
will look into that, thanks.
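For debugging intermittent timeouts like the ones John describes, it can help to time how quickly the server answers requests for a page's subresources. Below is a minimal sketch using only the Python standard library; the URL list is a placeholder– in practice you would plug in the CSS and JavaScript URLs referenced by the page that fails the mobile-friendly test.

```python
import time
import urllib.request

def time_fetch(url, timeout=5.0):
    """Fetch a URL and return elapsed seconds, or None on timeout/error."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            resp.read()
    except Exception:
        return None
    return time.monotonic() - start

# Hypothetical list– in practice, use the CSS/JS URLs
# referenced by the page you are testing.
resources = [
    "https://example.com/",
]

for url in resources:
    elapsed = time_fetch(url)
    print(url, "TIMEOUT/ERROR" if elapsed is None else f"{elapsed:.2f}s")
```

Anything that regularly approaches or exceeds the timeout is a candidate for the kind of intermittent fetch failure described above.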
AUDIENCE: Did I miss the window for you guys to ask questions? JOHN MUELLER: Go for it. AUDIENCE: I work on a site
with a huge navigation menu. And so I’ve read
lots of stuff about no following internal links and
never no follow internal links. I was wondering if
there’s circumstances where no following
may be a good idea. So I’m just thinking
of Googlebot crawling the site and a page where
the navigation may not be actually relevant to the
user, but it just lives there. And it’s the first
thing the bot gets to. So how much use is that to users? How much am I steering them
in the wrong direction? JOHN MUELLER: I usually wouldn’t
worry too much about that. That’s something where we can
recognize the navigation fairly easily. And we can still crawl
the site normally, kind of recognizing the
structure within the website as well. So unless your navigation
has like thousands of links in there,
then that’s something I wouldn’t really worry about. If it does have thousands
of links in there, then that’s probably more
of a usability issue anyway. AUDIENCE: I’m working on that. JOHN MUELLER: OK, but if
you’re looking at something– I don’t know, I’d just say like
hundreds of links– something around that order of
magnitude, then that’s something we should
be able to deal with. That’s not something where
you need to artificially tweak the navigation to kind of make
it disappear for Googlebot. AUDIENCE: OK, thanks. AUDIENCE: Hey John, I have one. JOHN MUELLER: Sure, go for it. AUDIENCE: So it’s interesting. I’m working on a site
that’s industry leader, you know, market
leader in their field. They have a thousand employees. Good company, no
ranking penalties, or any specific spammy stuff
that they did in the past. But for their main
keyword, their one product, they don’t rank anywhere in the
U.S. But in the Netherlands, in the UK, all the
other engines there, you know, pretty well ranking
on the first or second page. But in the U.S.,
in their market, they have absolutely no
rankings for their main keyword. For everything else
they’re doing fine. First page, feature snippets,
a knowledge craft, everything. But there it seems like
there’s one keyword phrase that they’re absolutely hit on. I don’t know if
it’s a Penguin thing or if you ever see anything like
that, but only in one country. JOHN MUELLER: That’s
probably hard to diagnose like in a Hangout like this. I’d probably need that example. So if you can send that to me
by email or just a private note on Google+, then that would
be interesting to look at. I can’t guarantee that we can,
like, tweak that and fix that. But I will definitely
show it to the team and see what they think. And we can kind of figure out if
there’s something on our side. Or if they think,
well, the algorithms are actually
treating it correctly in this case because
there is something weird, unexpected happening that
might not be visible directly. AUDIENCE: Yeah, I’ll do that. I think the team
was at SMX Advanced, but they didn’t get
a chance to ask Gary. So I’ll send that to you. And that will be awesome. JOHN MUELLER: Thank you so much. OK, sure, glad to take a look. All right, let me
start off with some of the submitted questions. And if there are more questions
from you all in between, feel free to jump on in. Let’s see. Starting here, would you
agree that the best way to improve the rankings
of a landing page would be to have lots of useful
content based on the landing page subject and
have it internally linked to that landing page
from your blog section? If not, what would you do? So, in general, if you
have a landing page for a specific
topic, then having information on that topic
definitely makes sense. So that’s hard not to
really agree with regards to linking it internally. If it’s an important
page, then definitely show that it’s important through
your internal links as well. So that doesn’t necessarily mean
it has to be linked from a blog but it could be from your home
page– from different parts of your website depending on
where you think it makes sense to find users that are
interested in the content and guide them there. I had some pages incorrectly set
as 404s and 410s some time ago. If I was to change these
back to status code 200 and then 301 then
to the current URLs, will Google start passing any
page rank to the new pages again, or not? Yes, if you change–
if you kind of recreate pages that were
flagged as 404, or 410, then we’ll start treating
those as normal pages again. So if you put content
there, then we can start indexing that
again, especially if we find new links to those pages. If you have redirects
there, then we can start passing that
page rank on as well. So if you kind of reconsider
the decision to remove content completely
from your site. And say, oh this old
URL I want to re-use for the same kind of content
[INAUDIBLE] for something else, then you can definitely do that. We got an unnatural links
penalty two years ago, which then changed to "impacts links"– which we successfully recovered from six months ago. However, the SEO agency
disavowed almost everything to get rid of the impact links. Can we put some of
the good links back? Sure, if you disavow
too much and you disavow things which are
actually normal good links, then you can definitely remove
those from the disavow file. Submit the new file
with those links removed, and then we’ll
be able to take those into account again. That’s probably
not something where you’ll see a big change
immediately happen. But over time, as we
recrawl those URLs, we’ll be able to take
that into account again. In general, while I
recommend taking, kind of, like a rough look
at your links when it comes to link penalties,
link manual actions, I wouldn’t recommend removing
everything from your site. Because then you’re
really removing a lot of things that
might be actually kind of beneficial and normal
for your website as well.
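For reference, the disavow file discussed here is a plain UTF-8 text file with one URL or `domain:` entry per line and `#` starting a comment. A small sketch with made-up domains– reinstating a wrongly disavowed link just means deleting its line and re-uploading the file:

```text
# Spammy directory links– keep these disavowed.
domain:spammy-directory.example
https://link-farm.example/page-with-link.html

# A normal editorial link that was disavowed by mistake
# would simply be deleted from this file before re-uploading.
```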
How can we reverse the negative impact of a core algorithm update? Do you regularly
refresh the site data or do we need another core algorithm update to see positive changes? We drastically improved content quality, user metrics, relevancy, et cetera, five-plus months ago. So, in general, we have
a lot of algorithms that run all the time, and some that run from time to time as well. And these are things that do
look at your website overall. So if you’re working on
improving your website overall, then that’s something that
these algorithms will probably take into account. It’s not something where
there’s like a line of HTML that you need to tweak and then
suddenly the ranking algorithms will see the site differently. Usually it’s really
the bigger picture that these algorithms
are looking at. We have separate branch
pages for each of our depots. Should we be internally linking
these to our category pages where we offer that service
to enforce that we offer that service in that area? Or is Google able to
know that already? What do you recommend? This is mostly up to you. So if you have separate pages
for individual locations and you have a list of the
services and the products that you offer in
those locations– and you think users going
to those location pages would profit from, kind
of, having this, kind of, lookup into the different
products and services you have, then sure. Go ahead and put that in there. However, if you’re only
doing that because you think you’re passing page
rank to the right pages, then probably that’s the
wrong motivation for something like this. I’d really look at it
from a user point of view. Are users going to these pages? Are they finding useful
information already on these pages such as address,
opening hours, phone number, those kinds of things? And if they need additional information, such as links to the rest of your content, then obviously put that in there. A lot of websites have
that already naturally because they, kind of, have
that in their navigation anyway– where you have the
different locations and then maybe sections of your
site for the individual products and services
that you offer. How does Google treat
landing pages that are not accessible from your website? We’ve seen some
competitors create these and they seem to rank well. But you can’t access the pages
directly from the website. Is that classed
as a gateway page? So we wouldn’t
necessarily classify those as being spammy pages. But what will happen there is if
we can’t crawl to those pages, then we’ll have a lot
of trouble understanding the context of those pages. We’ll have difficulty
actually finding those pages in the first place– discovering
them so that we can crawl them for the first time– that
we can index– that we can show them and search. So if these are
pages that are really completely separate from
the rest of your website, then it’s probably
going to be hard for us to pick up the
right context there and to show them in the
search results appropriately. So ideally, I’d
recommend making sure that any page
within your website can be crawled
from any other page so that users can kind of follow
a trail of links to that page but also search
engines can do that. Does the HTTPS
ranking boost differ depending on the type
of site that you have? For example, would it be more
important for e-commerce sites that deal with
financial transactions as opposed to a blog? Can you negatively be
affected by not doing it? No, the HTTPS
ranking boost is not dependent on the type
of site that you have. And so that could be relevant
for pretty much anything. However, keep in mind
that this ranking is not something that will propel
you from page 10 to page 1. It’s more of a subtle thing. More like a tiebreaker
that when we know that URLs are
essentially equivalent, we’ll prefer the HTTPS version. So that’s something
to kind of keep in mind to keep the expectations
in check– to make sure that when you’re working on this
with the rest of your website, with the rest of
your company, you’re not suggesting that
switching to HTTPS will make your website rank
number one automatically. In our blog section, does
the value of an article diminish depending on the number
of internal anchor latency it has going out from it? We’re told that we
should have two– one of which is optimized,
one to a category, one to a product. What do you think? No, the value of
an article doesn’t diminish if you have
outbound links on a page. It gives more value,
I think, to the user. It’s not something that we
take into account from an SEO point of view directly. But if it provides
more value to the user by having more references
within your website, outside of your
website, then that’s something that probably
makes sense for a page. With regards to
one of those links should be optimized or not,
that’s really up to you. So we use these links
for discovering new URLs, for understanding the context
of URLs within your website, outside of your website as well. So if you use an
appropriate anchor text, that always helps us there. It’s not the case that
you need to stuff keywords into that anchor text. But just kind of
like use something other than click here and
link here to the other page so that we can actually get
some context from that link.
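The point about anchor text can be illustrated with a small HTML sketch– the target path here is made up:

```html
<!-- Gives little context about the target page: -->
<a href="/setup-guide">click here</a>

<!-- Descriptive anchor text that search engines can use as context: -->
<a href="/setup-guide">step-by-step mobile setup guide</a>
```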
LYLE ROMER: Hey John, quickly regarding blogs, does the algorithm
attempt to identify what section of the
website might be a blog and treat it any differently
than any other page? JOHN MUELLER: Not necessarily. So there are some aspects that
could come into play there with regards to like using
RSS feeds to actually update the content directly so
that we can kind of pick up those changes a
little bit faster. With regards to kind of like
finding dates on the content that we can understand, oh, this
is new and relevant content– or this is kind
of like evergreen content that’s always
relevant– those kind of things. JOHN MUELLER: But
it’s not the case that we have
something that says, oh, this URL is on a
blog, therefore, it should be treated differently. Its just well, for this URL we
have this metadata available. For that URL, we have
different metadata available. We need to find the right
one to show in rankings. LYLE ROMER: And as
far as dates go, would you recommend
not having dates on like the evergreen type
content as you called it? JOHN MUELLER: That’s
kind of up to you. I think sometimes
that does make sense. Sometimes it’s not something
that you necessarily need to have there. From a personal point of
view, if you have a blog then I’d definitely
put dates on there. Because when people
stumble across that a year later or two years
later, they’d like to know how relevant
this information is. Maybe it’s really
new, and they should be giving it a lot more weight. Or maybe they should
be saying, well, this was published 10
years ago, and it’s on this topic where lots
of things have changed. In the meantime, maybe
it’s not so relevant for me. But at least from a
user point of view, I think having dates on
there makes a big difference. AUDIENCE: John, talking
about stuff that changed, Gary posted this
morning on Twitter that 30 times redirects
don’t lose page rank anymore. Do you know if that’s
a joke or if it’s a new change to
Googlebot crawling? Could you talk about
that a little bit? I posted the link– JOHN MUELLER: It’s not 30 times. It’s 30x. So any type of 300 redirects. AUDIENCE: Like I
thought he was talking about how many
hops you have, so I’m sorry. JOHN MUELLER: No,
I think 30 times would be a little excessive. AUDIENCE: Yeah,
301 redirect won’t want to lose page rank anymore. JOHN MUELLER: Yes. AUDIENCE: That they
used to lose page rank? I mean that was a little bit
of a debate going on with you. JOHN MUELLER: I think that–
well, that’s something that we had, like, way
in the beginning where we saw a lot of people doing
spammy stuff with redirects. Then that definitely
made sense there. But, at the moment, if you have
a 301 redirect, 302 redirect, 30x-whatever redirect, then
those wouldn’t lose page rank. So this is something
that I think came up in one of the
questions here as well. It’s like, I moved to HTTPS. Is the ranking boosts kind
of comparable to the drop I have with a 301 redirect. And since there is no drop
with a 301 redirect, that’s not something you
need to worry about. AUDIENCE: What I'm trying to say is, is that new as of, like, the past week or so, or has it been like this for some time? JOHN MUELLER: That's been a while.
How long does it take for Google to build up trust again in a site after a
manual links penalty? Would you say two years is
about right to see a recovery if we’ve been doing everything
else good in the meantime? It kind of depends on
what all has happened in the meantime–
what kind of issues you’ve been seeing there. So some algorithms do take
quite a bit of time to refresh. And that might be something
that’s playing a role here. If it’s a matter of just the
manual action then as soon as that reconsideration
request is processed, then essentially that
manual action is lifted and your site is kind of back into normal rankings again. So that's something where
it really kind of depends on what all has been happening
with that site with search in general. Would you say that the
more external links to different domains
the page has, the more unique detailed
and valuable content should be on that page? Otherwise, that
page might trigger some kind of low quality
or demotion algorithm? Not necessarily. So just putting
external links on a page doesn’t necessarily
make it better. Sometimes it does provide
more additional information. But it’s not having
external links on its own will kind of improve
the quality of a page. Scraper sites ranking above
me with my own content since January– does my domain
have irreversible quality issues? Even my 8-year-old updated pages
are outranked by other sites. They’re exact copies with tons
of ad banners and no extras. Is this permanent? That sounds like something
we might want to pass on to the engineering team. So if you’re listening. If you can send me some
information on Google+, for example, with
queries with URLs, then that might be useful to
pass on so that we can take a look and see if that’s working
as expected or something weird is happening on our side. I have a product
review page with a link to an official site. Official site ranks first for
the product query where users can complete the transaction. Does this link
hurt me, though it helps to complete the user
journey on a different domain and end the search session? Product review page with a
link to the official site. That sounds kind of
like an affiliate site. Kind of set up where
you have one page and you’re linking
with an affiliate link to another page where the
transaction can actually be done. In general, just by having
that kind of configuration is not going to
cause any problem. So Google is not against
affiliate kind of businesses. That’s not a problem. However, we do expect
that these pages have unique, compelling, and
valuable content of their own. So that’s something where if
you’re into affiliate business and you’re reselling
products or, kind of, linking to other places where
people can buy those products, you need to make sure that your
pages can stand on their own– that there’s actually a
reason why we should show your pages in search
results rather than any of the other ones. So if you’re just
taking an affiliate feed and republishing
it on your site, then that’s not a lot
of additional value that you provide there. Whereas, if you’re really
doing an in-depth review, and you’re linking
to another place where you can actually
buy that product, then that sounds like a
good use of a website. And that sounds
like something we would generally be interested
in showing in search. We have an external link going
out to the third party review site that we use. Should this be followed
or no followed link? In general, that’s up to you. So I don’t see just off
the hand see any reason why this would necessarily
need to be a no follow link. Sometimes you put those
reviews on a third party site. Sometimes you keep those
reviews on your own site. In general, that’s
not something where I’d necessarily no-follow that. But it might be interesting
to see some examples. So if you have some that
you’ve seen externally– where you think, wow, this is
kind of a weird situation– maybe post those
in the help forum and we can take a look there. How much should I care about
updating incoming links after changing domains? A website that has natural
links from a lot of domains changed its domain; all links are redirected with 301s. Should we expect any
search results visibility boost after updating the links? In general, this
is something that just makes it easier for users
to actually find your content. And if, over time,
you’re kind of collecting this cruft of redirect chains
from one domain to another, then that’s something
that’s probably worth cleaning up at some point. So we do recommend updating
the links from external sites to point to your new domain. That does help us
to kind of reinforce that this is a permanent
move, and you’re really moving to this domain,
and we should really switch our URLs that we have
indexed from the old version to the new version. So all of these
things kind of add up. I don’t think you’d
see any critical loss by not doing this. But I’d really kind of
look into your analytics and see where most of
the users are coming from and make sure that
they don’t have to be bounced through this
additional redirect, which means another DNS
lookup and all of that. I have a page that
reviews 10 products. And it runs OK with no
links from that page. When I add 10 external
links from this page where users can
get those products, the page loses 70% of
the search traffic. When I remove them,
it comes back. What’s happening? I don’t know. I’d have to take a
look at those pages to actually see what
you’re actually doing there– what’s happening there. So it’s really hard to say
just from a vague description like this. You can obviously send me that
information by Google+ if you want or maybe post in the help
forums so that others can also take a look at that. AUDIENCE: Would it be possible
that by providing links to his competitors, he’s
boosting his competitors and pushing himself
down the rankings? JOHN MUELLER: Possibly,
possibly, but– AUDIENCE: Not 70 places,
I wouldn’t imagine, but– JOHN MUELLER: Yeah, I mean 70
places is kind of a big change. It might be lots
of other things. It might be coincidental timing. It’s really hard to say
what might be happening. Here’s that question
about the HTTPS. Does the ranking boost
from moving to HTTPS equal the loss that you get
from having 301 redirects or is the loss still
greater than the boost? What’s up with that? So like Barry
mentioned, Gary tweeted that there is no loss with
regards to 301 redirects, with regards to the other
302 redirects either. So that’s something where
you can definitely do that regardless of the redirects. However, like I mentioned before
the caveat that moving to HTTPS is not going to make your site
rank number one automatically. It’s more of a tiebreaker. It’s a fairly small
ranking signal. AUDIENCE: John, does this
mean that some people in the last few months
will have received a boost with no way of
tracking where that came from? Because it used
to be– and we’ve spoken about this
before– that 301s didn’t pass all of the value. But now if it does,
will that have some people who just suddenly
go, no, look, I’ve suddenly– JOHN MUELLER: Probably
not noticeably. Probably not something
where you’d really notice that in the tracker. AUDIENCE: All
right, because we’ve had whole domains redirected
to another as you know. But we didn’t see any real
movement over the last six months because
within the last six– JOHN MUELLER: We fixed a schema
markup error on our products two months ago and Search
Console is still only showing a very gradual decline in
errors as the bot recrawls them. At this rate, it will take
a year or more to sort out. How can we speed that up? One thing you can do there
is submit a new sitemap file and let us know that all of
these pages have changed. So, in the sitemap file, you can
specify last modification date. And that’s useful for us to
recognize that these pages have actually recently changed. So that’s one thing
you can do there. In general, however,
you will always kind of have this gradual
decline in errors when you make significant
changes on your website because recrawling
and re-indexing is a gradual process. We can’t recrawl
the whole website from one day to
the next at least for any kind of larger website. So that’s something
where you’ll always kind of see this gradual
change over time when you make changes. That could be fixing errors
in the structured data markup, fixing errors in AMP pages, or
fixing crawl errors in general, you’ll always kind of see
this gradual change over time. We were asked in Search Console
to select a type of industry our site is in. Does this affect the
rankings in any way if we will either leave
it blank or select the wrong one from the list? I am not actually aware of us
asking for the type of industry in Search Console. I believe if you sign up
for Bing Webmaster Tools, they ask for this
kind of information. But I don’t know
if we actually do that in Google Search Console. But regardless, if we did
ask for this information, it would be purely for
statistical purposes and not for any kind
of ranking information. So if you pick the
wrong one there or if you don’t have one
that really matches then that wouldn’t be play a role. Who is the best representative
to chat about Chromium dropping support for HTTP/2
in Chrome 51 back in May? The reasons behind that decision
in HTTP/2 and ALPN in general. I have no idea. Probably someone on
the Chrome’s team would be able to help there. I believe that the
Chrome developers are active on Stack Exchange–
not completely sure. So that might be one
place to check in with them to see
what specifically is happening there. I’m not aware of this
particular change and what might be behind that. Sometimes I see that almost
every page on my domain gets a boost in search
traffic for one to five days. I see a gradual rise but
then it drops and stays the same for three
or more weeks. Does this mean that the overall
site wide quality is not enough? What can I do? It’s really hard to say
without actually having any information about
the site and what actually is happening there. So I don’t really have anything
specific that I can say there. It can definitely happen that
our algorithms kind of rethink things from time to time. If they see some pages go
up, some pages go down, those kind of changes
kind of happen naturally. If you see your site sometimes
ranking a little bit better, sometimes ranking
a little bit less, it might be that you’re
just kind of on the edge there with regards
to our algorithms. And kind of like
pushing it significantly in a positive direction
will, kind of, keep you above that edge. But, in general,
that’s not something where we’d have any
specific guidance on, like, what specifically you need to
change within your website. You really kind of need
to take a step back and look at your site overall. I saw a spike in
search traffic to one of my pages for two days, but
it dropped to the same level as before. There was no increase in search
volume for those keywords. Was Google testing
quality of my page and deemed it not
worthy of more traffic? Kind of again, as like
with the previous question, sometimes our algorithms
review a site and they think, well, it’s fantastic. And the next day
they look at it, and say, well, maybe it’s not
as good as I thought it was. And they kind of
move it down again. So these kind of changes
can happen over time. It’s pretty normal that
you see changes in search. We’re always improving
our algorithms to figure out how to
best put them together. The rest of the web is
also changing all the time. So you’re kind of seeing
fluctuations in search and that’s kind of normal. How important are XML
sitemaps these days? Our site has been
around for 20 years. Google sends a good amount
of traffic to many pages. Is it worth the effort
to update the sitemaps? So sitemaps are
fantastic if you have new or updated content
within your website that you need to have crawled
and indexed fairly quickly. Because with the change
date, in particular, we can pick that up
fairly quickly– crawl and index those updated
pages and take that into account for search. If your pages haven’t
changed a lot recently or aren’t changing
that regularly, then, probably, we can
just live with the version that we have indexed already. So it kind of depends
on your website. I think if you’ve had the same
website for the past 20 years then probably it’s worth
kind of revisiting and making some changes and updating
things over time. But in general, we try
to recognize the changes primarily. A sitemap can also
help if you’re not just changing the content, but
if you’re changing something technical on the site. So if you’re adding things
like hreflang markup, rel=canonicals, if you’re adding
alternate mobile versions, if you’re adding AMP
links from those pages– then all of these things mean
that those URLs have changed, and that we should go off and
recrawl and reprocess them. So it’s not just
the text that needs that could be changing where it
makes sense to put a sitemap– but anything on those
pages that you want Google to take into account. Google shows one of my URLs
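Those technical changes can be signaled in a sitemap entry. A hedged sketch following the sitemaps.org protocol– the URLs and date are placeholders:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
        xmlns:xhtml="http://www.w3.org/1999/xhtml">
  <url>
    <loc>https://www.example.com/page</loc>
    <!-- Bump lastmod whenever anything on the page changes:
         content, rel=canonical, hreflang, AMP links, etc. -->
    <lastmod>2016-08-01</lastmod>
    <xhtml:link rel="alternate" hreflang="de"
                href="https://www.example.com/de/page"/>
  </url>
</urlset>
```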
which I didn’t focus on. So there’s an example, is my url. But Google takes and more as the URL. So the second one is
older but is there any way to show my supposed URL. So assume you have two URLs that
kind of have the same content or maybe even
identical content, then there are a number of ways that
you can use canonicalization– as we call it– to let
us know about which one you want to have indexed. So that could be
something like a redirect if you redirect from
one version to other. It will probably assume that the
redirection target is something that you want to have indexed. If it’s a 301 redirect, you
can use a rel=canonical article to let us know that this
is your preferred version. You can update the internal
links within your website to make sure that they’re
all pointing out the version that you want to have indexed. You can do things like
sitemaps for example, as well, that let us
know this is actually the version that you do
want to have indexed. So all of these
things kind of add up. And if we have two
versions that we know are essentially equivalent
and all of the signals are saying well, this
is the one that you should be showing in search–
then we’ll try to follow that. It’s not guaranteed
however– and especially if there are mixed signals. If you’re saying, well,
this is the canonical, but you’re redirecting
to the other one. Or this is the canonical,
but all the internal links are going to the
other one, then that’s the type of situation where
we have to make a decision. And our algorithms
might say, well, this is a good choice today. And tomorrow they look
at it and say, ah, well, actually, these other
signals are pretty good too. We’ll just pick
that one instead. So the clearer you can
give us your information, the more likely we’ll be
able to take it into account. Does HTML5 have a
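As a sketch of the canonicalization signals described above, with made-up URLs: a rel=canonical link element on the duplicate page points at the preferred version.

```html
<!-- On the duplicate, e.g. https://www.example.com/page?sessionid=123 -->
<head>
  <link rel="canonical" href="https://www.example.com/page">
</head>
```

Internal links and sitemap entries should then also reference https://www.example.com/page, so all the signals agree.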
positive effect in SEO? No. HTML5 can be really useful
to make a modern website. But it’s not something
that we’d use on its own as a ranking factor. Many descriptions use emojis
or symbols as eye catchers. How tolerant is Google? As far as I know
such descriptions don’t conform
to the guidelines. So we generally try to filter
those out in the snippets. But it’s not the case that we
would demote a website or kind of flag it as web spam if it
has emojis or other signals in the description. A site has two topics. One topic has 100
high quality pages. The other has 25,000 high
traffic pages with AdSense. We can’t get the 100 pages
without AdSense to rank well. They used to rank well. What might be causing this? The high traffic pages
are on a subdomain. Really hard to say what
might be happening there. In general, when
we look at a site, we do try to look at it overall. So that’s something where
maybe those other pages are kind of pulling down
the rest of the site. So, in general, I’d
really work to make sure that across the site
everything is really high quality and valuable. How important is it to
display the same content on desktop and mobile
versions of a page? Can some content
mismatch cause some kind of confusion for Googlebot–
or worse, ranking? For example, a mismatch
in the number of displayed articles or products. We do expect that these
pages are equivalent. So that means that when a user
clicks on a search result, if they go to a desktop
version or the mobile version, they should be able to fulfill
kind of what you are suggesting in that search result. So that doesn’t mean that
they have to be identical. It doesn’t mean that
they have to use the same layout and the
same, kind of, like sidebar, and navigation, and everything. It essentially just
means that they have to fulfill the same purpose
and be equivalent for the user. So if those pages
have for example, a list of different
products on your page and the mobile version has
10 and the desktop version has 20 in the list then that’s
still a list of those products. And you can still kind
of go through that and find all of that. On the other hand, if
the desktop version is a list of products
and the mobile version is just one product, then that’s
kind of a weird mismatch there. So that’s something
where we kind of expect them to be equivalent. I don’t think you’d see a
ranking drop at the moment because we essentially try to
recognize the mobile version and fold it together
with the desktop version. But it’s possible over
time that we’ll say, well, actually the
mobile versions are the ones that we
want to focus on– and then you might see a
change in the search results. LYLE ROMER: Also, related
to the mobile version, in order to get our site to
look the best– in responsive design both for
desktop and mobile with regards to navigation–
in our code we basically have our whole
navigation code repeated, and one displays
when it’s mobile and the other displays
when it’s desktop. Just wanted to make
sure that that wouldn’t be an issue that it sees
all these navigation links repeated twice in
the same area of the code. JOHN MUELLER: That
should be fine. I wouldn’t worry about that. LYLE ROMER: OK, thanks. JOHN MUELLER: Can
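A minimal sketch of the pattern Lyle describes– the class names are invented– two copies of the navigation markup, each shown at one breakpoint:

```html
<nav class="nav-desktop"><a href="/products">Products</a></nav>
<nav class="nav-mobile"><a href="/products">Products</a></nav>
<style>
  .nav-mobile { display: none; }
  @media (max-width: 767px) {
    .nav-desktop { display: none; }
    .nav-mobile { display: block; }
  }
</style>
```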
you guys confirm whether you will be rolling
out the Penguin update or not in a yes or no answer? Yes, we will be rolling one out. I guess that’s good. I don’t have any dates
or time frame on that. So that’s still kind of not
a perfect answer for you all, I know. But we are definitely
working on it. I don’t want to jinx it,
so I won’t say anything. AUDIENCE: You were
about to say something. Nobody’s listening. JOHN MUELLER: No. [LAUGHS] Different topic, the IP
host location of the domain, does that have an
effect on the SEO? No. Well, mostly no. So I think this is talking about
the server location in general. We use a number
of factors when it comes to geotargeting for
recognizing which audience that you’re talking to. And if we don’t have any
additional information, we will use the server location. But, in general,
we’ll have information from your domain name where
you maybe have a country code top-level domain. Or if you have a generic
top-level domain, you’ll have geotargeting
set, or you’ll have hreflang information
in your website, then all of these kind of
help us to understand which area of the world– which
country are you targeting. So, in general, we
have enough information to actually figure out which
country you’re targeting. In a case where it’s a
completely new website, and we have absolutely no idea
what this website is about, and we know the server is
located in one country, then that’s something
we might take into account. But as soon as we have
more information, then usually that overrides the
server location completely. We’re working with
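For reference, the hreflang annotations mentioned above look like this– the URLs are hypothetical:

```html
<link rel="alternate" hreflang="en-gb" href="https://www.example.com/uk/">
<link rel="alternate" hreflang="en-us" href="https://www.example.com/us/">
<link rel="alternate" hreflang="x-default" href="https://www.example.com/">
```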
the lottery in the UK. Google is creating
an [INAUDIBLE] result for EuroMillions results. It pulls in numbers
that were drawn. However, it doesn’t
pull the latest result. How can we make Google pull
the correct information for our page? I took a quick look at this and
passed it on to the team here. It looks like it’s pulling
essentially a snippet from that webpage. And obviously,
snippets from webpages, they take some time
to be reprocessed, crawled, and re-index. So it looks like we’re
kind of pulling in an older version of the content
than you would actually have on the page there. So one thing you
can obviously do is let us know that these
pages changed faster. So using something
like a sitemap file would definitely help us
there, so that we understand that this page that
we crawled yesterday has actually changed
in the meantime. And we should go off and recrawl
and re-index that new page. So that would help. But I’m also passing it on to
our team to kind of make sure that we’re not pulling it out
as something like a bigger snippet on a page. We use structured markup
to display breadcrumb in the search results. Sometimes the breadcrumb
is displayed properly. Other times it’s just
the cropped URL even though it’s the exact page type. What can we do? I probably need to have some
example queries and URLs there to see what is happening there–
why we’re sometimes picking up the structured markup. Why we’re sometimes not
using that information. AUDIENCE: OK, so should I
send you some URLs then? JOHN MUELLER: OK. AUDIENCE: –as an example. All right, thank you. JOHN MUELLER: I need some URLs
from your site and some queries that are showing those results. AUDIENCE: OK, well
the weird thing is that, like, whenever
I search for the wrong– like if I misspell the query,
the right markup shows up. And if I type what we
actually want to rank for, it doesn’t work sometimes. So it would be good to fix that. JOHN MUELLER: OK, that
sounds weird, yeah. AUDIENCE: All right,
I’ll send you. JOHN MUELLER: That sounds
like a perfect thing to pass on to your team, yeah. AUDIENCE: OK, thanks. JOHN MUELLER: Thanks. Is a no-follow an
effective and efficient way to sculpt linked and authority
flow for internal links? For example, pages and
sites with large navigation menus to pages which might not
always be relevant to the user. In general, I’d recommend
not using no-follow as a way to kind of sculpt page rank
because that probably doesn’t do what you expected it to do. I think using no-follow
internally makes sense for things like
calendar sections where you have kind of an
infinite calendar otherwise. But, in general, for
a normal website, you don’t need to use
no-follow to sculpt page rank. We can pretty much figure
that out on our own. And even with a
no-follow there, it’s not going to change anything. A week ago my
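The calendar case John mentions is the one place internal nofollow tends to make sense– a sketch with a made-up URL:

```html
<!-- Keep crawlers from walking an effectively infinite archive -->
<a href="/calendar?month=2036-01" rel="nofollow">Next month</a>
```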
review-rich snippets disappeared for all pages
older than four days. After that, if a page
becomes older than four days, its rich snippets also disappear. I have a lot of reviews with
2,000 word unique content and the technical part is OK. What could be the case? Really hard to say. Because it sounds
like technically we’re able to pick it up from
a policy point of view our algorithms kind of
agree that it’s OK to show. So that’s probably
something where we’d also want to take a look at that. So if you can send that
to me directly by Google+, for example. Or if you can also post
in the help forum, that’s something that would be
really interesting for us to kind of look at and see is
this really working the way that we expect it to work. We used to have
other ad networks together with AdSense between
some of these dates– so 2005, 2016. Some banners have
deceptive download buttons. Safe browsing marked our
site as unsafe for two days. And we immediately
removed the ads. May it be the reason
for lost rankings? Probably not. So, as far as I know, the
safe browsing evaluations that we do
specifically for Chrome are things that we do
specifically just for Chrome. They don’t apply to any of
the search ranking issues. However, it might
be that– of course, other things are in
play there, if you’re a site that used to have
deceptive download buttons then maybe there
are other things that our website algorithms
picked up on and kind of are responding to now. But that wouldn’t be from
what the safe browsing side kind of flags. Even in situations where you
have malware on a site then that’s something where
we’d essentially be ranking the site in the same place. We’d just be showing the
malware interstitial there. It’s not that we would
demote a site from the search results because of malware. But again, there might be
various things coming into play there. Can you talk about plans to
expand the data and search analytics? For example, being able to
differentiate local search from organic search
images, voice search, combinations of
Knowledge Graph results, and how often they’re displayed. I try not to
pre-announce things, so I don’t really know what I
can say for this specifically. I know the team is working
on search analytics and working on expanding
that, but it’s always kind of an interplay
between different sides when it comes to bringing
out features like this. So in an ideal situation
what’s important for us is on the one
hand that this is data that makes sense for us to
kind of expose externally, so it doesn’t cause
any harm on our side. But also that we see that
this data helps you to make even better web content. So the more we can go to
the internal teams and say, if we provide the webmasters
with this type of content, they’d be able to make
even better websites and our users would
search more on Google. Then that’s a really strong
argument to bring up. On the other hand,
if we just say, well, some SEOs really
want this information. They don’t really know
what to do with it. But they like to track numbers. And they want to have those
kind of numbers as well. Then that’s not a really
strong argument to a team. Because then they say,
well, what use is it if we provide these numbers
or if we do all of this work to expose those metrics,
if nothing actually changes on the [INAUDIBLE]. So if you have things
where you see– well, if Google would be able to
provide this information, then we could significantly
improve this part of the web. Then that’s kind of
useful feedback to have. And that’s something that we can
take back to the team and say,
awesome feedback. I totally agree with this
person who put this together that we could improve
the rest of the web if we provided this additional
information in Search Console. So that kind of feedback
would be really useful. Can you talk about how
you treat subdomains? We have five similar
topics but different enough that we would put them
in separate folders. Instead we use subdomains. Does this hurt us in rankings? No, not in general. So we recognize that
some sites use subdomains as different parts of the site, in the same
way that other sites might use subdirectories. With subdomains, the main
thing I’d watch out for is that you’re not using
wildcard subdomains because that can make crawling
really, really hard if we have to go through all
of these subdomains and treat them all
as separate hosts. But if you have a limited
number of subdomains then that might be an option. Similarly, if you have
different sites that are essentially completely
separate websites but they’re in subdirectories–
so in folders– then we’ll try to
figure that out as well. And say, well, actually these
are all on the same domain, on the same host name,
but these are maybe user-generated content–
like separate sites that should be treated
completely separately– then we’ll try to
figure that out as well. So that’s not something
that would kind of like improve or hurt rankings. It’s more a matter of
us figuring that out. And so far I’ve
seen our algorithms do a pretty good job of that. Let me grab one
more question here. I run We recently upgraded
the site significantly but this required changing
the URL structure. We’ve implemented
hundreds of 301 redirects. Our organic traffic from
Google has dropped to zero. Do we have some sort of
penalty on our domain? If there were
a manual action, you’d see that in Search Console
in the manual action section. Otherwise, what might
just be happening is that we’re having
trouble following all of these redirects. So I’d look at the
technical side of things to make sure that
we can actually pick those redirects up. They don’t have things like
no index in the mix as well or that you’re using robots.txt
to block some of these URLs– those kind of things. So if you’re in, kind of,
from a technical point of view handling everything
right, then it might also be that this
is just a natural change for your website in
search that would have happened independently
of any URL structure change. However, any time you do make
significant URL structure changes, it is going to take
quite some time for Google to understand that again. So that’s something
where you probably want to be looking
at a couple weeks, maybe even a couple of months
time for things to settle down before making a decision saying,
well, this was a good move or this was a bad move. But again, that’s something that
just takes a bit of time to, kind of, settle down properly. All right, couple
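For the URL-structure change discussed above, a 301 redirect on Apache could be sketched like this– the paths are placeholders, and per John’s caveat the old URLs must stay crawlable (no robots.txt block, no noindex):

```apacheconf
# .htaccess: permanently redirect an old URL to its new location
Redirect 301 /old-section/page.html /new-section/page/
```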
of minutes left. What else is on your mind? AUDIENCE: Hey John. JOHN MUELLER: Hi. AUDIENCE: Hi, I was
wondering if it’s ever going to be possible to
do a whole TLD disavow? JOHN MUELLER: What would
you be using that for? AUDIENCE: We get a
lot of spammy links from other countries listed
on websites that have nothing to do with what we’re selling–
a lot of porn sites and things like that. JOHN MUELLER: I don’t know. I think it might be
already possible to do that. But, in general, I wouldn’t
recommend going that broad. Because you’re probably–
I don’t know– potentially dropping a lot of
reasonable stuff as well. But that’s interesting
feedback to get. Maybe we need to find a
different way of handling that kind of situation. AUDIENCE: Thank you. JOHN MUELLER: More questions,
what’s on your mind? AUDIENCE: One more question. JOHN MUELLER: OK. AUDIENCE: Is there
something going on with the Data Highlighter
tool in Search Console? Because every time we use it
to highlight certain things on our website– and we’re using
the feature where you highlight multiple pages using the same
tags– it does, however, from time to time
errors where it’s highlighting the wrong items. When I go to click on
it, it doesn’t load. It shows that
there was some sort of an error that had occurred. And it takes up to
a couple of hours just to load that particular page
to fix the highlighted data. And when we do fix
it and click Done, it takes another 15 to
20 minutes for it to register. JOHN MUELLER: I don’t know. I know they did some maintenance
on that a while back. But that sounds like
a more systemic issue that you’re seeing there. So I’d have to take a look at
that with the Data Highlighter team. And that’s something that
you can reproduce regularly? Or does that just
happen– I don’t know, every couple of weeks? AUDIENCE: This has been going
on for, I would say, approximately two to three months.
just about every day. JOHN MUELLER: OK, that doesn’t
sound that great, yeah. I need to double check with
the Highlighter team on that. AUDIENCE: All right, thank you. LYLE ROMER: Hey John,
one other question. There are certain
queries for our site where if we put in kind of
like a short search term– a very short tail term, we’ll
rank in a given position. And then if we take
that same query and add “car” to it– which
is what our site is related to– which should
make it more relevant, the ranking will drop
sometimes significantly. Do you have any thoughts on
what may cause that– what we might be able to do about it? JOHN MUELLER: Hard to say. One of the things
that– I don’t know. It’d really be
something where we have to look at the
specific examples. So one of the things
that might be happening is that we’re looking
for that specific phrase with those extra keywords. And that might not
be a perfect match if it’s kind of like a
combination of keywords rather than a real
phrase from your website. So that might be something
that’s happening there. But it’s really hard to say
without having the full example queries and be able to, kind
of, reproduce that on our side. LYLE ROMER: OK, I’ll send
you a couple queries, thanks. JOHN MUELLER: All
right, so I need to head off to another meeting. But it’s been great
talking with you all again. I set up, I think, the
next Hangout on Friday. And I’ll try to set up
another earlier one as well for those of you from
Australia, and Japan, and that part of the world. Thank you all for joining. Thanks for all of the questions. And hope to see you all again
in one of the future Hangouts. Bye everyone. AUDIENCE: Thanks John, see you.

1 thought on “English Google Webmaster Central office-hours hangout”

  1. What’s the best way to index an SPA site…? I have a client who has built an SPA and Search Console is only indexing the home page
