JavaScript SEO For Ranking In Google



Coming from a developer background, there are so many misconceptions and myths about SEO that developers come up with, or have heard, or that come from the SEO world as well. Where do these come from? How do they get into the world, these myths and legends, now that JavaScript is involved?

I think a lot of it is that people with very good intentions try to provide the information they have available, and there's a gap in translation between SEOs and developers, in how they think and what they consider. So by adopting acceptance criteria as part of my tickets when I work with devs, I can tell them very specifically what I need instead of saying, "I want you to make magic for me." You go from "give me magic" to "hey, here's my user story; I would like to accomplish these three pieces of acceptance criteria." That's how you bridge the gap.

Hello and welcome to another episode of SEO Mythbusting. With me today is Jamie Alberico. Jamie, what do you do in your job? Thank you so much for having me here. I'm a technical SEO with Arrow Electronics.

That means that I am embedded with a number of dev teams across a number of projects, and we try to execute these initiatives and get new features available on the site in an effective and search-friendly way. A lot of the time that means we have to have conversations about how we're using our JavaScript.

Having you here is fantastic, because then we can have a conversation about pretty much everything you want to know from the search side as well as the web developer side. So, any questions that you have in mind, or anything that pops into your mind? Oh, so many questions. I hope I get to poke at the black box of Google here.

And I have one that's absolutely burning: is JavaScript the devil? That's a fantastic question. It might seem that way sometimes, especially when things are not going the way you want. You see the horror stories out there on forums or on Twitter: "everything is gone." Yeah, that's one thing; that's the SEO side. On the developer side it's also, "oh, it's a language that wasn't designed to be super resilient," but it actually is. And then often people say, "oh, it's a C-style language," and it's not really; it's much more of a Lisp-style language. There are a lot of misconceptions coming from both worlds and clashing here.

I don't think it is the devil. I think it has its benefits. It allows us to build really cool and fantastic stuff on the web, to be really responsive to what the user does and wants to do with our applications, and it has moved the web from being a document platform towards an application platform, and I think that's fantastic. So I think we are already pushing hard on fighting this "JavaScript is the devil" idea.

And "if you use JavaScript, we can't be indexed at all." So that's not true for for a long time. But I think now the documentation is catching up with like outlining the different Bits and pieces that you should be aware of and the features that you have to deal with that are not available. One thing for instance is you probably have built single page applications, right? Oh, yes. Has there been problems in terms of SEO when they rolled out I? I was pretty lucky. I had a dev team who believed in SEO.

That's good. That's really good. That was actually the big moment of my career, when I got into technical SEO and talked to one of my new developers for the first time about a very specific problem I was trying to solve, and he just paused, looked up from his keyboard, and went, "you're not snake oil." So I think we're making a lot of progress between SEOs and devs. That is fantastic. It's a great story. So you might hear a few people in the community going, "ooh, should we do a single page application? Is that risky?"

And one of the things that a bunch of developers are not aware of, and that some SEOs are not necessarily communicating all the time, is that we are stateless. With a single page application you have a bit of application state, right? You know which page you are looking at and how you transition between these pages. However, when a search user clicks on a search result, they don't have that application state.

They are jumping right into the page that we indexed, so we only index pages that can be jumped right into. A lot of JavaScript technology makes assumptions about how the user navigates through the application. As a developer, in my test it's: okay, here's my application, I click on the main navigation for this particular page, then I click on this product, and I see that everything works. But that might not do the trick, because...

You need that unique URL. It has to be something we can get right to, not a hashed URL, and the server needs to be able to serve it right away. If I do this journey, then take the URL, copy it, and paste it into an incognito browser, I want people to see the content, not the home page and not a 404 page.
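As a rough sketch of what "serve it right away" can look like on the server, assuming a Node/Express setup (the route, findProduct, and renderProductPage names here are invented for illustration, not anything from the episode): every indexable URL returns its own content directly, rather than redirecting to the home page or soft-404ing.

```js
const express = require('express');
const app = express();

// Hypothetical stand-ins for a real data layer and template.
const PRODUCTS = { 'widget-123': { name: 'Widget 123' } };
const findProduct = (slug) => PRODUCTS[slug];
const renderProductPage = (product) => `<h1>${product.name}</h1>`;

// Each product has its own real URL (no #fragment routing), and the server
// can answer it cold, e.g. when a searcher lands on it straight from a result.
app.get('/products/:slug', (req, res) => {
  const product = findProduct(req.params.slug);
  if (!product) return res.status(404).send('Not found'); // a real 404, not the home page
  res.send(renderProductPage(product));
});

app.listen(3000);
```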

Lazy loading is something we're working on giving more guidance for. You've probably seen a bunch of communication about that one. Yes. How do we get a rich media experience out to users, but do it in a way where, if you're on your cell phone, we keep that very small time frame we have to get your attention?

Correct, and you want to make sure that if you have a long list of content, you don't bring everything in at once, especially on a cell phone, like loading a hundred images right away.
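As an illustration of that kind of lazy loading, here is a minimal sketch using the standard IntersectionObserver API (the data-src convention is just one common pattern, not something prescribed in the episode): images only receive their real URL once they approach the viewport.

```js
// Swap in the real image source shortly before the placeholder scrolls into view.
const observer = new IntersectionObserver((entries, obs) => {
  for (const entry of entries) {
    if (!entry.isIntersecting) continue;
    const img = entry.target;
    img.src = img.dataset.src; // load the actual image now
    obs.unobserve(img);        // each image only needs to be loaded once
  }
}, { rootMargin: '200px' });   // start loading a bit ahead of the viewport

document.querySelectorAll('img[data-src]').forEach((img) => observer.observe(img));
```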

What about Ajax? What about using asynchronous JavaScript and XML? That is perfect. Whoa, I haven't heard Ajax spelled out like that in a while; the term has fallen out of use. I mean, everyone's using it, but no one's talking about it that much. It's just: yeah, you load data in as you go, and that's perfectly fine. We are able to deal with that. I also often get asked about how that affects the crawl budget. Let's talk about it. So what worries you there? Well, if we're using Ajax, and I request, say, a product detail page, and we're using Ajax to supplement a lot of pieces of content on it, right?

Googlebot has requested one URL and it's gotten back nine, because each of those Ajax calls had a unique string, right? How do we handle that, and does it negatively impact our crawl budget? I wouldn't say it negatively impacts your crawl budget, because crawl budget is much more complex than it might seem. It's one of those things that looks super simple, but there's more than meets the eye. We're doing a bunch of caching, right? Because we expect that content doesn't necessarily update too much.

So let's say you have this product page: you make one request to the product page, and then that triggers nine more requests. We don't distinguish between loading the CSS, the JavaScript, the images, or the API calls that get you the product details. So if you have nine calls from this one page load, then that's going to be ten against the crawl budget.

Because of caching, we might have some of these in the cache already, and if something is already cached, it doesn't count towards your crawl budget. So if we were to version our Ajax calls, those could be cached? Those could be cached, exactly. That's one way of working around it, if that's a possibility for you.
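One way to picture that versioning, as a hypothetical sketch (the endpoint and the v parameter are made up for illustration): keep the request URL stable and only change it when the underlying data actually changes, so caches, including Google's, can reuse the response instead of treating every call as a new URL.

```js
// Bumped on deploys or data changes, never per request (no random cache busters).
const API_VERSION = '2024-05';

async function loadProductDetails(productId) {
  const url = `/api/product-details?id=${encodeURIComponent(productId)}&v=${API_VERSION}`;
  const response = await fetch(url);
  if (!response.ok) {
    throw new Error(`Request failed with status ${response.status}`);
  }
  return response.json();
}
```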

The other thing is, you could consider it not just an issue for the crawl budget but also an issue for the user, right? Because if you're on a slow or spotty network connection, it might flake out in the middle and you're left with broken content. That's not a great user experience. You probably want to think about pre-rendering, or hybrid rendering, or server-side rendering, or anything in between.
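For the server-side rendering option, here is a very small sketch assuming an Express server and React's renderToString (the ProductPage component and findProduct lookup are placeholders, not a recommendation of a specific stack):

```js
const express = require('express');
const React = require('react');
const { renderToString } = require('react-dom/server');

// Hypothetical component and data lookup, purely for illustration.
const ProductPage = ({ product }) =>
  React.createElement('main', null, React.createElement('h1', null, product.name));
const findProduct = (slug) => ({ name: `Product ${slug}` });

const app = express();

app.get('/products/:slug', (req, res) => {
  // Both the crawler and the user get meaningful HTML in the first response,
  // even if client-side JavaScript later fails on a flaky network.
  const product = findProduct(req.params.slug);
  const html = renderToString(React.createElement(ProductPage, { product }));
  res.send(`<!doctype html><html><body><div id="root">${html}</div></body></html>`);
});

app.listen(3000);
```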

And crawl budget is tricky generally, because we are trying to deal with the whole "host load" situation: what can your server actually deal with? We are constantly adjusting that anyway. So people say, "oh, this affected our crawl budget negatively," when really we just had host load issues with your server and adjusted for that, balancing the crawl across your entire content. So I wouldn't say it's that big of a deal, but I can see that it's very important for people to understand, and unfortunately that's not that easy.

Can we demystify Googlebot a little bit? Because we have this image of the omnibus, the great, the Googlebot! But it actually goes through a series of actions: we get that initial HTML parse, we find the JavaScript and CSS that we need to go ahead and make our content, then we call those pieces. And we know since Google I/O that there is actually a gap between the initial parse and the rendering.

But I want to know more, because Googlebot follows HTML5 parsing rules. Yes. There are some nuances there I don't think I knew about. Say you've got an iframe in your head, before the closing head tag: that ends your head for Googlebot.
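You can see that behavior for yourself with a quick experiment in the browser console (this is just the standard HTML parser at work, not Google tooling): the parser closes the head at the iframe, so tags that come after it end up in the body.

```js
const markup = `
  <html><head>
    <title>Product</title>
    <iframe src="https://example.com/widget"></iframe>
    <link rel="canonical" href="https://example.com/products/widget">
  </head><body></body></html>`;

const doc = new DOMParser().parseFromString(markup, 'text/html');

console.log(doc.head.querySelector('link[rel=canonical]')); // null: the head ended at the iframe
console.log(doc.body.querySelector('link[rel=canonical]')); // the canonical landed in the body
```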

Yeah, and all of our lovely meta content, our hreflangs and canonicals, have a tendency to exist below that. That is true; there are a bunch of things at play. So when we say Googlebot, what we actually mean on the other side of the curtain is a lot of moving parts. There's the crawling bit that literally takes in URLs and fetches them from the server, so when you provide the content to us we get the raw HTML. That tells us about the CSS, the JavaScript, and the images we need to get, and also the links in the initial HTML.

And because we have that already, such a wealth of information, we can go off and fetch the JavaScript and everything else we need for rendering later on. But we can also already use the HTML we've got and say, "oh look, there are links in here that need to be crawled."

So when you have links in your initial HTML, we can go off and basically start the same process for those URLs as well. A lot of things happen in parallel, rather than one step, then the next step, then the next step. That's definitely the start of it. And as we get the HTML, in parallel to extracting the links and crawling those, we queue the pages for rendering.

We can't index before we have rendered a page, because a bunch of content needs to be rendered first. In a way, doesn't that fit better if we've got a single page application? Googlebot has the template; it just has to grab the content that fits within it. So wouldn't that mean that Googlebot likes these JavaScript platforms?

The more content you get to us quickly in the first step, the crawling step, the better, because we can then carry that information over rather than having to wait for the rendering to happen.

But is prerendering always the best solution? That's a tricky one. I think most of the time it is, because it has benefits for the user on top of the crawlers. But you have to measure very carefully what you're doing there. Giving us more content up front is generally a great thing.

That doesn't mean you should always give us a page with a bazillion images right away, because that's just not going to be good for the users. If you're on a really old phone, and I have a pretty old phone, and you have a page full of images and transitions and stuff, then you're like...

"I can't use this website." So pre-rendering is not always a great idea. It should be always a mix between Getting as much crucial content and as possible, but then figuring out which content you can load lazily in the end of it. So for SEOs that would be you know, we we know that different queries are different intents. Informational, transactional... so elements critical to that intent should really be in that initial rush.

Exactly, and if the intents are wildly different and the content is very different, consider making it into multiple pages, or at least multiple views if you're using a single page application, so that you have an entry point the crawler can specifically point at when it comes to surfacing the search results.

So treat it like a hub and let the users branch out from there? Yes. Is that where we'd maybe use a CSS toggle for visibility? That is a possibility, but having different URLs is always an option. Especially with the History API you can, in a single page application, figure out which route to display and have the content separated between different routes, or be a little more dynamic there. We support parameters too, so even if you use URL parameters, that works. Basically, expose the state that is relevant to the user in the URL.
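Here is a rough sketch of that routing idea with the History API (the routes and the renderProduct/renderHome view functions are placeholders for whatever your application actually renders):

```js
// Hypothetical view functions; a real app would render proper templates here.
const renderProduct = (slug) => { document.body.textContent = `Product: ${slug}`; };
const renderHome = () => { document.body.textContent = 'Home'; };

function renderRoute(path) {
  if (path.startsWith('/products/')) {
    renderProduct(path.split('/')[2]); // each product gets its own crawlable URL
  } else {
    renderHome();
  }
}

// Navigate by pushing a real URL (no #fragment), then render the matching view.
function navigate(path) {
  history.pushState({}, '', path); // e.g. '/products/widget-123'
  renderRoute(path);
}

// Keep the browser back/forward buttons working.
window.addEventListener('popstate', () => renderRoute(location.pathname));
```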

What other ways does that benefit our users? Because our ultimate goal is to make them happy. And that's our ultimate goal too, so we are the same in terms of what our goal is: we both want to surface useful information to the user as quickly as possible. The user benefit, especially if you do hybrid rendering or server-side rendering, is that they get the content really quickly.

Normally, if it's done well and it's not overloading their device, they get to jump in right where the meaty bits are. So if I'm looking for some specific thing and you give me a URL that takes me right to that specific thing, I'm right there and I'll have a great time, because it's the content I needed. And if your performance metrics go up as well, then even if I'm on a slow phone and a really spotty network, I still get there.

I mean, our performance metrics are based on a lot of pieces; we have a whole stack of technology. That is true. What should SEOs look for in our stack? Where should we try to identify those areas where we could create a better experience, not just for Googlebot but for our humans?

Yeah, so I think a bit that is oftentimes overlooked, not by SEOs but by businesses and developers, is the content part. You want to make sure that the content is what the users need and want, and that it's written in a way that helps them.

But on the technology side... Wait, so that blurb at the top people always do, where it's like, here's my hero image and then 500 words about this thing, and I'm a human who wants to buy something and there's so much stuff in the way... Yeah, don't do it. At least have two pages: the promotional page that you want to direct your marketing towards, and then, if I specifically look for your product, just give me your product.

Just let me give you money! So, talking about performance and all the different metrics, it's a bit of a blend of all the things. Look at when your content actually arrives and when your page becomes responsive. You look at first contentful paint, and you look at time to first byte as well, though that's less important than first contentful paint, I would say, because it's fine if it takes a little longer as long as the content is then all there.

So time to first byte can take a bit of a hit if we deliver a faster first meaningful paint? Exactly, because in the end, as a user, I don't care whether the first byte arrived quicker if I'm still looking at a blank page because JavaScript is executing or something is blocking a resource. If it arrives a little later but then it's right there, that's fantastic. And you can get there in multiple ways; I highly recommend testing, testing, testing. What testing tools would you recommend? I definitely recommend Lighthouse. Webhint is a broader approach as well, and you could also use PageSpeed Insights or the new SEO audits in Lighthouse. The Mobile-Friendly Test also gives you a bunch of information.
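If you want to script those Lighthouse checks instead of running them one at a time, something along these lines works with the lighthouse and chrome-launcher npm packages (a sketch only; depending on the package versions you have installed you may need ESM imports, so treat the details as assumptions and check the current docs):

```js
const chromeLauncher = require('chrome-launcher');
const lighthouse = require('lighthouse');

async function audit(url) {
  const chrome = await chromeLauncher.launch({ chromeFlags: ['--headless'] });
  try {
    const result = await lighthouse(url, { port: chrome.port, output: 'json' });
    // result.lhr is the Lighthouse report object; category scores range from 0 to 1.
    console.log('Performance:', result.lhr.categories.performance.score);
    console.log('SEO:', result.lhr.categories.seo.score);
  } finally {
    await chrome.kill();
  }
}

audit('https://example.com');
```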

PageSpeed Insights looks at that full page load, though, and we had a bit of a gap there. We have this almost futuristic Lighthouse, where we want that time to interactive, and then we have people who adopted the older methodology. That's how we got, you know, so much content loaded via Ajax: the full page load is fast, but all that content is still coming...

I would recommend Lighthouse; it gives you the filmstrip view of when things are actually ready for the user to work with, so I would highly recommend looking at Lighthouse. But PageSpeed Insights gives you a good first overview, and it integrates with Lighthouse really nicely now. Wonderful. Do you think that JavaScript and SEO can be friends now, and that developers and SEOs can work together? I do. I really think that.

You know, if Google is a library and a webpage is a book, using these JavaScript frameworks lets us make pop-up books: richer experiences to engage with. Oh, that's a fantastic analogy; I love that image. That's a beautiful one. Thank you so much, Jamie. Thank you very much, and I hope you enjoyed it and see you next time. Have you ever wondered where on the map you should put UX and performance when you're talking about SEO? So have I. Let's find out in the next SEO Mythbusting episode.