Tech Talk: "Load Performance in Single Page Applications" mit Christian Oberhamberger von NETCONOMY

Tech Talk: "Load Performance in Single Page Applications" mit Christian Oberhamberger von NETCONOMY

Hello everyone, welcome to a little presentation about Load Performance in Single Page Applications.

Quick introduction from my side – my name is Christian Oberhamberger, I'm a front-end architect and chapter lead at NETCONOMY. We're partnering with SAP, and we're delivering fast and responsive touchpoints for our clients.

Good, then let's start with a small review of single page applications. In the last decade or so we saw on-premise applications move into the cloud, we saw big chunky application monoliths transform into microservice architectures, and we also saw touchpoints moving away from the back end and closer to the users. Why are we doing that? Well, dynamically rendering a touchpoint for human interaction is kind of expensive – it costs CPU power and is also a bit meaningless for the back end anyway. So with single page applications, the user's own device does that for us, which is kind of neat: there is less load on our systems and a better user experience for the client, because there are no more page loads. Only the initial load for the whole touchpoint could be a problem though, because now we have to ship a big JavaScript bundle to the user. For backoffice applications that might be fine, but if you're running a website on the public internet that is searchable and you want traffic on it, load performance becomes incredibly important – even more important than it was before.

So here's an incomplete list of things you can do to improve load performance in a single page application: there's pre-rendering, server-side rendering, static site generation, there's rehydration and partial rehydration, there's chunk splitting, there's critical style inlining, there's lazy loading – they all sound more complex than they actually are, and frameworks can help. I could've gone on writing those for a couple more slides, but you get the idea.
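To make one of those points a bit more concrete, here's a minimal sketch of lazy loading via a dynamic import(). Bundlers like webpack or Vite typically split a separate chunk at this boundary; the './checkout' module and the '#open-checkout' button are purely hypothetical names for illustration.

```ts
// Minimal lazy-loading sketch: the chunk behind the dynamic import() is only
// downloaded when the user actually needs it, not during the initial load.
// './checkout' and '#open-checkout' are hypothetical names for this example.
const button = document.querySelector<HTMLButtonElement>('#open-checkout');

button?.addEventListener('click', async () => {
  // The bundler emits this module as its own chunk and fetches it on demand.
  const { initCheckout } = await import('./checkout');
  initCheckout();
});
```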

What you do need to do though, in order to see where you're starting from in terms of performance – and whether anything you're doing has any effect at all – is start measuring the performance of your touchpoint.

I'm kind of flying through this.

So let's talk about measuring performance. If you're working as a front end developer, you've probably heard of or used Lighthouse at some point. There are other tools too, but by now they largely follow what Lighthouse is doing anyway, so let's use it straight away. It is developed and maintained by Google. It gives you meaningful, comparable standards across websites – check out especially the Core Web Vitals if you haven't yet! It's open source and it has a command-line interface. So it's super cool and it's also super accessible: you can run it from the Chrome DevTools, there's PageSpeed Insights and all those places. But that accessibility also has some caveats – people sometimes use Lighthouse in a way that doesn't really give them accurate results.
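As an aside, besides DevTools and PageSpeed Insights you can also run Lighthouse from Node. A minimal sketch, roughly following the programmatic usage documented in the Lighthouse repository – the URL is just a placeholder:

```ts
// Minimal sketch of running Lighthouse programmatically (ESM, top-level await).
// Requires: npm install lighthouse chrome-launcher
import lighthouse from 'lighthouse';
import * as chromeLauncher from 'chrome-launcher';

const chrome = await chromeLauncher.launch({ chromeFlags: ['--headless'] });

const result = await lighthouse('https://example.com', {  // placeholder URL
  port: chrome.port,
  output: 'json',
  onlyCategories: ['performance'],
});

console.log('Performance score:', (result?.lhr.categories.performance.score ?? 0) * 100);

await chrome.kill();
```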

So here are some dos and don'ts that we found helpful in the past while looking at performance.

First, run on adequate hardware. As a developer, measuring things on your big and beefy developer machine will likely give you better results, no matter how much Lighthouse throttles the test in the background. Then again, if you're doing processor-heavy tasks, or maybe you're in a Zoom meeting with screen share enabled and all that stuff, and you run Lighthouse, you might get significantly different results again. So if anything, scroll down your Lighthouse report: you will find a CPU benchmark there, with which Lighthouse measures the power of your machine. That benchmark relates to your result as well – so be careful with that.
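If you save a run as JSON, that benchmark is part of the result as well, so you can sanity-check whether a weak score came from a busy machine. A small sketch – './report.json' is just a hypothetical path to a saved Lighthouse report:

```ts
// Minimal sketch: read a saved Lighthouse JSON report and print the machine
// benchmark next to the performance score. './report.json' is hypothetical.
import { readFile } from 'node:fs/promises';

const lhr = JSON.parse(await readFile('./report.json', 'utf8'));

console.log('Benchmark index:', lhr.environment.benchmarkIndex);
console.log('Performance score:', (lhr.categories.performance.score ?? 0) * 100);
```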

Next point, check your code as much as your content. Imagine you're measuring your performance, you get a result, you start developing some improvements and deploy them a couple of days later – and then there are three new images on your page, because a CMS manager has gone rogue. Also not an ideal case. Test something that doesn't change, that you can use as a reference point. For example, an Impressum (legal notice) page would be a good place to start.

Next, check the docs in the GitHub repository! I know, it sounds a little counterintuitive, because there is so much documentation on Lighthouse all over the place anyway. But if you're a developer and you really want to get into how Lighthouse works, how the score is calculated, what you can do with your machine to get more accurate results and all these things – look at the GitHub repository in detail! It has a great docs section, you will be happy!

Good, then coming to the don'ts.

Don't measure what is not part of your code. We've seen our clients use tag manager software and analytics; they do crazy stuff like generating heatmaps of all the users while they're using your website. That is an extra layer on top of your own code. Of course, as an end result it's always good to measure the overall performance too – you get a sense of where you are. But if you want to find out how you can improve your own code, don't measure tag managers – it doesn't make sense. You can simply block them out using the DevTools' built-in network request blocking, and you can do the same thing when you're using the CLI. It doesn't work for PageSpeed Insights though – which is kind of a shame.
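With the CLI and the Node API this is done via blocked URL patterns – the same idea as the DevTools request blocking. A minimal sketch; the patterns below are just examples of typical third-party hosts, not a recommendation list:

```ts
// Minimal sketch: block third-party requests (tag managers, analytics)
// during the run so you measure your own code. Patterns are examples only.
import lighthouse from 'lighthouse';
import * as chromeLauncher from 'chrome-launcher';

const chrome = await chromeLauncher.launch({ chromeFlags: ['--headless'] });

const result = await lighthouse('https://example.com', {  // placeholder URL
  port: chrome.port,
  onlyCategories: ['performance'],
  blockedUrlPatterns: ['*googletagmanager.com*', '*google-analytics.com*'],
});

console.log('Score without third parties:',
  (result?.lhr.categories.performance.score ?? 0) * 100);

await chrome.kill();
```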

Next thing: don't use your main Chrome profile for performance measurements. This one should be pretty clear, but still a lot of people don't do it. Set up a dedicated Chrome profile, give it a nice colour so that it's punchy and you can see at a glance that this is your performance measurement profile. Disable all the plugins – it will help you not only with measuring performance, but also when you go into the details of your analysis.

Last point with the don'ts: don't measure only once and expect meaningful results. This one is important, because there is always variance. Every time you load a page there are slightly different network conditions – especially on the public internet, with all the connections involved – so no two page loads are ever fully the same. The recommendation is at least five runs! Do that and you will get a much more accurate sense, especially if you measure over a longer period of time.
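A minimal sketch of what that can look like with the Node API – five runs against a placeholder URL, reporting the median score:

```ts
// Minimal sketch: run Lighthouse several times and take the median score,
// since individual runs vary with network and machine conditions.
import lighthouse from 'lighthouse';
import * as chromeLauncher from 'chrome-launcher';

const RUNS = 5;                        // "at least five runs"
const url = 'https://example.com';     // placeholder URL
const scores: number[] = [];

const chrome = await chromeLauncher.launch({ chromeFlags: ['--headless'] });

for (let i = 0; i < RUNS; i++) {
  const result = await lighthouse(url, { port: chrome.port, onlyCategories: ['performance'] });
  scores.push((result?.lhr.categories.performance.score ?? 0) * 100);
}

await chrome.kill();

scores.sort((a, b) => a - b);
console.log('All scores:', scores);
console.log('Median score:', scores[Math.floor(RUNS / 2)]);
```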

Good, the last thing I want to leave you with is: if performance is at all relevant to you, set up performance monitoring! Measure changes in performance over time and display them somewhere – we're using Grafana for that, and I put a little screenshot here on the slide. That also gives your measurements a dedicated environment, so you don't need to worry so much about your browser profile or what your personal CPU load is at the time. It can just run in the background and do the measuring for you. You also get a lot more data out of it, because you can compare things and actually see response times – Lighthouse gives you a lot of data that you can work with, and you can use it to improve your page speed.
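How you wire this up depends on your stack, so the following is only a sketch, not our actual setup: a scheduled script appends a timestamped line of Lighthouse metrics to a JSONL file that a dashboard like Grafana could pick up through a suitable data source. The audit IDs are standard Lighthouse audits; the file name and URL are placeholders.

```ts
// Monitoring sketch (not NETCONOMY's actual setup): run Lighthouse on a
// schedule and append timestamped metrics to a JSONL file for a dashboard.
import { appendFile } from 'node:fs/promises';
import lighthouse from 'lighthouse';
import * as chromeLauncher from 'chrome-launcher';

const url = 'https://example.com';  // placeholder URL

const chrome = await chromeLauncher.launch({ chromeFlags: ['--headless'] });
const result = await lighthouse(url, { port: chrome.port, onlyCategories: ['performance'] });
await chrome.kill();

if (result) {
  const { lhr } = result;
  const sample = {
    timestamp: new Date().toISOString(),
    url,
    score: (lhr.categories.performance.score ?? 0) * 100,
    // Standard Lighthouse audit IDs; LCP/TBT are in milliseconds, CLS is unitless.
    lcpMs: lhr.audits['largest-contentful-paint'].numericValue,
    tbtMs: lhr.audits['total-blocking-time'].numericValue,
    cls: lhr.audits['cumulative-layout-shift'].numericValue,
  };
  await appendFile('./lighthouse-metrics.jsonl', JSON.stringify(sample) + '\n');  // placeholder path
}
```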

That is it from me – I hope this was helpful to you! Take a look at netconomy.net/careers

See you around!

Learn more about the DevTeam at NETCONOMY