Conf42 JavaScript 2025 - Online

- premiere 5PM GMT

Let Your Browser Take a Breather with Scheduler.yield()

Abstract

Modern web applications often overload the browser’s main thread, causing sluggish interfaces and poor responsiveness. This talk explores how JavaScript’s single-threaded execution model contributes to that problem, and how the new scheduler.yield() API offers a clean, effective way to give control back to the browser without breaking your code flow. We’ll walk through the limitations of traditional approaches, introduce task prioritization with scheduler.yield(), and show practical examples of how to make apps feel faster and more responsive. By the end, you’ll know how to let your browser take a breather, and your users will feel the difference.

Transcript

This transcript was autogenerated. To make changes, submit a PR.
Hi everyone. I'm really glad to welcome you to this wonderful event and to share something new and exciting with you. In my talks, I usually focus on two things: either new, lesser-known technologies, or old but overlooked ideas that I believe deserve more attention. And this topic is no exception. I'm going to talk about something experimental, but with a lot of potential: scheduler.yield(). Right now, this feature is still in its early stages, and not all browsers support it yet, but support is already arriving, and during the talk I'll also show some alternative ways to achieve similar behavior. These aren't perfect replacements, but they can still be useful in the meantime. Before we dive into the topic, just a few words about myself and why it's worth listening to what I have to say. I'm a front-end engineer with over six years of experience, an author of technical and scientific publications, and a speaker at global tech conferences. I'm a mentor and judge at international hackathons and innovation award programs, and I'm also involved in the open source community, including creating my own open source CSS library. I'm also a member of several professional associations. Now that you know a little bit about me, let's get to the point. So what is scheduler.yield()? It is a method of the scheduler interface from the new Prioritized Task Scheduling API. This method allows you, as a developer, to pause your JavaScript execution and explicitly yield control back to the main thread, so it can handle other pending, important tasks like user interactions: clicks, typing, et cetera. In simple terms, when you call scheduler.yield(), you are telling the browser: wait, take a breath, let's pause the current task and focus on other, no less important tasks; once you're done, come back and continue execution from where we left off. This makes your page more responsive, especially when running long or heavy JavaScript tasks.
It can also help improve metrics like Interaction to Next Paint, called INP, which is all about how quickly the browser responds to user input. Before we dive deeper, let's quickly go over a few basic terms that I'll be using throughout the talk. I'm sure many of you already know them, but a quick refresher never hurts. Main thread: this is the central place where the browser does most of its work. It handles rendering and layout, and runs most of your JavaScript. A long task: this is any JavaScript task that keeps the main thread busy for too long, usually more than 50 milliseconds. When that happens, the page can freeze or feel unresponsive. And a blocking task is a synchronous operation on the main thread that prevents the browser from processing other important things, like responding to clicks or updating the UI. Usually, long tasks are blocking tasks. The problem. To understand the beauty of scheduler.yield(), we first need to understand what problem it's trying to solve. And for that, let's quickly refresh how JavaScript processes tasks; in other words, how task processing in the browser works. I'm sure many of you have seen diagrams like this before: the task queue, the event loop, the call stack. This one isn't perfect, but it gives us the big picture. Let's walk through the main ideas step by step. All synchronous code goes straight to the call stack and runs line by line, function by function. It follows the LIFO principle: last in, first out. JavaScript runs in a single thread, which means it can do only one thing at a time. Asynchronous operations like setTimeout or fetch are handled outside the engine by the Web APIs provided by the browser environment. When they're done, they don't go back directly into the call stack. Instead, their callbacks are queued either in the microtask queue, you can see it here, like Promise.then() or queueMicrotask(), or in the task queue, like setTimeout or setInterval.
When the call stack is empty, the event loop checks the microtask queue and runs all microtasks one by one, in order. Only after that does it take one runnable task from the chosen task queue. Importantly, the task queues are task sets, not strict FIFO queues: the event loop picks the first task that is ready to run, not necessarily the one that was added first. If new microtasks are added during the process, they run before the next task from the task queue, so microtasks always get priority. This loop keeps going: all microtasks, one task from a task queue, repeat. New code gets into the call stack when new tasks arrive, like a user clicking a button or a new script being run, or when a microtask or a task from a task queue runs its callback. This is a very brief and superficial explanation, just to remind you how it works, since it ties in closely with our topic. Please note that the event loop itself is not part of the JavaScript ECMAScript specification; if you look into the ECMAScript specification, you won't find it there at all. The event loop is defined in the HTML standard. Problem description. Now that we have refreshed our understanding of how JavaScript executes tasks, let's take a closer look at the real problems that come with this model. The issue is simple: when a task takes too long on the main thread, it blocks everything else: user interactions, rendering updates, animations. This leads to UI freezes and poor responsiveness. The obvious thought might be: just don't write long or heavy functions, and the problem is solved. And yes, that's true; in an ideal world we would always split heavy code into smaller parts, optimize everything, and avoid blocking the main thread. But let's be honest, many of us have run into these issues even if we weren't the ones who originally caused them. Even if you were not the culprit of this behavior, you have to work with it.
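The ordering rules just described can be seen in a few lines of code. This is a minimal sketch (not from the talk's slides): synchronous code runs first, then all queued microtasks, and only then a task from the task queue.

```javascript
// Demonstrates the drain order of the event loop:
// sync code -> all microtasks -> one task from the task queue.
const order = [];

setTimeout(() => order.push('task (setTimeout)'), 0);          // task queue
Promise.resolve().then(() => order.push('microtask (then)'));  // microtask queue
queueMicrotask(() => order.push('microtask (queueMicrotask)'));
order.push('sync');

setTimeout(() => {
  // By now everything has drained:
  // sync -> microtask (then) -> microtask (queueMicrotask) -> task (setTimeout)
  console.log(order.join(' -> '));
}, 10);
```

Even though the setTimeout was scheduled first, both microtasks jump ahead of it, which is exactly why microtask-heavy code can starve the task queue.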
And to make this more concrete, let's simulate a simple but realistic case. Imagine we have to process a large array, and each element requires some non-trivial computation, something that takes time and uses CPU, which in turn blocks the main thread. For this, we'll create a function called blockingTask. This function acts as a blocking task, occupying the main thread for the specified period of time; it simulates that kind of heavy computation for each element of the array. There's nothing fancy about the function. All it does is accept an argument, a number of milliseconds: the minimum time the function will run, thus occupying the main thread. It creates an empty array, records the start time as the current time, and then runs a while loop until the specified time has passed. Inside the loop, it just does random, meaningless calculations to simulate load. Finally, it returns the result of the calculation. The function doesn't do anything useful, but it does simulate a real-world scenario of heavy load. This function will be used inside another simple function. Imagine a common situation where we need to loop through an array of data and apply that heavy work to each item of the array. To do this, we'll create a heavyWork function in which the following happens. First, it creates an array of 200 items, just the numbers from 0 to 199. I want to note that 200 items is not that many, but it'll be enough for us to see the essence of the problem. Then a new empty result array is created to store the processed values. Then a loop goes through the entire length of the data array. Inside the loop, we run the blockingTask function, simulating only 10 milliseconds of work for each element, and the result is added to the result array. Once again, I want to remind you that for the demo, the blockingTask function does not carry any semantic load.
It simply performs some imaginary resource-intensive work; in the real world, it could be some labor-intensive processing of each element. Finally, it returns the resulting array. And that's where the amazing part comes in: just 10 milliseconds per element and only 200 elements, but together they block the main thread for two full seconds. That's enough to cause a noticeable freeze in the UI: no clicks, no typing, just a frozen page. Problem demonstration. Now it's time to look at the problem not just in theory, but in action. This is not a full-fledged demo just yet; think of it as a simplified visual to help you clearly see the issue. Here's what you see. The left window, titled Configuration, lets you turn main thread blocking on and off, meaning whether the blockingTask function is actually running. You can also toggle the scheduler.yield() functionality; we will get to that part later. The window titled Heavy Task on the right runs the heavyWork function, the one that processes an array using blockingTask on each element if main thread blocking is enabled. And the window titled Logger just logs the current time to the console, including milliseconds. Let's see what happens when main thread blocking is turned off. The tasks are very light: it just loops over an array of 200 elements without any complex calculation. What do we observe? The user clicks the OK button; the heavyWork function runs and instantly returns. This is indicated by the message "Heavy task done" in the console, followed by the result, an array of numbers. Then the user clicks the Log button three times to log the time to the console; timestamps appear immediately, with a slight difference in time. They run the heavyWork function again, and again, instant response. Finally, the user closes two windows, which actually just removes those elements from the DOM. No delays, no hiccups. In this case, everything feels fast and responsive.
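The slide code itself isn't included in the transcript, so here is a sketch of what the two functions described above might look like. The names blockingTask and heavyWork follow the talk, but the internal details are reconstructed.

```javascript
// Busy-waits for at least `ms` milliseconds, keeping the thread occupied
// with meaningless math -- a stand-in for genuinely heavy computation.
function blockingTask(ms) {
  const result = [];
  const start = performance.now();
  while (performance.now() - start < ms) {
    result.push(Math.sqrt(Math.random() * 1000)); // meaningless work
  }
  return result;
}

// Processes 200 items, blocking ~10 ms on each:
// roughly 2 seconds of uninterrupted main-thread blocking in total.
function heavyWork() {
  const data = Array.from({ length: 200 }, (_, i) => i); // 0..199
  const result = [];
  for (const item of data) {
    result.push(blockingTask(10).length + item);
  }
  return result;
}
```

While heavyWork() runs, nothing else, no clicks, no repaints, can happen on the main thread.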
The browser has no trouble handling the interactions because the main thread stays free; tasks are performed almost instantly and consistently. Now let's enable main thread blocking, so that for each element of the array, the blockingTask function will be called with a delay of only 10 milliseconds. And now you can observe that user interaction with the UI elements has become less smooth: the UI freezes periodically. Let's break down what is happening here and what we can observe from it. The user presses the OK button, thereby launching the heavyWork function. And the first thing that occurs is that the OK button visually stays pressed. Why? Because the browser cannot repaint while heavyWork is still blocking the main thread, and it is important to understand that we are talking not only about the current task, but about the call stack as a whole. During this time, the user clicks the Log button four times. Nothing happens. The clicks are registered and their handlers are added to the queue, but the browser can't react. Only after heavyWork finishes do we see the console output for the heavyWork result, then the four timestamps, all printed in a batch, and only after that does the OK button change its state and become unpressed. Next, the user clicks the OK button again; same behavior, stuck button. Then, while the heavyWork task is running, they try to close the window by clicking the X icon three times. Again, no visual response; only once the task ends do we see the window disappear. And finally, one more attempt to run heavyWork and close the last window; same freeze. So you see the pressed button, and nothing happens; you see the pressed button again, nothing happens. It's lagging, it's frozen. What does this show? This simple demo shows how long tasks block the browser's ability to respond to user actions.
Even though each blocking call takes just 10 milliseconds, chaining 200 of them together results in a two-second freeze. The user can't interact with the buttons, the interface doesn't repaint, and events get queued up but not processed until the call stack is clear. This is not just a performance issue, it's a user experience problem, and that's exactly the kind of issue we want to solve, ideally without having to manually split our logic into dozens of callbacks. Problem solution. Now that we understand the problem, let's talk about the possible solutions. Of course, the best strategy is to avoid long tasks in the first place, by keeping code efficient and breaking things up early. But as we have seen, stuff happens, whether it's legacy code, unavoidable computation, or just not enough time to optimize. Sometimes we have to deal with it. Over the years, before the Prioritized Task Scheduling API appeared, various workarounds and tweaks were devised to improve responsiveness, but the core idea behind all of them, and behind scheduler.yield() as well, is pretty simple: break a task into smaller pieces, chunks, and once in a while, pause to let the browser catch its breath. In other words, we give the main thread a chance to run more urgent tasks, like user interactions and rendering updates, and then we come back to finish our own work. Here's what the concept of the heavyWork function looks like in pseudocode. First, you run a chunk of your task. Then you pause, yielding control to the main thread so the browser can handle other high-priority tasks like UI updates. And then you continue executing the function from where it left off. The old problem-solving approach. Before scheduler.yield() came along, the most common trick for dealing with a long blocking task was to use setTimeout.
By calling it with a zero delay, you add its callback as a task to the end of the task queue, allowing other tasks to run first. In other words, you tell the browser: run this bit of code later, after you handle everything else. And that's how we can give the main thread a short breather between chunks of heavy work. Here's what the updated heavyWork function might look like using this approach. Let's break it down. A Promise is created, and its executor runs immediately, scheduling a setTimeout with a zero delay. The timeout callback, which resolves the promise, is added to the end of the task queue. Because of await, the rest of the async function is paused: technically, its continuation is added to the microtask queue, waiting for the promise to resolve. The JavaScript engine checks the call stack; once it's empty, the event loop kicks in. First it looks at the microtask queue, but since the promise isn't resolved, there's nothing to run. Then the event loop picks a task from the queue, in our example the setTimeout callback, runs it, and this resolves the promise. Now the microtask queue contains the continuation of the async function, and it is run. In simple terms, the initial yield gives the browser a chance to catch its breath before the heavy work begins; calling it before doing any heavy work gives the browser a moment to re-render UI updates, such as unfreezing a clicked button. Next, we calculate how often we want to yield to the main thread, roughly every 25% of the work; this number can vary depending on how heavy the task is. Inside the loop, if the condition for the yielding interval is met, execution is yielded to the main thread, that is, the setTimeout technique is repeated, allowing the browser to process user interactions or draw the interface. Essentially, this approach works. It's relatively simple and does improve responsiveness, but there are trade-offs.
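Putting that walkthrough together, here is a sketch of the setTimeout-based version. It is reconstructed, since the slide code isn't in the transcript, and the parameters (item count, per-item blocking time, yield interval) are illustrative defaults.

```javascript
// Resolves in a new macrotask; awaiting it pauses the function and lets
// whatever is already queued (clicks, rendering) run first.
function yieldToMain() {
  return new Promise(resolve => setTimeout(resolve, 0));
}

// Stand-in for the heavy per-item computation from the earlier demo.
function blockingTask(ms) {
  const start = performance.now();
  while (performance.now() - start < ms) { /* busy-wait */ }
  return ms;
}

async function heavyWork(itemCount = 200, blockMs = 10, yieldEvery = 0.25) {
  await yieldToMain(); // let the UI repaint (e.g. un-press the button) first
  const data = Array.from({ length: itemCount }, (_, i) => i);
  const result = [];
  const interval = Math.max(1, Math.floor(data.length * yieldEvery)); // ~every 25%
  for (let i = 0; i < data.length; i++) {
    if (i > 0 && i % interval === 0) {
      await yieldToMain(); // breather: other queued tasks run here
    }
    result.push(blockingTask(blockMs));
  }
  return result;
}
```

With the defaults, the two-second block becomes four half-second blocks with gaps in between, which is enough for the browser to process pending clicks and repaints.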
One big issue is that setTimeout isn't built for precise scheduling. It puts tasks at the end of the task queue, and anything already in that queue can delay your continuation. For example, let's say some other part of the page uses setInterval to run a task regularly. Now your own task, the next chunk of the heavyWork function, might get delayed by one or more of those interval callbacks. The browser just runs whatever is next in line; you don't control the order. So while setTimeout lets you yield, you don't know exactly when you'll get control back. There are other ways to approach this situation. There's the requestAnimationFrame function, which lets you schedule work right before the next repaint; it's often used in conjunction with setTimeout and has similar drawbacks. Or requestIdleCallback, which runs your code during the browser's idle time; it's not quite an alternative, but it's good for background, less important work that helps keep the main thread free for more critical tasks. In general, we could discuss other strategies for solving and preventing such problems; however, to stay on topic, let's move on and see what scheduler.yield() brings to the table. So, scheduler.yield(). It's a modern way to pause execution and yield control to the main thread, which allows the browser to perform any pending high-priority work and then continue execution from where it left off. What actually happens under the hood when the await scheduler.yield() expression is reached? The execution of the current function in which it was called is suspended, and control is yielded to the main thread, thereby breaking up, or pausing, the current task. The continuation of the function, that is, the execution of the remaining part of it from where it left off, is a separate, newly scheduled unit of work in the event loop.
The beauty of scheduler.yield() is that the continuation after scheduler.yield() stays at the front of the queue and is scheduled to run before any other non-essential tasks that have been queued. The key difference from the setTimeout approach is that with setTimeout, the continuation typically runs after any new tasks that have already been queued, potentially causing long delays between yielding to the main thread and completing your work. The following diagram illustrates how the three approaches compare in practice. Let's look at them. In the first example, without yielding to the main thread at all, the long Task 1 runs uninterrupted, blocking the main thread and the UI accordingly. Then a user event is processed: a button click triggered during the execution of Task 1. And finally, Task 2 is executed, a setTimeout callback scheduled earlier, during the execution of the long task. In the second example, using setTimeout to yield to the main thread, the execution looks different. At first, the long Task 1 runs. Then, when the yield to the main thread happens, Task 1 pauses to let the browser breathe, and the button click is processed. But after the button click is processed, the setTimeout callback is executed first, since it could have been scheduled in advance during the execution of Task 1, and only after that is the continuation of Task 1 executed. And in the last example, using scheduler.yield(), after the long Task 1 has been paused and the user click event has been processed, the continuation of Task 1 is prioritized and runs before any queued setTimeout tasks. In summary, scheduler.yield() is a more intelligent and predictable way to give the main thread breathing room. It avoids the risk of your code being pushed too far back in the queue, and helps maintain performance and responsiveness, especially in complex applications. Priorities. So what causes such a difference in behavior? It's all about priorities.
As developers, we don't usually think about the order of execution of tasks in the event loop in terms of priorities. More precisely, we have a good understanding of what the microtask and task queues are and the order in which they run. But if you look deeper, you'll notice that there are already implicit priorities at play. For example, a button click handler, fired by user action, typically executes before a setTimeout callback, even though both are tasks from task queues. As mentioned earlier, scheduler.yield() is part of the Prioritized Task Scheduling API, an extensive and feature-rich interface that deserves its own separate, full-fledged discussion, which is clearly beyond the scope of this talk. Nevertheless, it's important to mention one of its key features: the introduction of a clear task priority model. The Prioritized Task Scheduling API simply makes these priorities explicit, making it easier to determine which task will run first, and enables adjusting priorities to change the order of execution if needed. Here's a quick look at the main priority levels it defines. First, user-blocking: the highest priority, for tasks that directly affect user interaction, such as handling clicks, taps, and critical UI operations. Then user-visible: tasks that affect UI visibility or content, but are not critical for immediate input. And third, background: tasks that are non-urgent, can be safely postponed without affecting the current user experience, and are not visible to the user. By default, scheduler.yield() has user-visible priority. The Prioritized Task Scheduling API also exposes the postTask method, designed to schedule tasks with a specified priority from the above. While I won't go into details about this method here, it is worth mentioning that if scheduler.yield() is called from within a postTask callback, it inherits that task's priority.
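To make the three priority levels concrete, here is a small sketch (not from the talk's slides). It feature-detects scheduler.postTask() and falls back to a plain macrotask in environments where the API hasn't shipped; the fallback necessarily ignores the priority.

```javascript
// Runs `cb` at the given priority via scheduler.postTask() when available;
// otherwise falls back to an ordinary macrotask (priority is then ignored).
function postTask(cb, priority = 'user-visible') {
  if (globalThis.scheduler?.postTask) {
    return scheduler.postTask(cb, { priority });
  }
  return new Promise(resolve => setTimeout(() => resolve(cb()), 0));
}

postTask(() => console.log('handle click'), 'user-blocking'); // highest
postTask(() => console.log('render content'));                // default: user-visible
postTask(() => console.log('send analytics'), 'background');  // lowest
```

Where the real API is available, the user-blocking callback is scheduled ahead of the user-visible one, which in turn runs ahead of the background one, regardless of the order in which they were posted.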
So, how do you use scheduler.yield()? Once you understand how it all works, the types of tasks, the problems caused by long blocking operations, and the priorities, the use of scheduler.yield() becomes straightforward, but it should be used wisely and with due caution. Here is an updated version of the heavyWork function using scheduler.yield(). Now, instead of setTimeout, you just need to call await scheduler.yield(), and the rest remains unchanged: in both places, instead of the setTimeout trick, we just call await scheduler.yield(). Now, when a user starts the heavyWork function that uses scheduler.yield(), the difference is immediately noticeable. Firstly, the OK button does not stick. And secondly, user click events on the Log button are successfully processed, which means the user's interaction with the page is not blocked. That is: at first, the heavyWork function was launched and the button was re-rendered without sticking. While the heavy task was being executed, the user pressed the Log button; the event was processed successfully and the data was printed to the console. Then the heavyWork function continued, and its final result was printed to the console. After completion, the user pressed the Log button again. In short, you can give your browser a break with just one line; you can see that there are no UI freezes, and we see the log almost immediately. Now that we have explored the theory, let's move on to practice and look at a real working demo. This is a simulated banking application. Of course, it's fictional and simplified, but it captures just enough of the real-world complexity to help us understand how blocking the main thread affects interactivity, and how scheduler.yield() can help. Here's what the user sees in the interface. Balance section: by default, the account balance is hidden behind a placeholder of asterisks. This is a familiar pattern in real banking apps, where sensitive information is hidden unless explicitly revealed by the user.
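Here is a sketch of that updated function. Again, the slide code is reconstructed, and scheduler.yield() is feature-detected with a setTimeout fallback so the snippet also runs where the API hasn't shipped yet.

```javascript
// Prefer scheduler.yield(): its continuation is prioritized ahead of other
// queued tasks. Fall back to a plain macrotask where the API is unavailable.
function yieldToMain() {
  if (globalThis.scheduler?.yield) {
    return scheduler.yield();
  }
  return new Promise(resolve => setTimeout(resolve, 0));
}

function blockingTask(ms) { // stand-in for heavy per-item work
  const start = performance.now();
  while (performance.now() - start < ms) { /* busy-wait */ }
  return ms;
}

async function heavyWork(itemCount = 200, blockMs = 10, yieldEvery = 0.25) {
  const data = Array.from({ length: itemCount }, (_, i) => i);
  const result = [];
  const interval = Math.max(1, Math.floor(data.length * yieldEvery));
  for (let i = 0; i < data.length; i++) {
    if (i % interval === 0) {
      await yieldToMain(); // the one line that gives the browser a breather
    }
    result.push(blockingTask(blockMs));
  }
  return result;
}
```

Compared to the setTimeout version, the only structural change is inside yieldToMain(); the loop itself is identical, which is exactly the appeal of the API.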
A button labeled Show Balance toggles its visibility. Next, a bank card: a visual representation of a bank card, shown front side by default, where some details are displayed: the card type in the top left corner, the last four digits, the card holder's name, and the payment system in the bottom right of the card. There are two buttons to the right of the card. Show Card Details flips the card on click to the back side, to view sensitive card data like its full number, expiration date, and security code. Although the card number is generally not considered private information, some applications still prefer not to show the full number by default, but only when the user requests it; I even know and use banks that do not allow you to see the card number at all, even in the application. And Generate Report: by clicking this button, the feature supposedly generates a list of transactions on the card and displays them in the table below. This imitates real functionality where a user can generate reports on their bank card transactions. In reality, these reports can be complex tables with many customizable filters and the ability to download the report as a file. Such operations might involve heavy computations processing a huge amount of data, making them resource-intensive and time-consuming. For the sake of the demo, it's simplified: under the hood, the Generate Report button triggers the previously discussed heavyWork function, which simply blocks the main thread using the blockingTask function, which was also discussed. After that, static mock transaction data is simply rendered into the table. The behavior of the application can be customized using the various settings of the configuration panel on the left side; you may have noticed its simplified version in earlier screenshots. Now it's time to explain what it does. Main thread blocking determines whether the main thread will be blocked.
In fact, when this option is enabled, the blockingTask function is executed. Scheduler yield toggles whether scheduler.yield() is used. Data array length controls how many elements are iterated by the heavyWork function: the more elements, the longer it takes. Blocking time duration specifies how many milliseconds each element of the array takes to process. And the yield interval defines how often scheduler.yield() is called, as a percentage of progress through the array; the lower this number, the more often it'll be called. In the earlier example, we used a 200-element array with a 10-millisecond delay and a 25% interval, a good balance for visible impact without excessive delay. With larger data sets, a smaller interval is often better, but, as always, it depends. Having sorted out all the functionality and configuration, let's walk through a real user scenario and see how blocking the main thread affects the user experience. To start, we enable main thread blocking and disable scheduler.yield(). We also increase the array length a bit, so the heavy operation takes longer, giving us time to observe the effects. So, the user clicks the Generate Report button. Behind the scenes, this triggers the heavyWork function, which processes 1000 elements, where each element takes 10 milliseconds. Watch what happens. The Generate Report button stays stuck: it doesn't unpress, and the UI doesn't re-render. While the report is being generated, the user tries to click the Show Card Details and then the Show Balance buttons, but nothing happens in response. The interface is completely frozen. There is no animation, no feedback, no sense of progress. This is a classic example of a bad user experience: the app appears frozen even though it's technically still working. The user doesn't know whether to wait or reload the page. Now let's address these shortcomings using scheduler.yield() by adjusting some configuration. Here's how the configuration now looks. The main thread is still blocked.
This time, the option to use scheduler.yield() is enabled. The array length is slightly increased, just for clarity. The blocking time remains the same, 10 milliseconds. And the scheduler.yield() interval is reduced to 5% for more responsiveness. Now, with the updated configuration, the same user flow looks completely different. The first thing that catches the eye is that after the Generate Report button has been clicked, it re-renders correctly, and a loading animation appears while the report is being generated. The user successfully interacts with the UI: they can flip the card and toggle the balance. The application remains responsive. Even if the animations are slightly less smooth, it's a huge step forward compared to the previous freeze. This is a much better experience: the user is informed, in control, and not left guessing whether the app is working at all. And all it took was one method call, just scheduler.yield(). Of course, the actual implementation can be further optimized, but even in this simple form, the difference is striking. As a conclusion, I want to say that today you learned about giving your browser a break, the importance of yielding to the main thread to let it perform higher-priority tasks, and the advantages and disadvantages of these techniques. There are certainly more nuances to cover, and the Prioritized Task Scheduling API has other capabilities that we did not cover in this talk, but my goal was to give you a solid foundation, enough to start experimenting, and enough to start thinking differently about how your code plays with the browser. Thanks for your attention, and give your browser a break once in a while.

Aleksandr Tkachenko

Senior Software Engineer @ Playtech


