Transcript
            
            
              This transcript was autogenerated. To make changes, submit a PR.
            
            
            
            
Hello everyone, my name is Dmitrii Ivashchenko and I'm a software engineer at MY.GAMES. In this talk, we'll look at the differences between WebGL and the soon-to-be-released WebGPU, and learn how to prepare your projects for this transition. Let's begin by exploring the timeline of WebGL and WebGPU, as well as the current state of WebGPU.
            
            
            
WebGL, like other technologies, has a history that goes back a long way. The desktop version of OpenGL debuted way back in 1990. Then, in 2011, WebGL 1.0 was released as the first stable version of WebGL. It was based on OpenGL ES 2.0, which was introduced in 2007.
            
            
            
This release allowed web developers to incorporate 3D graphics into web browsers without requiring any extra plugins. In 2017, a new version of WebGL was introduced, called WebGL 2.0. This version was released six years after the initial one and was based on OpenGL ES 3.0, which was released in 2012. WebGL 2.0 came with several improvements and new features, making it even more capable of producing powerful 3D graphics on the web.
            
            
            
Lately, there has been growing interest in new graphics APIs that offer developers more control and flexibility, and three notable APIs here are Vulkan, Direct3D 12, and Metal. Together, these three APIs form the foundation for WebGPU. Vulkan, developed by the Khronos Group, is a cross-platform API that provides developers with lower-level access to graphics hardware. This allows for high-performance applications with better control over graphics resources. Direct3D 12, created by Microsoft, is exclusive to Windows and Xbox and offers developers deeper control over graphics resources. Metal, an API exclusive to Apple devices, was designed by Apple with maximum performance on their hardware in mind.
            
            
            
WebGPU has been making significant progress lately. It has expanded to platforms like Mac, Windows, and ChromeOS, and is available in Chrome 113 and Edge 113. Linux and Android support is expected to be added soon.
            
            
            
There are several engines that either support or are experimenting with WebGPU. Babylon.js fully supports WebGPU, while Three.js currently has experimental support. PlayCanvas support is still in development, but its future looks promising. Unity announced early and experimental WebGPU support in alpha version 2023.2, and Cocos Creator 3.6.2 officially supports WebGPU. Finally, Construct is currently supported only in Chrome 113 or later on Windows, macOS, and ChromeOS machines.
            
            
            
Taking this into consideration, it seems like a wise move to start transitioning towards WebGPU, or at least to prepare projects for a future transition. Let's take a closer look at some of the core pieces of the API. This won't be comprehensive, but we'll touch on all the most important bits.
            
            
            
The GPUAdapter is a pivotal component. Adapters represent the GPUs the device can access; this can even be software-based, like SwiftShader, and typically one adapter is returned at a time. However, you can request an adapter based on certain criteria, like power preference (high-performance or low-power). The adapter provides a snapshot of the GPU specification, such as the vendor (for example, NVIDIA) and the architecture (for example, Turing), and furthermore it outlines the features and limits that the GPU device is capable of.
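
For illustration, a minimal sketch of requesting and inspecting an adapter might look like this (TypeScript with WebGPU type definitions assumed; the option values are just examples):

```ts
// Minimal sketch: request a high-performance adapter and inspect it.
// Assumes a browser with WebGPU enabled (navigator.gpu is available).
async function inspectAdapter(): Promise<GPUAdapter> {
  const adapter = await navigator.gpu.requestAdapter({
    powerPreference: 'high-performance', // or 'low-power'
  });
  if (!adapter) throw new Error('WebGPU is not supported on this system');

  // Features and limits describe what this adapter is capable of.
  console.log('features:', [...adapter.features]);
  console.log('maxTextureDimension2D:', adapter.limits.maxTextureDimension2D);
  return adapter;
}
```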
            
            
            
The GPUDevice plays a central role. It serves as the main interface to the API and is responsible for creating resources such as textures, buffers, and pipelines. It comes equipped with a GPUQueue to carry out commands, and functionally it's quite akin to the WebGL rendering context.
            
            
            
Features in WebGPU are roughly equivalent to WebGL extensions. However, they are not universally supported across all systems. Each adapter provides a list of the available features; to activate them, they must be specified when requesting the device.
            
            
            
Limits are numerical constraints on GPU capabilities. A baseline exists that every WebGPU implementation must meet, and the adapter indicates the actual limits of the system. By default, only the baseline limits are active unless higher ones are specified during the device request.
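
As a hedged sketch, a device request that opts into one optional feature and one raised limit could look like this (the specific feature and limit names are just examples, not from the talk):

```ts
// Sketch: request a device, opting into an optional feature and a raised limit.
async function createDevice(adapter: GPUAdapter): Promise<GPUDevice> {
  const requiredFeatures: GPUFeatureName[] = [];
  if (adapter.features.has('texture-compression-bc')) {
    requiredFeatures.push('texture-compression-bc'); // example feature
  }
  return adapter.requestDevice({
    requiredFeatures,
    requiredLimits: {
      // Only limits listed here are raised above the baseline.
      maxTextureDimension2D: adapter.limits.maxTextureDimension2D,
    },
  });
}
```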
            
            
            
WebGPU enables rendering to the canvas. After creation, the canvas context needs to be configured to link it with a device. Multiple canvases can share the same device, and they can be reconfigured as necessary. The canvas then supplies a texture for rendering.
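
A minimal configuration sketch, assuming a `device` obtained as above:

```ts
// Sketch: configure a canvas for WebGPU and grab the texture to render into.
const canvas = document.querySelector('canvas') as HTMLCanvasElement;
const context = canvas.getContext('webgpu') as GPUCanvasContext;

context.configure({
  device,                                           // the GPUDevice created earlier
  format: navigator.gpu.getPreferredCanvasFormat(), // e.g. 'bgra8unorm'
  alphaMode: 'opaque',
});

// Each frame, the canvas supplies a fresh texture to render into.
const colorView = context.getCurrentTexture().createView();
```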
            
            
            
There are several resource types in WebGPU. First, we have the GPUBuffer. Creating one defines its size and usage, and it can be used for uniforms, vertices, indices, and general data.
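
For example, a small uniform buffer might be created like this (a sketch; the size is illustrative):

```ts
// Sketch: create a uniform buffer; size and usage are fixed at creation time.
const uniformBuffer = device.createBuffer({
  size: 64, // e.g. one 4x4 matrix of 32-bit floats
  usage: GPUBufferUsage.UNIFORM | GPUBufferUsage.COPY_DST,
});
```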
            
            
            
Next is the GPUTexture. Creating one designates its dimensions, along with its mip levels, sample count, format, and usage. Then there is the GPUTextureView. It is a subset of a texture used for sampling or as a render target, and you can specify its use as a cube map, an array texture, and more.
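
A sketch of creating a texture and a view onto it (formats and sizes are illustrative):

```ts
// Sketch: create a 2D texture and a view onto it.
const texture = device.createTexture({
  size: [1024, 1024],
  format: 'rgba8unorm',
  mipLevelCount: 1,
  sampleCount: 1,
  usage: GPUTextureUsage.TEXTURE_BINDING | GPUTextureUsage.COPY_DST,
});

// A view can expose the whole texture or a subset (mip levels, array layers, cube faces).
const textureView = texture.createView();
```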
            
            
            
The GPUSampler is also important. It dictates a texture's filtering and wrapping behavior. It's crucial to note that all these resources maintain a fixed shape after creation; however, their content can be modified.
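
For instance, a basic sampler might be described like this (a sketch; the modes are just examples):

```ts
// Sketch: a sampler describing filtering and wrapping behavior.
const sampler = device.createSampler({
  magFilter: 'linear',
  minFilter: 'linear',
  mipmapFilter: 'linear',
  addressModeU: 'repeat',
  addressModeV: 'repeat',
});
```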
            
            
            
The device comes with a default GPUQueue. Currently, it's the only queue available for use; however, future API versions might offer more options. This queue is essential for submitting commands to the GPU. Additionally, it features useful helper functions that assist in writing to buffers and textures, and these are the simplest way to update the content of those resources. It's highly recommended to make use of them.
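
The helpers in question are writeBuffer and writeTexture; a sketch of using them, assuming the `uniformBuffer` and `texture` created above:

```ts
// Sketch: the queue's helper functions are the simplest way to update resources.
const matrix = new Float32Array(16); // some uniform data
device.queue.writeBuffer(uniformBuffer, 0, matrix);

// Writing pixel data into the 1024x1024 rgba8unorm texture from above.
const pixels = new Uint8Array(1024 * 1024 * 4);
device.queue.writeTexture(
  { texture },
  pixels,
  { bytesPerRow: 1024 * 4, rowsPerImage: 1024 },
  [1024, 1024],
);
```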
            
            
            
To record GPU commands, start by creating a GPUCommandEncoder from your device. This allows you to transfer data between buffers and textures, and you can then initiate render or compute passes. Once you're done, it generates a GPUCommandBuffer. Remember, command buffers remain inactive until they are queued, and once a command buffer is submitted, it cannot be reused.
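
A sketch of the record-finish-submit flow (`srcBuffer` and `dstBuffer` are hypothetical buffers created elsewhere):

```ts
// Sketch: record commands, finish into a command buffer, then submit it.
const encoder = device.createCommandEncoder();

// Example command: copy 64 bytes between two buffers.
encoder.copyBufferToBuffer(srcBuffer, 0, dstBuffer, 0, 64);

const commandBuffer = encoder.finish();
device.queue.submit([commandBuffer]); // a submitted command buffer cannot be reused
```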
            
            
            
Passes play a significant role in GPU operations. A render pass can utilize GPU render pipelines, bind vertex or index buffers, issue draw calls, and write to one or multiple textures. On the other hand, a compute pass taps into GPU compute pipelines and is responsible for issuing dispatch calls. It's essential to note that while a pass is active, you can't record other command types. However, both render and compute passes have the capability to set bind groups.
            
            
            
To begin, a render pass requires you to provide details about its attachments. This includes the output destination and the methods used to load and store it. This is where the clearing of attachments takes place, and it's also where you set up multisample resolve targets, which are resolved at the pass's conclusion.
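
A sketch of starting a render pass with a single color attachment, using the `encoder` and `colorView` from the earlier sketches:

```ts
// Sketch: a render pass with one color attachment.
const pass = encoder.beginRenderPass({
  colorAttachments: [{
    view: colorView,   // where the output goes
    loadOp: 'clear',   // clear the attachment at the start of the pass
    clearValue: { r: 0, g: 0, b: 0, a: 1 },
    storeOp: 'store',  // keep the result when the pass ends
    // resolveTarget: resolvedView, // for multisampled rendering, resolved at pass end
  }],
});
// ...draw calls go here...
pass.end();
```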
            
            
            
Now let's explore the main high-level differences. When beginning to work with graphics APIs, the first step is to initialize the main object for interaction. This process has some differences between WebGL and WebGPU, which can cause some confusion in both systems. In WebGL, this object is called the context, and it represents the interface for drawing on an HTML5 canvas element. Obtaining this context is quite easy, but it's important to note that it's tied to a specific canvas. This means that if you need to render to multiple canvases, you will need multiple contexts.
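
For comparison, obtaining a WebGL context is a one-liner tied to a specific canvas:

```ts
// Sketch: in WebGL the context comes directly from one specific canvas.
const glCanvas = document.querySelector('#gl-canvas') as HTMLCanvasElement;
const gl = glCanvas.getContext('webgl2'); // or 'webgl'
if (!gl) throw new Error('WebGL 2 is not supported');
```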
            
            
            
WebGPU introduces a new concept called the device. The device represents a GPU abstraction that you will interact with. The initialization process is a bit more complex than in WebGL, but it provides more flexibility. One advantage of this model is that one device can render to multiple canvases, or even to none. This provides additional flexibility, allowing one device to control rendering across multiple windows or contexts.
            
            
            
Buffer management in both APIs looks similar. However, in WebGPU, once a buffer is created, its size and usage are fixed. It's also worth noting that you don't bind the desired buffer; instead, it is simply passed as an argument. This approach can be found throughout the whole API.
            
            
            
For shaders, the big change is that WebGL uses GLSL, while WebGPU uses a new shading language called WGSL. It's designed to cross-compile nicely to each backend's preferred shader variant. Note that in WGSL, the fragment and vertex shaders can be part of the same shader module, as long as they have different function names. This can be very convenient.
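
A minimal sketch of such a module, with both entry points in one WGSL source (the shader itself is illustrative, not from the talk):

```ts
// Sketch: one WGSL module holding both the vertex and fragment entry points.
const shaderModule = device.createShaderModule({
  code: /* wgsl */ `
    @vertex
    fn vs_main(@location(0) position: vec3<f32>) -> @builtin(position) vec4<f32> {
      return vec4<f32>(position, 1.0);
    }

    @fragment
    fn fs_main() -> @location(0) vec4<f32> {
      return vec4<f32>(1.0, 0.5, 0.2, 1.0);
    }
  `,
});
```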
            
            
            
WebGL and WebGPU take two distinct approaches to managing and organizing the graphics pipeline. In WebGL, the primary emphasis is on the shader program, which combines the vertex and fragment shaders to determine how vertices are transformed and how each pixel is colored. To create a program in WebGL, you follow a few simple steps: write the source code for the shaders and compile it, attach the compiled shaders to the program and then link it, activate the program before rendering, and transmit data to the activated program. That's all.
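
Those steps roughly translate to the following sketch (a generic helper, not code from the talk):

```ts
// Sketch: the classic WebGL program setup (gl is a WebGL2RenderingContext).
function createProgram(gl: WebGL2RenderingContext, vsSource: string, fsSource: string) {
  const compile = (type: number, source: string) => {
    const shader = gl.createShader(type)!;
    gl.shaderSource(shader, source);
    gl.compileShader(shader);
    if (!gl.getShaderParameter(shader, gl.COMPILE_STATUS)) {
      throw new Error(gl.getShaderInfoLog(shader) ?? 'shader compile failed');
    }
    return shader;
  };

  const program = gl.createProgram()!;
  gl.attachShader(program, compile(gl.VERTEX_SHADER, vsSource));
  gl.attachShader(program, compile(gl.FRAGMENT_SHADER, fsSource));
  gl.linkProgram(program);
  if (!gl.getProgramParameter(program, gl.LINK_STATUS)) {
    throw new Error(gl.getProgramInfoLog(program) ?? 'program link failed');
  }

  gl.useProgram(program); // activate before rendering, then upload uniforms etc.
  return program;
}
```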
            
            
            
This process provides flexible control over graphics, but it can be complicated and prone to errors, particularly for large and complex projects. When developing graphics for the web, it's essential to have a streamlined and efficient process, and in WebGPU this is achieved through the use of a pipeline.
            
            
            
The pipeline replaces the need for separate programs and includes not only the shaders, but also other critical information that is established as state in WebGL. Creating a pipeline in WebGPU may seem more complicated initially, but it offers greater flexibility and modularity. The process involves three key steps. First, you define the shaders by writing and compiling the shader source code, just as you would in WebGL. Second, you create the pipeline by combining the shaders and other rendering parameters into a cohesive unit. Finally, you activate the pipeline before rendering. Compared to WebGL, WebGPU encapsulates more aspects of rendering into a single object. This approach creates a more predictable and error-resistant process. Instead of managing shaders and rendering states separately, everything is combined into one pipeline object. By following these steps, developers can create optimized and efficient graphics for the web with ease.
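
A sketch of such a pipeline, reusing the `shaderModule` from earlier (the vertex layout and formats are illustrative):

```ts
// Sketch: a render pipeline combining shaders and fixed rendering state.
const pipeline = device.createRenderPipeline({
  layout: 'auto',
  vertex: {
    module: shaderModule,
    entryPoint: 'vs_main',
    buffers: [{
      arrayStride: 12, // 3 floats per vertex position
      attributes: [{ shaderLocation: 0, offset: 0, format: 'float32x3' }],
    }],
  },
  fragment: {
    module: shaderModule,
    entryPoint: 'fs_main',
    targets: [{ format: navigator.gpu.getPreferredCanvasFormat() }],
  },
  primitive: { topology: 'triangle-list' },
});
```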
            
            
            
Finally, we get to drawing. Again, WebGPU looks more complex, but that's because in this case we are more explicit about setting up the render target, whereas in WebGL there is a default one. During the actual rendering, however, WebGPU avoids setting up the vertex attribute layout, because that's already part of the pipeline.
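
A sketch of the full draw flow, assuming an open render `pass` like the one above, the `encoder` and `pipeline` from earlier, and a hypothetical `vertexBuffer`:

```ts
// Sketch: drawing with WebGPU; the attribute layout already lives in the pipeline.
pass.setPipeline(pipeline);
pass.setVertexBuffer(0, vertexBuffer); // vertexBuffer is assumed to hold vec3 positions
pass.draw(3);                          // one triangle
pass.end();
device.queue.submit([encoder.finish()]);
```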
            
            
            
Let's now compare uniforms in WebGL and WebGPU. Uniform variables provide constant data that can be accessed by all shader invocations. With basic WebGL, we can set uniform variables directly via API calls. This approach is straightforward, but it necessitates multiple API calls for each uniform variable. With the advent of WebGL 2, developers became able to group uniform variables into buffers, a highly efficient alternative to setting separate uniforms, by consolidating different uniforms into one larger structure. Using uniform buffers, all uniform data can be transmitted to the GPU in advance, leading to fewer API calls and superior performance.
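
A brief WebGL 2 sketch of that idea (assuming an existing `gl` context, a linked `program` with a uniform block named `Uniforms`, and some `uniformData`):

```ts
// Sketch: one uniform buffer object replaces many individual gl.uniform* calls.
const ubo = gl.createBuffer();
gl.bindBuffer(gl.UNIFORM_BUFFER, ubo);
gl.bufferData(gl.UNIFORM_BUFFER, uniformData, gl.DYNAMIC_DRAW); // one upload for the whole block

// Connect the block in the shader to binding point 0 and bind the buffer there.
gl.uniformBlockBinding(program, gl.getUniformBlockIndex(program, 'Uniforms'), 0);
gl.bindBufferBase(gl.UNIFORM_BUFFER, 0, ubo);
```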
            
            
            
In the case of WebGL 2, subsets of a large uniform buffer can be bound through a special API call known as bindBufferRange. Similarly, in WebGPU, dynamic uniform buffer offsets are used for the same purpose: all it takes is passing a list of offsets when invoking the setBindGroup API. This level of flexibility and optimization has made uniform buffers a valuable tool for developers looking to optimize their WebGL and WebGPU projects.
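
A sketch of the WebGPU side, with one large `bigUniformBuffer` (hypothetical) sliced per draw via dynamic offsets:

```ts
// Sketch: one big uniform buffer, different 256-byte slices per draw call,
// selected with dynamic offsets (the layout must opt in with hasDynamicOffset).
const bindGroupLayout = device.createBindGroupLayout({
  entries: [{
    binding: 0,
    visibility: GPUShaderStage.VERTEX,
    buffer: { type: 'uniform', hasDynamicOffset: true },
  }],
});

const bindGroup = device.createBindGroup({
  layout: bindGroupLayout,
  entries: [{
    binding: 0,
    resource: { buffer: bigUniformBuffer, size: 256 }, // a window into the big buffer
  }],
});

// Per draw call, pick which slice of the buffer the shader sees.
pass.setBindGroup(0, bindGroup, [0]);   // first object
pass.draw(3);
pass.setBindGroup(0, bindGroup, [256]); // second object (offsets must respect alignment, 256 by default)
pass.draw(3);
```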
            
            
            
An even better method is available in WebGPU: instead of supporting individual uniform variables, work is done exclusively through uniform buffers. Modern GPUs prefer data to be loaded in one large block rather than in many small ones. Rather than recreating and rebinding small buffers each time, creating one large buffer and using different parts of it for different draw calls can significantly increase performance. While WebGL is more imperative, resetting global state with each call and striving to be as simple as possible, WebGPU aims to be more object-oriented and focused on resource reuse, which leads to efficiency.
            
            
            
Although transitioning from WebGL to WebGPU may seem difficult due to the differences in approach, starting with a transition to WebGL 2 as an intermediate step can simplify your work. Transitioning from WebGL to WebGPU involves modifying both the API calls and the shaders. The WGSL specification facilitates a seamless and intuitive transition while ensuring optimal efficiency and performance on contemporary GPUs.
            
            
            
I have an example of a texturing shader written in both GLSL and WGSL. WGSL serves as the connection between WebGPU and the native APIs. Although WGSL appears to be more wordy than GLSL, the format is still recognizable.
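
For a flavor of the difference, here is an illustrative pair (not the exact shader from the slides): the same textured fragment shader in GLSL (WebGL 2) and in WGSL:

```ts
// Illustrative only: equivalent textured fragment shaders.
const glslFragment = /* glsl */ `#version 300 es
precision mediump float;
uniform sampler2D uTexture;
in vec2 vUV;
out vec4 outColor;
void main() {
  outColor = texture(uTexture, vUV);
}`;

const wgslFragment = /* wgsl */ `
@group(0) @binding(0) var uTexture: texture_2d<f32>;
@group(0) @binding(1) var uSampler: sampler;

@fragment
fn fs_main(@location(0) vUV: vec2<f32>) -> @location(0) vec4<f32> {
  return textureSample(uTexture, uSampler, vUV);
}`;
```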
            
            
            
The following tables display a comparison between the basic and matrix data types found in GLSL and WGSL. Moving from GLSL to WGSL indicates a preference for stricter typing and explicit specification of data sizes, resulting in better code legibility and a lower chance of making mistakes.
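
As a small illustration of that mapping (not the full table from the slides):

```ts
// Illustrative only: a few equivalent declarations.
const glslDecls = /* glsl */ `
float intensity;
vec3 position;
mat4 modelMatrix;
`;

const wgslDecls = /* wgsl */ `
var<private> intensity: f32;
var<private> position: vec3<f32>;
var<private> modelMatrix: mat4x4<f32>;
`;
```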
            
            
            
The manner of declaring structures has also been altered, with the addition of explicit syntax for declaring fields in WGSL structures. This reflects the push for improved clarity and simpler data structures in shaders.
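
A small sketch of a WGSL struct with explicitly typed fields and a uniform binding that uses it (illustrative names):

```ts
// Sketch: a WGSL struct and a uniform binding that uses it.
const wgslStruct = /* wgsl */ `
struct Uniforms {
  modelViewProjection: mat4x4<f32>,
  color: vec4<f32>,
}
@group(0) @binding(0) var<uniform> uniforms: Uniforms;
`;
```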
            
            
            
By altering the syntax of functions, WGSL promotes a unified approach to declarations and return values, which results in more consistent and predictable code.
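
For example, a WGSL function with explicit parameter and return types might look like this (illustrative):

```ts
// Sketch: WGSL function syntax with explicit parameter and return types.
const wgslFn = /* wgsl */ `
fn lambert(normal: vec3<f32>, lightDir: vec3<f32>) -> f32 {
  return max(dot(normalize(normal), normalize(lightDir)), 0.0);
}`;
```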
            
            
            
If you are working with WGSL, you'll notice that some of the built-in GLSL functions have different names or have been replaced. This is actually helpful, because it simplifies the function names and makes them more intuitive. It will also make it easier for developers who are familiar with other graphics APIs to transition to WGSL.
            
            
            
If you are planning to convert your WebGL projects to WebGPU, there are a lot of tools available that can automate the process of converting GLSL to WGSL. One such tool is Naga, a Rust library that can be used to convert GLSL to WGSL, and best of all, it can even be used right from your browser with the help of WebAssembly.
            
            
            
Now, let's talk about some differences in conventions between WebGL and WebGPU. Specifically, we will go over the disparities in textures, the viewport, and clip space.
            
            
            
When you migrate, you may come across an unexpected issue where your images are flipped. This is a common problem for those who have moved applications from OpenGL to Direct3D. In OpenGL and WebGL, images are usually loaded so that the first pixel is in the bottom-left corner. However, many developers load images starting from the top-left corner, which results in flipped images. Direct3D and Metal, on the other hand, use the upper-left corner as the starting point for textures. The developers of WebGPU decided to follow this practice, since it appears to be the more straightforward approach for most developers.
            
            
            
If your WebGL code reads pixels from the framebuffer, it's important to keep in mind that WebGPU uses a different coordinate system. To adjust for this, you may need to apply a straightforward y = 1 - y operation to correct the coordinates.
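
In code, that correction is a one-liner (a sketch, assuming coordinates normalized to the 0 to 1 range):

```ts
// Sketch: flip a normalized y coordinate when moving from WebGL to WebGPU conventions.
function flipY(uv: { x: number; y: number }) {
  return { x: uv.x, y: 1 - uv.y };
}
```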
            
            
            
If a developer encounters a problem where objects are disappearing or being clipped too soon, it may be due to differences in the depth range. WebGL and WebGPU have different definitions of the depth range of clip space: while WebGL uses a range from minus one to one, WebGPU uses a range from zero to one, which is similar to other graphics APIs like Direct3D, Metal, and Vulkan. This decision was made based on the advantages of using the range from zero to one that were discovered while working with other graphics APIs.
            
            
            
The projection matrix is primarily responsible for transforming positions in your model into clip space. One useful way to adjust your code is to ensure that the projection matrix generates depth output ranging from zero to one. This can be achieved by using certain functions available in libraries like gl-matrix, such as the perspectiveZO function.
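
A sketch of that with gl-matrix (the *ZO variants are assumed to be available, as in recent gl-matrix releases):

```ts
// Sketch: build a projection matrix whose depth output is in [0, 1] for WebGPU.
import { mat4 } from 'gl-matrix';

const aspect = 16 / 9; // illustrative aspect ratio
const projection = mat4.create();
mat4.perspectiveZO(projection, Math.PI / 3, aspect, 0.1, 100);
```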
            
            
            
Other matrix libraries also offer comparable functions that you can utilize. In the event you're working with an existing projection matrix that cannot be modified, there is still a solution: you can transform the projection matrix to fit the zero-to-one range by applying another matrix that modifies the range before the projection matrix. This pre-multiplication technique can be an effective way to adjust the range of your projection matrix to fit your needs.
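
A sketch of that pre-multiplication, again with gl-matrix; `existingProjection` is a hypothetical matrix that outputs depth in minus one to one:

```ts
// Sketch: remap a -1..1 depth projection to WebGPU's 0..1 range.
import { mat4 } from 'gl-matrix';

// zNew = 0.5 * zOld + 0.5 * w (values listed column by column, gl-matrix layout).
const depthRangeRemap = mat4.fromValues(
  1, 0, 0,   0,
  0, 1, 0,   0,
  0, 0, 0.5, 0,
  0, 0, 0.5, 1,
);

const webgpuProjection = mat4.create();
mat4.multiply(webgpuProjection, depthRangeRemap, existingProjection);
```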
            
            
            
Now let's discuss some tips and tricks for working with WebGPU. First of all, of course, minimize the number of pipelines you use: the more pipelines you use, the more state switching you have, and the lower the performance. This may not be trivial, depending on where your assets come from. Creating a pipeline and then immediately using it works, but don't do it. The create functions return immediately, and the actual work starts on a different thread; when you use the pipeline, the queue execution needs to wait for the pending pipeline creation to finish. This causes significant jank. Make sure you leave some time between creation and first use, or even better, use the createRenderPipelineAsync and createComputePipelineAsync variants. The promise resolves when the pipeline is ready to use, without any stalling.
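
A sketch of the async variant; `renderPipelineDescriptor` stands for whatever descriptor you would pass to the regular createRenderPipeline call:

```ts
// Sketch: async pipeline creation keeps compilation off the critical path.
const pipelinePromise = device.createRenderPipelineAsync(renderPipelineDescriptor);

// The promise resolves once compilation has finished in the background,
// so the first draw with this pipeline does not stall the queue.
const readyPipeline = await pipelinePromise;
```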
            
            
            
Render bundles are pre-recorded, partial, reusable render passes. They can contain most rendering commands, except for things like setting the viewport, and they can be replayed as part of an actual render pass later on. Render bundles can be executed alongside regular render pass commands. The render pass state is reset to defaults before and after every bundle execution. They exist primarily to reduce the JavaScript overhead of drawing; GPU performance is the same either way.
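
A sketch of recording and replaying a bundle, assuming the `pipeline`, `vertexBuffer`, and open render `pass` from earlier sketches:

```ts
// Sketch: record a bundle once, then replay it inside render passes.
const bundleEncoder = device.createRenderBundleEncoder({
  colorFormats: [navigator.gpu.getPreferredCanvasFormat()],
});
bundleEncoder.setPipeline(pipeline);
bundleEncoder.setVertexBuffer(0, vertexBuffer);
bundleEncoder.draw(3);
const bundle = bundleEncoder.finish();

// Later, inside a render pass:
pass.executeBundles([bundle]); // pass state is reset before and after execution
```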
            
            
            
So, transitioning to WebGPU is more than just switching graphics APIs. It is a step towards the future of web graphics, combining successful features and practices from various graphics APIs. This migration requires a thorough understanding of the technical and philosophical changes, but the benefits are really significant. So thank you for your attention, and I hope you enjoy the conference. See you.