Conf42 Machine Learning 2025 - Online

- premiere 5PM GMT

Reinventing Event Photography: How AWS AI and Serverless Transform Community Moments


Abstract

Imagine processing 8,000 photos in near-real time in a way no one has done before. I assembled a puzzle of diverse technical opportunities into an AWS-powered pipeline whose pieces worked together to sort photos, feed our social media, and deliver the moments to the community during AWS Community Day DACH 2024.

Transcript

This transcript was autogenerated. To make changes, submit a PR.
Hello and welcome to Conf42 Machine Learning. I'm Dmytro and I'm excited to have you here with me at my session today. I have a very special story from AWS Community Day DACH 2024: elevating event photography workflows with AWS. So I won't hold you up, let's go directly to the session. But before we begin (greetings from Vienna), I would like to mention that this is not just a technical story. This is a story about our AWS community, and about how technical creativity lets us achieve some magnificent things. Here I would like to quote: "Technology alone is not enough. It's technology married with the liberal arts, married with the humanities, that yields us the results that make our hearts sing." This is a story that fully lives up to that quote.

A quick introduction of myself. I am Dmytro Hlotenko, cloud architect at APA-IT, AWS Community Builder, and AWS user group leader in Vienna and Linz, where together with my colleagues I work on the AWS community in Vienna. I'm a big IT geek, I love everything related to tech, I hold a Master's in communications and a Bachelor's in business management, and I have been working with a focus on AWS for the last five years. And I'm a big photography fan. Those things, that I'm an IT geek, that I work with AWS, and that I'm a photography hobbyist, all collided here, and this is why I have this story for you.

In this session we will talk about the challenge. It starts with my passion for photography and my wish to take lots of magnificent pictures for our community: we put a big effort into preparing a Community Day with our team, and I wanted to make it memorable for us, for our speakers, for our sponsors, so that after some years we could look back and say: yes, that was the thing, and we did it together. We will then talk about how AWS let me achieve a much more efficient photo-processing workflow, one that delivers the photos to our marketing team, to the speakers, to everyone involved, and reduces the management and processing overhead that comes after a photo shoot. And we will look at the technical perspective and the results it brings.

But before we go to the technical details, some words about myself. I was raised surrounded by photography, and I'm a big fan of its technical side: I really like to research things like sensor manufacturing and image data processing. Photography is a big passion because it combines creativity with technical expertise, and it's a field where I can express myself. I have shot events such as Formula 1 and a factory racing team, and I have photographed many AWS-related events, including our community days and user groups: AWS stars such as Victoria, Ziman, Anton, Bianca, and lots of other exciting people in our community. I have also taken part in exhibitions and festivals, photographed Playboy models, and won competitions. So it's a pretty long journey that I pursue besides my main professional activity around AWS, and I brought all of this experience to the AWS Community Day.
At the event I took over 8,000 pictures of 30 speakers and about 650 attendees, and the main challenge was: what should I do with this amount of material? Even normal photo processing is a huge process. It starts with planning the photo shoot, and although you can reduce the amount of later work by shooting properly, a lot still remains. Since I'm a technician, my question was: how can I make this easier? First of all, I researched what commercial solutions the market offers. Since this story took place in the summer of last year, I was surprised that there were not many AI-powered solutions, and none of them were capable of doing what I wanted. And since we are builders and we love to build, I thought: okay, let's try to do it on AWS.

The whole process was planned together with our marketing team (thank you, Susanna, Hans, and Linda, you are amazing). We planned what, where, when, and how we wanted to shoot, so my task as a photographer was to be on time and capture the planned image. For example, we pre-planned LinkedIn posts together: the text and all the needed surrounding material such as photos, where and how, thinking about how to use everything properly to maximize the coverage of our media presence.

Then, of course, my part came, because I was actually doing the shooting. I was using two Sony Alpha cameras, because in my opinion the E-mount is the best system on the market, no one can keep up with it, and those cameras are extremely reliable. The main camera was a Sony Alpha 9 Mark III with a 70-200mm G Master lens, and the second camera was an Alpha 7 IV with a 24-70mm. I was shooting with the two cameras simultaneously to get different perspectives, but also to exploit their technical advantages. For example, thanks to the global shutter sensor of the Alpha 9 Mark III, I was not afraid of light flicker or similar problems, and I could run both cameras with a pretty aggressive setup, for example taking HDR-style images, which is absolutely possible with the current Sony sensors. Their autofocus is also extremely reliable, and that saved a lot of work, because the gaps introduced by faulty autofocus, especially on DSLRs that cannot do face tracking, cause more and more work later.

And this is the point where AWS is capable of stepping in. In this project I was trying to reproduce the process I go through myself when I sort pictures, the whole cognitive process. The machine was intended to do what I would do: my target was to use the technical opportunities of AWS to represent my way of thinking and the whole photography process, and of course also to automate the processing and perform automatic corrections. A full conjunction of different AWS services performs this task. But before I go directly to the technical solution: in all my previous sessions, if you have ever attended one of mine, I always argue that when we build something in IT, we should treat data and architecture as interchangeable between humans, machines, and software.
All of those must be orchestrated together, and when I start any kind of project on AWS, my goal is to achieve unity between technology and human needs. AWS acts as a connector between the technical systems, the software, and the human requirements, and using its services we can build exactly that. Talking about building: when I see some process or application running on AWS, I always try to imagine it as a sort of waterfall, which is why we have a waterfall image here. The current set of AWS services reminds me deeply of a Swiss Army knife: you have lots of services, but if you master serverless (Lambda, DynamoDB, Step Functions) and do it properly with CI/CD and CloudFormation, that is already your key to success. Add the junctions, say SQS or Kafka, whatever you like, and you can build resilient, scalable systems where the data flows in like a waterfall, the whole stream gets processed, and you deliver your technical vision for your business or your pet project.

This project did not appear out of nowhere. I thought a lot about how to connect the different pieces: for each step of the normal cognitive process I perform when processing pictures, which function and which service could be the backend for it? This vision always comes down to orchestrated AWS services working together, and our target as engineers is to interconnect them into a comprehensive solution. As I mentioned, it didn't come from an empty place; I had even spoken about the groundwork at a conference last year. I had done numerous variations of system integrations, data processing, and data collection from the third-party services we have at APA-IT, transforming all this data to be event-driven so that our teams, our users, people who are not AWS-proficient, could transparently use the system, get the data they need, and do their daily job. That experience, integrating our CI/CD processes, security processes, configuration item management, inventory management, image building, and so on, gave me the backbone that laid the foundation of this photography project.

The actual idea came from a talk with a colleague who was working with Amazon Rekognition for a proof of concept: the target was to train it to recognize Austrian celebrities, whom I unfortunately don't know myself, which makes it tricky even for Rekognition. But it was a use case, and it got me thinking. Then I actually forgot about it for a while, because we were deep in preparations; it was not only about the photos, as I was also helping with the sponsorship of the Community Day and with everything else. At some point I just started to think: okay, let's translate my AWS experience into saving a lot of time for myself, for my family, even for the beers. The result of translating my photography workflow, including the experience I already had with AWS services, is this, and I would call it the Community Photo Factory. The most important thing was, first of all, to get the pictures. That part I delegated to myself and my Sony cameras.
I really love them, as I mentioned, and they give us quality files to work with. The next target was to get those files into AWS. Of course we work with S3 bucket events and with EventBridge, so we have an event for everything that must be processed, and I put those events into an SQS queue, with a dead-letter queue of course, because we have to guarantee that everything gets processed. From there, two different Step Functions state machines step in.

The processing state machine handles the whole processing of the incoming files. It takes out the metadata we have in the file, and we perform pre-processing based on the weights that are used to calibrate Amazon Rekognition. We have a custom-written LibRaw Lambda, which makes it possible to work with the ARW files coming from the Sony Alpha cameras, lets us extract the data, and develops the raw files into different renditions. We store the metadata in various DynamoDB tables, we process the pictures, we sort them based on timestamp information, we recognize the speaker or the kind of session the photo was taken at, and we sort accordingly. Then, using the metadata extracted by LibRaw, we feed it through Bedrock to Claude Haiku, compare against comparable photo samples, select the editing strategy, and the result lands in the destination bucket. So this is the scheme that performs all the steps of the process: starting from the click of the shutter, the file gets uploaded to AWS, we pre-process the image, we recognize what is happening, we detect faces, we extract metadata, we compare with our already-known weights, we process it, and we end up with a picture that my colleagues can simply come and take, and that we can simply share with our speakers.

You may also notice the second state machine, for metadata. One evening I was wondering how to approach the selection, and thankfully I already had large archives: lots of photos from past community days, from our AWS community stage, from our user groups. I took pictures of some of the speakers, of Linda among others, just to perform some testing. An interesting fact: Claude is afraid of glasses. When you are selecting pictures, I think it is very important to have the person with open eyes in them; this is the core thing. And Claude, at that time 3.5, was failing here. I took pictures of Linda with both open and closed eyes and gave them to Claude with a prompt along the lines of: you are working for a professional press agency, and you are deciding whether this picture is suitable for publication. Claude classified any eye state of Linda as acceptable, and when I collected pictures of other people wearing glasses, Claude kept failing. That was very sad, because I had hoped to do it in a modern way by delegating it to the AI model, but unfortunately it failed here. Then I remembered the story with my colleague that I already mentioned, and I decided: okay, let's try Amazon Rekognition. And Rekognition basically handed me everything: all the data I would need, all the metrics, all the weights that I could process.
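To make the ingest chain described above more concrete (S3 object events routed through EventBridge into an SQS queue backed by a dead-letter queue, which then kicks off the Step Functions pipeline), here is a minimal sketch of the kind of glue Lambda such a setup could use. This is my illustration rather than the talk's actual code, and names such as PHOTO_PIPELINE_ARN are hypothetical:

```python
# Sketch: consume S3 object-created events (EventBridge -> SQS) and start
# the photo-processing Step Functions state machine for each new ARW file.
import json
import os

import boto3

sfn = boto3.client("stepfunctions")
STATE_MACHINE_ARN = os.environ["PHOTO_PIPELINE_ARN"]  # hypothetical env var


def handler(event, context):
    for record in event["Records"]:  # SQS delivers a batch of messages
        body = json.loads(record["body"])  # EventBridge-wrapped S3 event
        detail = body["detail"]
        bucket = detail["bucket"]["name"]
        key = detail["object"]["key"]
        if not key.lower().endswith(".arw"):  # only Sony raw files
            continue
        sfn.start_execution(
            stateMachineArn=STATE_MACHINE_ARN,
            input=json.dumps({"bucket": bucket, "key": key}),
        )
```

If starting an execution fails, the message returns to the queue and, after the configured number of receives, lands in the dead-letter queue, which is what provides the "everything gets processed or flagged" guarantee mentioned above.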
For example, as you can see, we get sharpness metrics, exposure metrics, and the different facial parameters shown on the slide. What I did was aggregate this data across our previous datasets, over the photos I had picked and the photos I had rejected, and that allowed me to set up thresholds. I collected those thresholds in DynamoDB, where they were referenced by the Lambda in the main processing workflow that decides whether we take a picture or not.

Okay, so we have the first piece of our puzzle. The second one: I could probably win an award for linking systems that were never intended to be linked. I don't think the engineers at Sony expected that someone would push files through the camera's FTPS upload service directly into AWS. But this is a normal approach in press work: at APA we have a lot of photographer colleagues who shoot sports and other live events, and especially at sports events, pictures get uploaded straight to an FTP host. So I thought, okay, let's use this function. AWS offers the Transfer Family, which did the job absolutely perfectly. But you have to be extremely careful about keeping a Transfer Family endpoint up and running: in my case it cost over $220, because I had assumed the pricing was closer to a serverless, pay-per-use model. It is not; you pay for every hour the endpoint exists. Thankfully, my AWS Community Builder credits compensated for my mistake. But yes, it is possible to point the Sony cameras at a Transfer Family endpoint and stream your pictures directly into the S3 bucket. And once the data is in an S3 bucket, we are in business.

As I already mentioned, I was collecting the metrics, so let's start with how I wanted to work with them. The training phase involved calibration with the existing datasets. First I collected pictures from meetups, passed them through Rekognition, and aggregated the data. For me the most important signals were whether the eyes are open and whether the picture is sharp; those are the two things I would personally check first when reviewing. It was also important to have high confidence in face detection. Among several faces we look for the biggest and the sharpest one, because when a few persons are in the frame, a soft main face can be a marker of faulty autofocus. We can also estimate the pose, so that I can automatically throw away crazy facial expressions that no one would post on LinkedIn. Basically I was using the DetectFaces API and the DetectLabels API against a dedicated S3 bucket of training photos, kept separate from production. With threshold analysis I extracted facial expressions and the raw metadata, and the raw metadata was the core here, because it let me implement the actual processing logic and the weights. Speaking about the parameters, the most important for me were sharpness, eyes-open confidence, and face-detection confidence, which we stored in DynamoDB. Once we have the thresholds, we can pre-warm the system to do what it is intended to do in the production workflow.
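As a hedged illustration of this threshold-driven selection (my sketch, not the talk's code; the table and attribute names are assumptions), the check could look roughly like this: call DetectFaces, pick the dominant face, and compare its scores against the calibrated thresholds stored in DynamoDB.

```python
# Sketch: decide whether a photo is a selection candidate using Rekognition
# DetectFaces metrics compared against previously calibrated thresholds.
import boto3

rekognition = boto3.client("rekognition")
thresholds = boto3.resource("dynamodb").Table("photo-thresholds")  # hypothetical table


def is_candidate(bucket: str, key: str) -> bool:
    resp = rekognition.detect_faces(
        Image={"S3Object": {"Bucket": bucket, "Name": key}},
        Attributes=["ALL"],  # include quality, eyes-open, emotions, pose
    )
    faces = resp["FaceDetails"]
    if not faces:
        return False

    # Dominant face: the largest bounding box is a proxy for the main subject.
    face = max(
        faces,
        key=lambda f: f["BoundingBox"]["Width"] * f["BoundingBox"]["Height"],
    )

    t = thresholds.get_item(Key={"profile": "speakers"})["Item"]
    return (
        face["Confidence"] >= float(t["min_face_confidence"])
        and face["Quality"]["Sharpness"] >= float(t["min_sharpness"])  # e.g. 70-80
        and face["EyesOpen"]["Value"]
        and face["EyesOpen"]["Confidence"] >= float(t["min_eyes_open_confidence"])
    )
```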
Here, as I already mentioned, we have a trigger, and it fires for each individual picture, which was extremely valuable: for the processing workflow it does not matter what the picture actually shows. It was built to work with the ARW files coming from the Sony cameras, and the whole process was completely orchestrated; I was extremely happy with the results. We extracted the metadata, performed the analysis with Rekognition to get the metrics, compared against the parameter thresholds, detected who the person actually is, and the system automatically took the decision: accepted or rejected. That is also why we have two separate buckets, because the AI can still make a mistake, so we used an approved and unapproved split here. The most important piece is the custom-written LibRaw Lambda, which uses the rawpy library; starting from the camera, we deliver pre-sorted, completely good pictures to our team.

Talking about the Rekognition API strategy: I mainly used the DetectFaces API, because I had to get the scores you can see on the screenshot, and I also used DetectLabels for the scene analysis. To reduce the Rekognition costs I did not use IndexFaces, because we already had the information about who is present, and I wanted to work specifically with one face; that was the most important. We also used DetectModerationLabels as a secondary filter. And to avoid overwhelming the service and hitting API throttling, we added jitter to the calls so we would not run into service API limits, and of course we used Lambda concurrency controls. One more important point: when I shoot, I often shoot series of pictures, so you can get almost identical frames with very tight timestamps. I sent such series as one batch to Rekognition, because sometimes you have four pictures and only one of them is sharp, and this aggregation reduced the number of API calls. So we squeezed even more efficiency out of Rekognition.

I also used AWS Lambda Powertools, which is an amazing library that gives you a lot of backbone to boost your development, for example extensive logging and error tracing. It is extremely useful, and you should use Lambda Powertools in your projects. Here you can see an example of a piece of my Lambda code. In terms of the calibration of the sharpness data, it is a logarithmic scale that usually goes from 25 to 100, and we targeted pictures hitting over 70%; I would even say 80 for the speakers and 65 for the more general photos, because those usually have more fine-grained details that get lost. To reinforce the quality of the eyes-open detection, I went through over 500 pictures manually, just to be confident enough, and I did not do any additional pre-training of Rekognition, because it was already very reliable. I absolutely love it.
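Since the slide with the Lambda code is not reproduced here, the following is my own sketch of what such a skeleton could look like with Powertools plus jittered backoff; the retry parameters, error-code list, and event shape are illustrative assumptions, not taken from the talk.

```python
# Sketch: Powertools-instrumented Lambda that analyzes a small burst of
# near-duplicate frames, with exponential backoff plus jitter on throttling.
import random
import time

import boto3
from aws_lambda_powertools import Logger, Tracer
from botocore.exceptions import ClientError

logger = Logger(service="photo-factory")
tracer = Tracer(service="photo-factory")
rekognition = boto3.client("rekognition")

THROTTLE_CODES = {"ThrottlingException", "ProvisionedThroughputExceededException"}


def detect_with_jitter(bucket: str, key: str, max_attempts: int = 5) -> dict:
    """Call DetectFaces, backing off with random jitter when throttled."""
    for attempt in range(max_attempts):
        try:
            return rekognition.detect_faces(
                Image={"S3Object": {"Bucket": bucket, "Name": key}},
                Attributes=["ALL"],
            )
        except ClientError as err:
            if err.response["Error"]["Code"] not in THROTTLE_CODES:
                raise
            delay = (2 ** attempt) + random.uniform(0, 1)  # jittered backoff
            logger.warning("Throttled by Rekognition", extra={"delay": delay})
            time.sleep(delay)
    raise RuntimeError("Rekognition still throttled after retries")


@logger.inject_lambda_context
@tracer.capture_lambda_handler
def handler(event, context):
    # One event carries a batch of near-duplicate frames from a burst series.
    results = [detect_with_jitter(f["bucket"], f["key"]) for f in event["frames"]]
    logger.info("Batch analyzed", extra={"count": len(results)})
    return {"faces": [r["FaceDetails"] for r in results]}
```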
The Lambda also checked whether a single eye or both eyes were recognized, but that was a second-priority metric, because I was mostly getting normal metrics with both eyes open. I also used the facial expression analysis, and one of the good ideas for developing this project further is that we could count the number of happy, surprised, calm, or angry faces to get a more statistical overview of the mood and atmosphere at the conference; we will speak about that a bit later. Pose assessment is important too, because we can already discard pictures that have strange hand positions and the like, in combination with a weird face, and that alone saved a lot of work. So we get properly exposed pictures from the set, with high-confidence eye detection, and we have already cut away a good chunk of input that would not be useful for our project.

Now, one of the pillars of this project. I thought a lot about what should be on this slide, because I considered different options, starting with running an EC2 instance with a GPU and some kind of Windows scripting automation to perform the editing. Then I considered running a commercial solution, but unfortunately only the Topaz tools provide CLI access, and I have not worked with Topaz much, so I abandoned that idea; I mostly use Luminar Neo and the DxO photo apps in my own processing. Luminar is amazing for portrait or studio sessions, and DxO is great when you are processing huge events and don't need to tweak the faces. Then I looked toward open source, and we have the very well-known LibRaw library, which is used for example in darktable and in RawTherapee, and I thought: give it a try.

You could ask me: Dima, why do you need raw processing at all, when you can already get JPEGs from the camera? The thing is, when you work with raw files, you have a file with 14 bits of information. In our situation, most speakers stand in front of projectors with bright or dark slides, and I deliberately avoided flash, because I didn't want to disturb the attendees or the speakers; I wanted to keep everyone focused on the technical topics and let them enjoy the conference without what is sometimes called "welding". Basically, I was a ghost. When you shoot JPEG, you have 8-bit files, and you can recover the highlights or shadows only a little bit. With raw, just by applying the right functions, you can recover the whole background and get an evenly exposed picture.

Working with the LibRaw library, I also considered whether to run it on ECS or as a Lambda. Since pictures arrive with gaps in between, there would be idle time when running a container, and I noticed that a Lambda with about one and a half gigabytes of memory gave me much better performance than a comparable container running on Fargate.
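Here is a minimal sketch of such a raw-development Lambda step, assuming the rawpy package (the Python wrapper around LibRaw) is packaged in a layer or container image. The parameter values are illustrative, and the AMaZE demosaic is only present when LibRaw is built with the GPL demosaic pack.

```python
# Sketch: develop a Sony ARW from S3 into a JPEG using rawpy (LibRaw).
import io

import boto3
import rawpy
from PIL import Image

s3 = boto3.client("s3")


def develop(bucket: str, key: str, out_bucket: str) -> str:
    raw_bytes = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
    with rawpy.imread(io.BytesIO(raw_bytes)) as raw:
        rgb = raw.postprocess(
            # Edge-aware demosaic; needs LibRaw with the GPL demosaic pack.
            demosaic_algorithm=rawpy.DemosaicAlgorithm.AMAZE,
            highlight_mode=rawpy.HighlightMode.Blend,  # pull back clipped highlights
            use_auto_wb=True,  # mixed stage lighting, let white balance float
            output_bps=8,      # 8-bit output is enough for web delivery
        )
    buf = io.BytesIO()
    Image.fromarray(rgb).save(buf, format="JPEG", quality=92)
    out_key = key.rsplit(".", 1)[0] + ".jpg"
    s3.put_object(Bucket=out_bucket, Key=out_key, Body=buf.getvalue())
    return out_key
```

rawpy also exposes extract_thumb(), which returns the preview the camera embeds in the raw file; that is the cheap path for the pre-processing scenario described next.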
The first cold response took under 10 seconds, and we already had a processed image in storage. I am using the AMaZE demosaicing algorithm. There are lots of algorithms, starting from the basic ones, but the idea of AMaZE is that it performs edge detection, which lets you reduce moiré and stair-stepping along the edges; it is one of the most advanced algorithms available in open source.

The LibRaw Lambda actually works in two modes, depending on the event it receives. The first scenario relies on the fact that every raw file (DNG, CR2, NEF, ARW, RAF) contains a preview miniature. Decoding the raw mosaic is a performance-demanding operation, which is why the camera always embeds a small preview inside the file, and photo-viewing software shows that preview instead of decoding the raw. I extract this picture to perform the pre-processing on Rekognition, which already saves Rekognition time and cost: we don't waste a full decode on every single image, because the preview resolution was already enough to perform the person examination. Then, for the right candidates, we take the bigger image and perform a more precise analysis of the quality, working with the more progressive options.

As I already mentioned, we had aggregated lots of metadata, so basically I was saying: okay, we have this picture, and based on this Rekognition data plus metadata, I would process it like this. We push the metadata of the sample, combine it with the weights, and it all goes to Claude through Bedrock, which suggests the processing scenario. And it was extremely good: it compensated the highlights very precisely, I was happy with how it adjusted the white balance, and all my profiles used extremely aggressive highlight and shadow reconstruction. With this Lambda I achieved a quality of color and processing that I could not achieve, for example, with the picture profiles on the cameras, mainly because of the dynamic range lost in 8-bit JPEG files. This piece did most of the job here, because selection is not the only thing that takes a lot of time: the actual processing can take even more, and it must be consistent. Sometimes you start processing one day, you come back the next day, and you look: oh my God, everything is green. Here it was consistent, very fast, very efficient; it is simply a must-have and the big win of this project.

As I already mentioned, we also had small batching logic, combining series of comparable pictures to reduce the number of calls to Amazon Bedrock, and all these miniatures were around 1,200 pixels. The Step Functions executions ran concurrently, and I dynamically adjusted the concurrency based on the number of events. Everything was parallel: pictures were coming one by one and were not interacting with each other. So that is the technical part of it.
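As an illustration of the strategy-selection step described above, a Bedrock call to Claude Haiku could look like the sketch below. The model ID is the public Claude 3 Haiku identifier, but the prompt, the response schema, and the function names are my assumptions rather than the talk's actual implementation.

```python
# Sketch: ask Claude Haiku on Amazon Bedrock for a raw-development recipe
# based on aggregated Rekognition metrics and raw-file metadata.
import json

import boto3

bedrock = boto3.client("bedrock-runtime")

PROMPT = (
    "You work for a professional press agency. Given these image metrics, "
    "reply with JSON only, with keys exposure_compensation, "
    "highlight_recovery (0-9), shadow_lift (0-100), and white_balance "
    "('auto' or a kelvin value).\n\nMetrics: {metrics}"
)


def suggest_edit(metrics: dict) -> dict:
    resp = bedrock.invoke_model(
        modelId="anthropic.claude-3-haiku-20240307-v1:0",
        body=json.dumps({
            "anthropic_version": "bedrock-2023-05-31",
            "max_tokens": 256,
            "messages": [
                {"role": "user", "content": PROMPT.format(metrics=json.dumps(metrics))}
            ],
        }),
    )
    payload = json.loads(resp["body"].read())
    return json.loads(payload["content"][0]["text"])  # the model's JSON recipe
```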
Now, looking at the results. First of all, it was extremely memorable when such amazing people as Luke and Jan published these pictures on their LinkedIn profiles. It is a good memory for them, and it supports the work they do as speakers, as technicians, as part of the AWS community. I got lots of amazing feedback by email, and it gave me a lot of energy, because physically it was very exhausting to shoot such a big event. From the AWS community side, we enhanced our impression and the community experience: we can use these pictures to create more community resonance, because we can show people, in a good way, with cool photos, what we are doing. And it is a good memory: one day all of us will be old, unfortunately, and we can come back to it and see, hey, we were cool. For the speakers it also supports their brand development and their employers.

For me personally it was a huge win. I spent, I would say, maybe 15-20 hours to get it done, working very relaxed without any rush or crunch, but it saved potentially over 60 hours of manual work, and I believe even more. From 8,000 pictures, the whole processing produced 4,200 candidates as pre-selections, and we only performed an additional selection based on the needs we had pre-planned with our marketing team.

Talking about the costs, it was absolutely worth it. I spent about $20 on Amazon Rekognition and about $30 on the AWS Lambda orchestration, DynamoDB, Step Functions, and Transfer Family. Counting the price only for the day of the conference, in total we spent $50, which is less than 1 cent per picture. That is amazing value, because my time, my health, and the mind I would otherwise burn on processing the pictures would definitely cost more than $50. And given such a big response, how much effort it saved, how it allowed us to develop the process, and that it gave me a good reason to give this talk for you, it was absolutely worth it; it did a job worth far more than $50.

And to reflect on the Steve Jobs quote from the beginning, that technology alone is not enough: I never speak about AWS as just a cloud provider. It is a toolkit that lets us solve problems in creative ways. When you are equipped with those tools and use your unique skills, you can create unexpected value. So, guys, use AWS, and don't think you are just running virtual machines; go beyond your engineering division and make real projects like this one, projects that can make an impact on something significant, for example on the AWS community. Here we have achieved a good foundation that in the future can be developed into an automatic collection, generation, and delivery platform for any kind of event, for example for publishing at real-time events, where people at the venue could scan a code and come and see the photos. We have the technical backbone, and for me the next task is to take it even further: not only to the point where you can come and take the pictures, but to the point where the pictures come to you.
Looking at the takeaways from this whole story: first of all, AWS services are amazing, and they can be combined creatively beyond their traditional use cases, beyond what they were intended to do, combining things that were intended to do something else but turn out to be useful, as we saw here. For example, the camera's FTPS function streams directly into an S3 bucket and lets us work with the data, and I combined Rekognition, Step Functions, Lambda, S3, and Bedrock in ways they were not exactly designed for. It is about knowing and seeing beyond the intended use of an AWS service. All these services are building blocks that can solve any problem in any domain; you just come and solve it. And I think it is not a common case that AWS services solve photography tasks the way a human would work with the same photo dataset.

This project also shows why serverless is amazing: you can delegate labor-intensive tasks without any management overhead. You just come and create the logic that brings economic and strategic business value to you. All this orchestration turned hours of manual work into an amazing thing, and since we pay only for what we use, it is also very cost-effective. So if you have projects with this kind of behavior, feel free to continue with serverless; it will solve your tasks. And most importantly, all this technical work allowed us to develop human connections and gave us a good memory of the conference that we made all together.

That is what it was about. Thank you very much. I am always happy to discuss anything: technical topics, AWS, whatever you like, even Formula 1. Please come to me or add me on LinkedIn; I am open to any kind of discussion. Thank you very much, and I hope you enjoy the conference. I am very thankful to Mark for having this idea and pushing this community; this is the second time this year, and I am excited to be here in support of you. So please enjoy the conference, I hope you liked the session, and we will see each other some other time in the clouds. Thank you.
...

Dmytro Hlotenko

Cloud Engineer - Architect - CloudOps @ APA-IT Informations Technologie GmbH

Dmytro Hlotenko's LinkedIn account


