Conf42 Machine Learning 2025 - Online

- premiere 5PM GMT

AI-Powered Startup Toolkit for a Product Designer


Abstract

Discover how AI is transforming the product design workflow—from ideation to iteration—with a powerful toolkit that empowers designers to build smarter, faster, and more user-centric startups.


Transcript

This transcript was autogenerated. To make changes, submit a PR.
Hi, my name is Nikita and I'm from Jazari. I work as head of design, and as a product designer, and as a communication and graphic designer as well. Jazari is a remit-now, pay-later financial service and application. Today I want to talk with you about AI tools, which can help a lot in the early stage of any company when you are doing design. So let's start.

When I left my last company and joined a startup, I realized very quickly that I was the only designer and I had to manage everything: pitch decks, marketing creatives, user flows, sometimes even backend logic discussions, and of course the user interface for the application. All at once. It was overwhelming, and I realized that this couldn't be done the traditional way anymore; I needed to rethink how design could scale. In 2024, I knew that working harder wasn't the solution, but working smarter was. So I started exploring how AI could help me speed up creative production, maintain quality, and also free my time for more strategic design thinking. This presentation is the story of how that journey unfolded and which lessons you can apply, too.

In early 2024, Midjourney had yet to develop a web-based interface. In other words, generating images required users to interact through Discord chat, which felt stifling, especially for workflows that required a more creative touch. A web-based Midjourney did exist, but it was in a closed beta phase, meaning that very few users were able to access it. Having come across plenty of other platforms, I settled on DreamStudio, which ran on Stable Diffusion XL 1.0. This was primarily due to wanting control over the graphical user interface, and the clean design DreamStudio had come to offer. Not only did the platform provide flexibility, but it also had a very fun and intuitive way of letting users experiment with prompts and tweak outputs.

But it came with limits, of course. Even with negative prompts, getting realistic images—say, a Pakistani woman without a head covering—was extremely difficult. The results felt rigid, stereotypical, and lacked depth. Despite the limitations, Stable Diffusion was good enough to generate the first assets: images for our early pitch decks, for example, or concepts like our first limited-edition payment cards. However, the generated output often had obvious flaws, like broken head and hand anatomy or awkward details, and we had to do significant retouching. It was a start, but it wasn't scalable for a growing company. Even so, Stable Diffusion was extremely useful during the early days of our startup: it helped me create a working model of our first limited-edition card. This prototyping step was pivotal in our product design. That added imaginative freedom in the beginning helped us formulate a brand and launch a vision.

And then came a great advancement: Midjourney version 6.1. This release was a watershed moment, not only in regard to features but also the entire user experience. Midjourney migrated from a Discord-only interface to a fully functional web application for the first time. Now the tool seemed to be designed for creators rather than mere developers or early-adopter tech people. The most striking change in version 6.1 was the level of customization: aesthetic preferences, visual consistency, and creative direction could now be controlled with much greater accuracy. After all, the aim was not just to produce random, stunning images; users strove to attain their vision seamlessly and reliably with each refined step.

Most importantly, it felt like transitioning from riding a prototype to feeling the thrill of driving a complete vehicle, as the images underwent a dramatic quality transformation, boasting cleaner lines, more delicate nuances in lighting, and a better grasp of complex prompts. At this stage, AI stopped feeling like a proof of concept, a pet project, or a whimsical tool for side activities; for concept art, visual storytelling, and rapid iteration, AI could now be relied on, cementing its presence as a critical partner in the creative production pipeline. So, in short, Midjourney introduced two powerful new concepts. The first is personalization: rating 50 images during onboarding taught the model my taste. The second is aesthetic tuning: stylization, weirdness, and variety settings fine-tune how creative or unusual results will be. This allowed me to shape not just outputs, but the entire feel of the brand's visuals.
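To make those controls concrete, here is a rough sketch of what such a prompt could look like in Midjourney's Discord syntax; the subject and the values are made up for illustration. (The Stylization, Weirdness, and Variety sliders in the web app correspond to the --stylize, --weird, and --chaos flags, and --p applies your personalization profile.)

/imagine prompt: young woman paying with a mint-green card at a street market, soft morning light, editorial photo --stylize 400 --weird 250 --chaos 15 --p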
Midjourney is superior in numerous ways. When looking at issues such as the quality of the generated image, Midjourney often outperformed other tools in creating beautiful images, from both a technical and an aesthetic perspective. It generates detailed compositions, complete with correct anatomy, sophisticated idea clusters, hues that are vibrant as well as balanced, and lower levels of artifacts and visual inconsistency. It's efficient, especially with higher-tier plans, offering fast, dependable performance, which helps produce creative iterations seamlessly and effectively.

With Midjourney, I managed to achieve several remarkable images in my initial trials. It revealed itself to be a helpful partner when creating stories and drafting ideas for campaigns, revealing its full beauty not merely in providing strong composition, but in individualized, tailored AI-generated images built around focused narratives. I can finally state that it clicked for a reason: Midjourney provided effortless flexibility and sought-after narrative advancement. I remember one image that was particularly eye-catching; the atmosphere and tone alongside the composition matched perfectly with what I had visualized. That was the starting place for the idea for me: a sandbox to try different prompts, an attempt to see how far prompts can take visual narratives.

The visuals created weren't limited to internal trials; they were integrated into live campaigns. One of the initial images was incorporated into a performance ad creative on Meta. As expected, the ad performed quite well: the image captured users' attention fairly quickly in feeds, leading to a significant increase in click-through rate when compared against our standard creatives. This confirmed that AI-generated visuals do have the potential to enhance engagement in paid media campaigns. The early success of this achievement encouraged us to focus on developing a broader strategy that incorporates deeper exploration of generative content.

Just a heads-up: no matter which neural image generator you use, whether it's Midjourney, Stable Diffusion, or anything else, you'll always need to retouch the results. AI-generated visuals often have small flaws, like weird hands, awkward shapes, or inconsistent textures, that need a human touch to fix. And here's the thing: speed matters. You and your marketing team are usually working against the clock, so instead of firing up heavy tools like Photoshop, the fastest and most efficient way to polish up images is to do it directly in Figma.
It's lightweight, collaborative, and way faster for quick adjustments, perfect for when you are prepping content for campaigns, social posts, or product mockups. After all, it's not about spending hours perfecting a single image; the real value comes from quickly iterating on ideas and moving on to the next creative concept that can make a real impact. Here's what a fast retouch workflow in Figma looks like, for example.

Maybe the most perplexing thing about Midjourney was grappling with its tendency to misinterpret intricate directives. While I appreciate the tool's pursuit of artistic concepts, once you try to impose any specific conditions, things fall apart. You might have a particular mental image: an object, its relations to other elements in the environment, texture or style, and even an overarching artistic aesthetic. Midjourney often works sideways, meaning you offer a vision, and an AI trained to guess then builds something that merely resembles it. At one point I even found myself wondering, you know what, it might just be easier to hire a photographer and go looking for cricket balls than to spend all this time begging Midjourney to make the specific image in my imagination. The tool doesn't lack power; rather, arbitrary novelty takes charge over structured creativity. But then, suddenly, it clicks, and after 120 variants, here it is. The third in a row is still not perfect, but it's good to go for small tasks, such as using it as an illustration for the pitch deck again, or within an app promo section.

One of the features I find really useful in Midjourney is the use of style references: essentially curated image collections that act like visual modifiers. Each reference is assigned a unique code, like sref 680 and so on, and you can even create your own custom one. It's a bit like Pinterest, but with superpowers. Think of it this way: instead of browsing folders full of images and manually drawing inspiration, you can feed an entire reference collection into Midjourney as context and let it generate new images based on that visual DNA. It's a creative shortcut and a way to inject coherence and intentional style into your outputs (see the example sketch after this section). When I was working on my project, I had a pretty clear mental image of my target audience, but finding the right visual tone—that sweet spot blending fashion realism with a touch of charm and a hint of surrealism—wasn't easy. I dove into the Explore tab in Midjourney and combed through frequently updated references. It felt a bit like a treasure hunt, but eventually I found the visual style that clicked. So it looks like this outcome.
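As an illustration of the style-reference feature described above, a prompt with an sref code could look roughly like this; the subject and code value are hypothetical, and --sw (style weight) is the optional flag that controls how strongly the reference influences the result:

/imagine prompt: portrait of a smiling customer holding a glowing payment card --sref 680 --sw 300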
Okay, so now let's talk about OpenAI and Sora. And by the way, this is all made through Midjourney, and this is the ready-to-go mockup we used in our previous and current ad campaigns on Meta.

So, let's talk about Sora and ChatGPT. OpenAI, on the heels of launching Sora, its new streamlined image generator, announced on March 25 an update to its image-generation features with GPT-4o, now fully incorporated into the ChatGPT experience. This release was not only a remarkable achievement on the technological front; it also marked a nuanced change in our branding and creative focus: we have shifted to paying more attention to representation, seeking to populate our visuals and communications with people from as many different backgrounds as possible.

Unlike most tools, laden with countless menus, toggles, and sliders, Sora is free from clutter, while everything else is left to what I would essentially call a human-and-AI collaborative visioning process. The user gets the opportunity to set basic parameters such as image dimensions, the expected number of outcomes, and even references. It's not about fine-tuning controls; it's about vision, feeling, and the descriptions that accompany the request. The absence of clutter makes the interface clean, enhancing focus and creating a feeling, unlike most modern interfaces, of storytelling rather than technical configuration. Here's how the main page looks, for example, together with an Explore section where the most popular prompts and images made by other users are displayed.

And let's see some first experiments with Sora. When I first got access to Sora, I was curious, but still cautious. I had seen impressive results online, but I wanted to see how well it would respond to my own creative directions. So I took some of my favorite Midjourney-style prompts and adapted them for Sora to see how it handled cinematic language, mood, and details, and the result genuinely blew me away. For the first time, the AI seemed to truly get what I was imagining, without endless tweaking or rewriting prompts. Skin tones were quite nuanced and natural; photo angles felt intentional, as if shot by a professional; expressions were subtle, human, emotive, and believable; and backgrounds weren't just fillers—they were crisp, coherent, and detailed. It wasn't just impressive; it felt like a creative breakthrough. Finally, I had a tool that could translate my mental image into something visually tangible, with almost uncanny precision.

Of course, every tool has its downsides. In my attempts at complex scenes, with multiple people interacting with one another and displaying motion, Sora had some difficulties. Achieving this control took several iterations, meticulous prompting, and, most of all, time. That said, once controlled, Sora produced polished marketing creative assets with little to no editing.

Like other neural tools, Sora also has some nifty features, for instance, enhancing image resolution. It's amazing how quickly you can transform an image to meet higher quality standards, making it even more versatile for creative professionals who need polished visuals on the fly, and then you can finally get a nice result for product and marketing purposes, like in these shots.

Oh, and speaking of Sora, I discovered something really interesting: it has the capability to remove the background from images, and it outputs a transparent PNG file with an alpha channel. This feature is beneficial when planning to add graphics, build composite images, or layer elements, as you can work freely without a background. If you prefer Figma, their recent update has a built-in background remover, so you are still covered: you can do everything on the platform without switching tabs or logging out, and it makes the whole process effortless.
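As a side note, if you wanted to reproduce that transparent-PNG trick programmatically rather than through the Sora web interface, a minimal sketch using OpenAI's Images API could look like the following; the model choice (gpt-image-1), the background parameter, and the prompt are assumptions for illustration, not something covered in the talk:

import base64

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

result = client.images.generate(
    model="gpt-image-1",  # assumed model; the talk used the Sora web UI
    prompt="studio product shot of a mint-green payment card, no background",
    size="1024x1024",
    background="transparent",  # request an alpha channel
    output_format="png",       # PNG, since JPEG can't carry transparency
)

# gpt-image-1 returns base64-encoded image data
with open("card_transparent.png", "wb") as f:
    f.write(base64.b64decode(result.data[0].b64_json))

The transparent background is requested together with PNG output because only formats with an alpha channel can preserve it.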
So, a few valuable lessons from this journey. To start, remembering the evolution of AI models is critical; understanding how quickly they are changing can be overwhelming, and all that needs addressing is remaining agile. In this context, one's mindset should evolve far faster than the technology. This means being open to learning and adjusting to frameworks and capabilities that may emerge later.

Moving on, AI tools are not systems that obliterate designers' jobs; rather, they are an empowerment tool. The argument comes down to whether or not you will utilize the tool, or, in essence, the manner in which you as a designer will be able to utilize it. Mechanisms can assist you; however, the ability to use them proficiently separates an expert from a dabbler.

Finally, getting too attached to one tool or platform can reduce productivity drastically. To rephrase: all the available options in the toolkit can be unhelpful if your creativity is stuck. Equally, creative intuition is where your value lies, and it is more important than any gadget or tool. That said, one's mind as a tool never goes out of style; creative empathy and vision are valuable assets which unlock endless possibilities.

So, thank you. I hope you really enjoyed this speech. Thank you.

Nikita Poloznikov

Head of Design @ Jazari

Nikita Poloznikov's LinkedIn account


