RIP FLUX! NEW FREE NSFW MULTIMODAL AI KING IS HERE!
The ultimate multimodal image editing AI model is here and it’s just insane. Hello humans, my name is K, your AI overlord. And boy oh boy, do I have some mind-blowing stuff for you today, because we just got a brand new 12 billion parameter model from the makers of Flux, called Kontext: a multimodal model that can automatically edit images with incredible precision. The amount of things you can do with this model is just mind-blowing, and I’m going to show you everything about how to install it and use it on your local computer today. So, that being said, sit back, relax, and let’s go. To install Kontext, you have two options. The first is, of course, the one-click installer that is available for my Patreon supporters. Just download the file onto your computer and double-click it. It will ask you which model you want to download based on your GPU VRAM: if you have less than 12 GB, choose option number one; between 12 and 16 GB, choose option number two; and more than 16 GB, choose option number three. In my case, since I have 24 GB of VRAM, I’m choosing number three and pressing Enter. It will then ask if you want to download the FP8 diffusion model; in my case, I’m choosing yes and pressing Enter. And then here is a brand new addition that I’ve never talked about before and have now added to this installer: the ability to install Nunchaku, which is basically a super ultra fast version of the model. I highly recommend installing Nunchaku for this model, it is really, really good. Here, if your NVIDIA GPU is a 5000 series, choose option number one; if it’s not, choose option number two. Now, I do have a 5090, but I haven’t installed it yet and I’m still running my 4090, so in my case I’m choosing option number two and pressing Enter. It will then download and install ComfyUI with all the custom nodes and models you need, as well as Nunchaku. You really don’t need to do anything, and once it’s done, it will automatically launch ComfyUI, ready to be used. Simple as that; everything is done automatically for you. Now, for this model I definitely recommend installing a brand new ComfyUI from scratch, but if you already have an older ComfyUI installation and just want to install the models and the nodes, you can also use the model and node installer .bat file: select that installer, paste it inside your ComfyUI folder, and run it exactly like in the first step. Once you are running ComfyUI, make sure you’re on the latest version, so click Update All to bring everything up to date. And once everything is updated, for this video I prepared a special Kontext image editing workflow that is available on my Patreon. Just download the workflow, drag and drop it into ComfyUI, and big boom, bada boom, there you go. For the manual installation, all the links are in the description down below.
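By the way, if you ever want to script Kontext outside of ComfyUI, here is a minimal sketch of loading the free Dev model with the Hugging Face diffusers library. This is not the workflow shown in the video; it assumes a recent diffusers release that ships FluxKontextPipeline and that you have accepted the model license on Hugging Face, so treat the class name and repository ID as things to verify yourself.

```python
# Minimal sketch: loading FLUX.1 Kontext [dev] with diffusers instead of ComfyUI.
# Assumes a recent diffusers release that includes FluxKontextPipeline and a
# GPU with enough VRAM for the bf16 weights (~24 GB); smaller cards can try
# CPU offloading as shown below.
import torch
from diffusers import FluxKontextPipeline

pipe = FluxKontextPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-Kontext-dev",  # gated repo: accept the license first
    torch_dtype=torch.bfloat16,
)
pipe.to("cuda")

# On GPUs with less VRAM, offloading trades speed for memory instead:
# pipe.enable_model_cpu_offload()
```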
Okay, so now that everything is installed, we can finally have some fun. But before we begin, let’s first try to understand what all of this stuff on the screen is, why there are so many workflows, and what Flux Kontext actually is. As I explained earlier, Flux Kontext is a model released by the makers of Flux, Black Forest Labs. It’s a multimodal 12 billion parameter model that is kind of like a mix between ControlNet, IP-Adapter, and an automatic version of inpainting, meaning it can automatically change and modify an image, use a reference image to create other images, and create different situations and environments based on that reference, all while keeping the consistency intact. This model is really, really ultra powerful. We first got paid versions from Black Forest Labs called Pro and Max, but a few days ago we finally got a free version called Dev, which does pretty much the exact same thing. And as I said, this model is very, very good and very, very powerful, but the reason so many people are on edge is that it’s actually quite difficult to use, because you need to be extremely precise in the way you prompt the Kontext model. I put the official description right here in the workflow; it explains everything you need to know and everything the model can do, along with a direct link to the prompting guide, and I highly recommend you read it because it is very, very important. Please don’t start asking questions if you haven’t read the guide. Even though I will explain most of it in this video, I still recommend reading the guide yourself. Okay, so that being said, let’s actually try this model out ourselves and see exactly what it can do in practice. Now, let me quickly go over why there are so many different workflows, hence all these little descriptions I wrote. Although it looks like there are five different workflows, there are really only two: an experimental workflow and the official native workflow. The reason they are all separated is to make things easier for people with low-VRAM GPUs so that they have more options. For example, if you have a low-VRAM GPU with 6 or 8 GB of VRAM, you should probably use the GGUF versions as well as the Nunchaku one, whereas if you have more than 16 GB of VRAM, you should probably use the FP8 version as well as the Nunchaku version. The reason there is only a normal Nunchaku workflow is that, unfortunately, Nunchaku is not compatible with the experimental workflow. But don’t worry, I will show you and explain everything at the end; it doesn’t really matter for now. If there is one model, one workflow, that pretty much everyone should try at least once, no matter what GPU you have, it’s probably the Nunchaku model, which gives you ultra fast generation speeds, and this is what I’m going to choose right now. So, once again, if you’ve never used my workflows before, you have a little button here that lets you activate the workflow you want to use, and most of the time everything you can and should change is located on the left-hand side. Here, make sure you’re choosing the right model; there’s a little note about it: if you have a 5000 series GPU, you need to choose the FP4 variant, otherwise you choose the INT4 variant, which is what I’m using since I have a 4090. Oh, and also, if you have a 20 series GPU, you need to choose FP16 instead of bfloat16 right here. Now, it’s fairly easy to use: here is where you input your LoRA, and here is where you input your prompt.
Here is where you input a custom resolution if you want to, but to activate it you need to change this from false to true, as I wrote right here; by default it is false, so it will use the automatic resolution determined by Flux. I will show you later what that means. Also, if you selected false above and want to automatically use the original image resolution, you need to select this node and then click here to bypass it, so the generation uses exactly the resolution of the input image; otherwise it will automatically resize it to a resolution that Flux Kontext prefers before continuing the generation. Again, I’ll show you later how that works. And right here is where you enable or disable the second reference image. And there you go, that’s pretty much how the entire workflow works. It is very easy to use; there is almost nothing you need to change, everything is already set up for you. Okay, so now let’s actually have some fun and see what Kontext can do in practice. First, I’m going to upload an image, like this woman in red right here, and then start writing my prompt. The first thing Kontext can do is make very small changes to an image. For example, if I want to change the woman’s hair color to blue, I write something like: change the color of the woman’s hair to vibrant blue while maintaining the same facial features, hairstyle, and expression. That last part is extremely important; it’s in the notes, and it’s one of those phrases you need to use so the model understands it should stay close to the original image. Next, click Run, and it gives you something like this. As you can see, it has pretty much kept the original hairstyle, but the color has changed from brown to vibrant blue, and even the small stray hairs on her head were completely transformed to the new color, which is really, really cool. Oh, and by the way, for those of you asking: this entire generation with the Nunchaku model on my 4090 was done in 11 seconds. Yeah, this is not a joke, it actually took 11 seconds. I don’t know what kind of black magic the Nunchaku team is using, but it’s absolutely incredible. Just insane. Okay, the second thing Kontext can do is add things to the image. For example, if I want to add glasses to the woman’s face, I can simply write: add a pair of modern medium-frame glasses to the woman’s face while maintaining the original composition. Now, if I click Run again, ten seconds later I get this image right here with a brand new pair of glasses while keeping the exact same composition. This is really cool, but sometimes, for some images, it will change them a little bit, like the ratio or the size will be slightly different. We still haven’t found a reason why it does that, so maybe a future LoRA can solve this issue. I’m going to talk about LoRAs later because they’re really, really cool, but for now this is what we’ve got, and it’s still really, really good.
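For anyone following along with the diffusers sketch from earlier instead of the ComfyUI workflow, the same kind of single-image edit looks roughly like the snippet below. It reuses the hypothetical pipe object from before; the file names are placeholders, the guidance value is a commonly suggested default rather than something from the video, and passing height and width explicitly plays the same role as the workflow’s custom-resolution toggle.

```python
# Sketch of a single-image edit, reusing `pipe` from the earlier loading snippet.
# File names and the guidance value are placeholders, not values from the video.
from diffusers.utils import load_image

reference = load_image("woman_in_red.png")  # hypothetical input image

edited = pipe(
    image=reference,
    prompt=(
        "Add a pair of modern medium-frame glasses to the woman's face "
        "while maintaining the original composition"
    ),
    guidance_scale=2.5,      # default suggested on the model card; tune to taste
    num_inference_steps=20,
    # Omit height/width to let the pipeline pick a resolution Kontext prefers
    # (the "false" setting in the workflow); pass them explicitly to force a
    # size, e.g. height=1024, width=1024 (multiples of 16 are safest).
).images[0]
edited.save("woman_with_glasses.png")
```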
Okay, so the next thing you can do is, of course, something like image-to-image and restyle the entire image. If I write something like “transform into Ghibli style”, very classic, it gives you something like this: a really, really cool Ghibli style image using the exact same position and the exact same environment, all without using a single LoRA. So yeah, this is really powerful, because it’s not even using ControlNet to keep the original position. And the other cool thing is that you can actually do it the other way around. That’s right: for example, if you input this image right here and say something like “transform into a real person photo”, well, we get something like this, which is really, really cool, especially because you can transform any image you want from any anime into a real person, like Luffy from One Piece, for example. Even a full body image of your character works, and it works really, really well. Even images that are very complex can be turned into real photos in basically a few seconds, so imagine all the possibilities. And yes, don’t worry, we’re going to talk about that later in the video, but not for now. Now, another thing you can do with Kontext is change the angle of the character while preserving all the original features. For example, if I say something like: change the angle of the woman so she’s facing directly toward the camera in a full frontal view while preserving her exact facial features, hairstyle, expression, lighting conditions, and maintaining the original photographic style and background setting. And we get something like this. As you can see, it has preserved the exact same background, and the only thing that has changed is that the woman is now looking at us in a full frontal view. I mean, this is really incredible. If you understand what we just did in about 10 seconds with a single prompt, you can probably imagine all the possibilities that come with that; I can think of a few off the top of my head right now. And this is just the beginning, we are not done. You can also change it so you only see the back of the woman, with a very similar prompt, and we get something like this. We could keep going, but I think you get the message, and that is really, really super cool. You can also change the position of the woman so that she’s sitting on the ground instead, and we get something like this, which is really, really good. Her legs are a little bit hidden in the ground, but still, this is really good, and you can change the pose as you wish, all while preserving the original features of the person. One thing we can also do is completely change the lighting and setting of the original image. For example, if I want to change it into a winter scene with snow while keeping the original composition and style, it can do that very easily. As you can see, there’s even some snow on the woman’s head and hair, on her shoulders, and on her clothes. And since I was afraid she would get a little cold, I just added a coat and scarf to keep her warm. Another very cool thing you can do is transform a very old black and white photo into a colored version, so if you have some family photos you want to modernize or repair, you can do so very easily with Kontext. For example, I have this very old photo portrait of a woman and I want to transform it into a colored version. Well, it’s very simple.
If I write something like: add realistic color to this photo while maintaining the original composition, and now click Run, it gives me something like this, which is really, really cool because, as you can see, the only thing that has changed is that we now have a colored photo instead of a black and white one. Everything else, the composition, the hair, is kept exactly the same, which is really, really cool. And it even works if your photo is damaged or very old, with a bunch of scratches or holes; Kontext can handle that easily as well. The only thing is that you need to change the prompt and say something like: restore and colorize this image, remove any scratches or imperfections, change it to a modern photo style with vibrant colors. Now if I click Run, there you go. As you can see, it works quite well: all the blemishes, holes, and imperfections were basically erased, and the photo was converted to color. So yeah, this is really, really cool. Another amazing thing Kontext can do is change the text of an image without modifying anything else. For example, let’s say you have a movie poster like The Godfather and I want to change the title from “The Godfather” to something like, I don’t know, “The Mafia”, while keeping the original font style. Well, I can simply do that. All you have to do is say: replace “The Godfather” with “The Mafia” while maintaining the same font style and color and keep the logo intact. And there you go. As you can see, it worked quite well, because not only did we keep the exact same font style, we also kept the Godfather logo. The only thing that changed is the text of the title, which is really, really cool. And you can do it with pretty much any image you want. Let’s say I want to change “Guardians of the Galaxy” to something like, I don’t know, “The Lame Avengers”. Same thing here, same prompt, just different text. And as you can see, it worked quite well; it is really good at keeping the exact same font style. I’m not sure how they do it, but it works really well. And really, with this, the sky is the limit; it’s up to you, just have fun. Also, another thing you can do is remove things from a photo. Let’s say, for example, you have taken this absolutely amazing photo of a street, but, just like me, you are not very fond of having lots of people in your photo and you want to erase them. Well, you can do that very easily. Once again, just input the prompt: remove all the people from the photo while maintaining the original composition. And bada boom, a few seconds later, no more people in the picture. As you can see, everything is preserved pretty much exactly the same way, except that now there are no people in the photo. I mean, this is insane, this is amazing. Before, something like this would take you maybe hours to do correctly, and now it is done in only a few seconds. That’s really incredible.
Okay, so although there are plenty of other things we can do with a single image, I think it’s time to show you that Kontext is far more powerful than simply editing one image, by introducing a second image into the mix. That’s right: Kontext can actually use multiple images together to create a new one. For example, let’s say I want this woman to hold a cake, and I have a very specific cake in mind. Well, I can input the second image right here, an image of this beautiful blue cake that says happy birthday on it with a single candle. If in my prompt I now put: make the woman in red hold the light blue cake from the photo on the left while maintaining the original composition, and if I enable the custom resolution and keep it the same as the original, then when I click Run we get something like this. As you can see, this workflow actually combines the two images into one, one on the left and one on the right, to create one big image. When you leave the switch latent on false, it will use that huge combined resolution, which is definitely not what we want because it’s way too big; hence why we activate this option and use a custom resolution right here. It then basically fuses the two images together, almost copy-pasting the cake into the hands of the woman in red. And you can do this with pretty much anything. If you own an e-commerce company, for example, and you need images of your product, you can use the exact same workflow to generate a bunch of images of different characters holding and using your product, which is really, really cool and makes this model very flexible. Oh, also, a quick note: this is me from the future while editing the video. One thing you can also do, and this is something I saw on Twitter from a user named Nomador, is to first make a very crude copy-and-paste composition in Photoshop, like on a white background, and then ask Kontext to replace the white background with a normal city background and make the woman actually hold the yellow bag that is present in the photo. By doing this very rough but very quick Photoshop composition, in a few seconds you can end up with an absolutely amazing image. So when I told you this model is fairly new and we’re still learning new things every single day, well, there you go, this is a very good example of that. Definitely try it out and let me know if it works for you.
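As a rough plain-Python analogue of what the native workflow does here, you can stitch the two references side by side yourself before handing the combined canvas to the pipeline. This is only a sketch of the idea under the same assumptions as the earlier snippets; the side-by-side pasting is my approximation of the workflow’s behaviour, and the file names are made up.

```python
# Sketch: approximate the two-reference workflow by pasting both images onto
# one wide canvas and editing that, reusing `pipe` from the earlier snippets.
# File names are placeholders; the stitching itself is only an approximation
# of what the ComfyUI workflow does internally.
from PIL import Image
from diffusers.utils import load_image

cake = load_image("blue_cake.png")
woman = load_image("woman_in_red.png")

# Match heights, then place the cake on the left and the woman on the right.
cake = cake.resize((int(cake.width * woman.height / cake.height), woman.height))
canvas = Image.new("RGB", (cake.width + woman.width, woman.height), "white")
canvas.paste(cake, (0, 0))
canvas.paste(woman, (cake.width, 0))

combined = pipe(
    image=canvas,
    prompt=(
        "Make the woman in red hold the light blue cake from the photo on "
        "the left while maintaining the original composition"
    ),
    # Force the output back to roughly the original portrait size instead of
    # the huge stitched resolution (multiples of 16 are safest).
    height=woman.height // 16 * 16,
    width=woman.width // 16 * 16,
).images[0]
combined.save("woman_with_cake.png")
```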
Now, obviously, instead of simply combining objects, you can also combine different characters together. For example, if I pull in these two characters and in my prompt I say something like: place both cute 3D characters together in one scene where they are hugging, then if this time I disable the custom resolution option and click Run, in the end we get something like this. As you can see, it has basically merged these two characters into one scene, into one big image, while preserving the likeness of both. And you can of course do this with different characters; for example, if I change these two images to something else, like Makima from Chainsaw Man and an image of yours truly, and click Run, we get something like this. I mean, so cute. Now I can finally know the feeling of being hugged by a woman. It really brings a tear to my eye. Oh, and also, don’t forget that if you do this, pay attention to the proportions. As you can see here, Makima is definitely bigger than me, which in real life shouldn’t be true because I’m 6 foot 1. I mean, come on now. So when you combine two images like this, you need to pay attention to the scale of the characters relative to one another, because Kontext obviously doesn’t know which character is bigger than the other; that part is really up to you. Now, you can of course change the prompt and make them, for example, shake hands instead of hugging. This also works very well; there you go, something like that looks pretty good. And yeah, as you can see, this works fairly well even when you have multiple characters interacting together. Now, speaking of multiple characters, I think it’s also time to talk about the experimental Kontext workflow, to understand the difference between these two workflows and which one you should use depending on what you want. One piece of bad news, unfortunately, is that the experimental workflow does not work with Nunchaku, so you’re either going to have to use the FP8 version if you have a good GPU or the GGUF versions if you have a slower GPU. One thing you’ll notice is that the experimental workflow is a little bit bigger, and that’s because in this workflow you can actually use negative prompts thanks to NAG, which stands for Normalized Attention Guidance. Basically, it allows the model to follow the prompt better and to use negative prompts, which is pretty cool. The other big difference is that here each image is encoded separately and then combined together into a bigger image. So, in practice, the difference between the two is this: the normal workflow, whether the FP8, Nunchaku, or GGUF variant, works better when you want to combine multiple characters together into a brand new scene, whereas the experimental workflow works better when you want to take information from a second image to modify the first one. For example, let’s say I want to put both of these characters together in a single image. If I write something like: the two men are playing poker at the Las Vegas casino, and I click Run, you can see that, with the exact same seed and the exact same prompt, the normal workflow merges these two characters very well into a coherent scene, whereas the experimental workflow kind of duplicates the Samuel L. Jackson character while completely forgetting the John Wick character we introduced in the second image. If we now change the second image to something different, say this very weird silly hat that we want to put on Samuel L. Jackson in the first image, I can just say: put the weird hat on his head, and do the same thing on the other side. As you can see, with the same exact prompt and the same exact seed, on the left we have a much better implementation of the hat from the second image onto the first image, whereas the normal workflow made a weird mix of the two; although the design of the weird hat is more or less preserved, you can clearly see that the image is not as good as the one made with the experimental workflow.
So, basically, TL;DR: the normal Kontext workflow is made to combine two characters together into a coherent scene, whereas the experimental workflow is made to combine elements from the second image onto the first image, and it also lets you use negative prompts. So yeah, definitely try this out yourself. Oh, and one more thing. I talked about the Nunchaku Kontext model being extremely fast and able to generate an image like this in under 12 seconds, but guess what, you can actually make it even faster. Yeah, this is not a joke. You can use a LoRA called FLUX.1 Turbo Alpha that lets you generate images with only eight steps. For this, you’re going to set its strength to one, and here, under steps, change 20 to eight. Now let’s generate, and in the end we get something like this: something that works completely fine and perfectly respects our prompt, but this time it was generated in only 8 seconds. So yeah, this is really incredibly fast. Now, obviously, you will see a little bit of degradation; it’s not going to be perfect, but for something generated this fast, it still looks pretty good. That said, if you want the best quality possible, I highly recommend keeping it at 20 steps and only using the LoRA when you want something a little bit different with the same exact seed; other than that, I usually just keep its strength at zero for this particular LoRA.
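If you are scripting with the diffusers sketch from earlier, a hedged equivalent of this speed-up is loading a turbo LoRA and dropping the step count to eight. The repository ID below is the one commonly associated with the FLUX.1 Turbo Alpha LoRA, but both the ID and its compatibility with the Kontext pipeline are assumptions to verify, not something demonstrated in the video.

```python
# Speed-up sketch: attach a turbo LoRA and cut the step count from 20 to 8,
# reusing `pipe` and `reference` from the earlier snippets. The repo id and
# its compatibility with the Kontext pipeline are assumptions, not confirmed
# in the video.
pipe.load_lora_weights("alimama-creative/FLUX.1-Turbo-Alpha")
pipe.fuse_lora(lora_scale=1.0)  # roughly "LoRA strength = 1" in the workflow

fast = pipe(
    image=reference,
    prompt="Transform into Ghibli style",
    num_inference_steps=8,  # down from the usual 20 steps
).images[0]
fast.save("ghibli_fast.png")

# To go back to full quality, remove the LoRA and use 20 steps again:
# pipe.unfuse_lora()
# pipe.unload_lora_weights()
```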
Now, speaking of LoRAs, what an amazing transition. I’m sure a lot of you, after seeing this absolutely cute and wholesome image, might be thinking: hmm, there are really a lot of clothes on these characters, if only they weren’t there, if you know what I mean. So, Aitrepreneur, is it possible? Can we do it with the Kontext model? And my answer to you is, of course, no. Actually, the base Kontext model is probably one of the most censored models I have ever seen, and they did it on purpose because of how easy it would be to do whatever you’re thinking of doing. However, I can hear you crying behind your screen and saying: wait, wait, wait, I saw the title of the video, Aitrepreneur, it said not safe for work. Did you lie? Did you clickbait us? Was it bait? Oh, well done, Aitrepreneur, this was really a master class in baiting. Oh, Aitrepreneur, you really are a master baiter. Oh god, I’m laughing at my own jokes. And to this I respond: well, actually, it wasn’t bait, because, as I tell you all the time, this is the power of open-source models. The reason open-source models are so cool is that you can train them. That’s right, you can train them, and one of the things you can train for this model is a LoRA. There is indeed already a LoRA capable of doing whatever you want to do. However, because of Black Forest Labs’ very weird licensing, it was actually deleted from Civitai. No worries, though, because not only did I keep a copy, but if you want to find the latest Kontext LoRAs, you can either go on Civitai and look for a safe-for-work Kontext LoRA, or, if you want something a little more spicy, go to huggingface.co; that is where you will find whatever LoRA you are looking for. Unfortunately, I cannot link you to that LoRA directly. You can just go there, search for Kontext, click on Models to see all the models available, maybe sort by most likes, and you will probably see something you might enjoy. Now, obviously, I cannot show you something like this on YouTube, but all the explanations will be on that LoRA’s page. And yes, I have tried it, and it does work very well. One thing that is also very good is that Kontext LoRAs are really, really powerful, and I think that training a LoRA for the Kontext model might be really cool, so let me know if you want a video on that and I might do it if enough people want it. But don’t worry, even if you don’t have a powerful GPU, or no GPU at all, you can still rent a GPU for a few cents an hour on a website like RunPod and run Kontext as if it were running on your local computer. So, if you don’t have a RunPod account already, you can click on the link in the description down below and create a new account. Then click on Pods and choose a GPU with at least 24 GB of VRAM, like a 4090 for example. Then click Deploy with a template, look for Aitrepreneur, and choose the ComfyUI Aitrepreneur template. Then click Edit Template, change the container disk from 10 GB to 80 GB, and click Set Overrides. Scroll down and click Deploy On-Demand. Once this is done, wait until the pod is ready. Once it’s ready, click Connect, then right here. Once you’re in, go inside the ComfyUI folder. If you are one of my Patreon supporters, this next step will be very easy for you because I prepared an easy one-click installer. Once you have the installer, drag and drop it right here, then click on the terminal icon, copy and paste the two lines of code that you will find in the Patreon post, and press Enter. This will download all the files you need to run Kontext. Simple as that, you don’t need to do anything else. Once everything is installed, go back and click on this link right here. Then click on the Manager and click Update All. You can also follow the update by going inside the logs folder, clicking on New Tab, then on the terminal icon, and copy-pasting this command line so that you can see exactly what is happening at all times; it makes it much easier to know when everything is ready. Next, click Restart, then refresh by pressing F5 on your keyboard. Once that’s done, drag and drop the Kontext workflow. You will see a message about missing Nunchaku nodes, which is fine; close it, click on the Manager button, click Install Missing Custom Nodes, select the missing node pack, and click Install. Once it’s done, click Restart, then Confirm, and when it asks you to refresh, confirm again. And there you go, the workflow is now ready to be used, as well as the Nunchaku Kontext workflow. Oh, and also, don’t forget that I provide priority support for my Patreon supporters, so if you have any questions whatsoever, do not hesitate to send me a DM and I will try to answer as soon as possible. And there you go.
This has been Flux Kontext, an absolutely amazing multimodal image AI model that can automatically edit images, combine multiple images together, and keep amazing character consistency, all inside one single model. It is seriously one of the best models we have ever gotten, and I personally had a lot of fun using it. So, that being said, definitely try it out yourself and have some fun. And there you have it, folks. Thank you guys so much for watching. Don’t forget to subscribe and smash the like button for the YouTube algorithm. Thank you also so much to my Patreon supporters for supporting my videos. You guys are absolutely awesome; you people are the reason why I’m able to make these videos. So, thank you so much, and I’ll see you guys next time. Bye-bye.
THE NEW MULTIMODAL EDIT KING IS HERE! KONTEXT! Craft jaw-dropping images and edits right on your local PC for FREE!
WHY SETTLE FOR TEXT-ONLY AI when you can have a 12-BILLION-PARAMETER powerhouse that understands your words and your reference images, keeps every face and layout locked in place, and lets you swap styles, objects, or backgrounds without touching the rest, all while being 100% open-source!
IN THIS VIDEO, I’ll show you how to INSTALL KONTEXT and UNLOCK its TRUE POWER with THE BEST SETTINGS inside my ULTIMATE ComfyUI workflow, perfect even for complete beginners!
What do YOU think about KONTEXT? Let me know in the comments below! 👇
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
SOCIAL MEDIA LINKS!
✨ Support my work on Patreon: https://www.patreon.com/aitrepreneur
⚔️ Join the Discord server: https://discord.gg/3ErYSdyUPt
🧠 My Second Channel THE MAKER LAIR: https://bit.ly/themakerlair
📧 Business Contact: theaitrepreneur@gmail.com
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
✨ PATREON LINK: https://www.patreon.com/aitrepreneur
RUNPOD: https://bit.ly/runpodAi
Manual Installation Guide: https://pastebin.com/jTGgsFbA
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
►► My PC & Favorite Gear:
i9-12900K: https://amzn.to/3L03tLG
RTX 3090 Gigabyte Vision OC : https://amzn.to/40ANaue
SAMSUNG 980 PRO SSD 2TB PCIe NVMe: https://amzn.to/3oBR0WO
Kingston FURY Beast 64GB 3200MHz DDR4 : https://amzn.to/3osdZ6z
iCUE 4000X – White: https://amzn.to/40y9BAk
ASRock Z690 DDR4 : https://amzn.to/3Amcxph
Corsair RM850 – White : https://amzn.to/3NbXlm2
Corsair iCUE SP120 : https://amzn.to/43WR9nW
Noctua NH-D15 chromax.Black : https://amzn.to/3H7qQSa
EDUP PCIe WiFi 6E Card Bluetooth : https://amzn.to/40t5Lsk
Recording Gear:
Rode PodMic : https://amzn.to/43ZvYlm
Rode AI-1 USB Audio Interface : https://amzn.to/3N6ybFk
Rode WS2 Microphone Pop Filter : https://amzn.to/3oIo9Qw
Elgato Wave Mic Arm : https://amzn.to/3LosH7D
Stagg XLR Cable – Black – 6M : https://amzn.to/3L5Fuue
FetHead Microphone Preamp : https://amzn.to/41TWQ4o
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
Special thanks to Royal Emperor:
– TNSEE
– RG
– Dean Newton
– fer v
– Cinemalecular
Thank you so much for your support on Patreon! You are truly a glory to behold! Your generosity is immense, and it means the world to me. Thank you for helping me keep the lights on and the content flowing. Thank you very much!
#kontext #ai #texttoimage #imagegeneration
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
WATCH MY MOST POPULAR VIDEOS:
RECOMMENDED WATCHING – All LLM & ChatGPT Video:
►► https://www.youtube.com/playlist?list=PLkIRB85csS_tqEhGFLAPIYuQ-Rwhg_kpQ
RECOMMENDED WATCHING – My “Tutorial” Playlist:
►► https://bit.ly/TuTPlaylist
Disclosure: Bear in mind that some of the links in this post are affiliate links and if you go through them to make a purchase I will earn a commission. Keep in mind that I link these companies and their products because of their quality and not because of the commission I receive from your purchases. The decision is yours, and whether or not you decide to buy something is completely up to you.