ComfyUI SDXL Refiner. SDXL-refiner-1.0: an improved version over SDXL-refiner-0.9.

 
SDXL-refiner-1.0 is an improved version of SDXL-refiner-0.9. Discover the ultimate workflow with ComfyUI in this hands-on tutorial, where I guide you through integrating custom nodes and refining images with advanced tools.

Download this workflow's JSON file and load it into ComfyUI to start your SDXL image-generation journey (workflow JSONs: sdxl_v0.9 and sdxl_v1.0, with refiner and MultiGPU support; best settings for Stable Diffusion XL 0.9 in ComfyUI). These were all done using SDXL and the SDXL Refiner and upscaled with Ultimate SD Upscale and 4x_NMKD-Superscale. If you want the prompt for a specific workflow, you can copy it from the prompt section of the image metadata of images generated with ComfyUI; keep in mind ComfyUI is pre-alpha software, so this format will change a bit. A sample workflow for ComfyUI is below - picking up pixels from SD 1.5 + SDXL Base already shows good results. Think of the quality of the SD 1.5 base model vs. its later iterations. Stability AI have released Control-LoRAs for SDXL, which are low-rank-parameter fine-tuned ControlNets for SDXL. Drag the image onto the ComfyUI workspace and you will see the workflow; it loads a basic SDXL workflow that includes a bunch of notes explaining things. Misconfiguring nodes can lead to erroneous conclusions, and it's essential to understand the correct settings for a fair assessment. Img2img works by loading an image, converting it to latent space with the VAE, and then sampling on it with a denoise lower than 1.0. The base/refiner step ratio is usually 8:2 or 9:1 (e.g., with 30 total steps, the base stops at step 25 and the refiner runs from step 25 to 30); this is the proper way to use the refiner. In A1111 you can instead go to img2img, choose batch, pick the refiner from the dropdown, and use the base-output folder as input and a second folder as output. You can run the SDXL 0.9 base model + refiner model combo, as well as perform a Hires. fix pass. The only important thing is that, for optimal performance, the resolution should be set to 1024x1024 or another resolution with the same number of pixels but a different aspect ratio. If the refiner doesn't know the LoRA's concept, any changes it makes might just degrade the result. For reference, this was run on a GTX 1060 with 6 GB VRAM and 16 GB RAM.
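The step arithmetic behind that base/refiner handoff is simple enough to sketch. The helper below is hypothetical (it is not a ComfyUI node); it turns a total step count and a base-model fraction into the `end_at_step`/`start_at_step` pair you would enter into the two advanced samplers.

```python
def split_steps(total_steps: int, base_fraction: float = 0.8) -> tuple[int, int]:
    """Return (base_end_at_step, refiner_start_at_step) for a base/refiner split.

    base_fraction is the share of steps run on the base model; 0.8 gives
    the common 8:2 split. The refiner then finishes the remaining steps
    without adding fresh noise.
    """
    switch = round(total_steps * base_fraction)
    # base denoises steps [0, switch); refiner finishes [switch, total_steps)
    return switch, switch
```

For 30 total steps, `split_steps(30)` switches at step 24; `split_steps(30, 5/6)` reproduces the 25-of-30 handoff from the example above.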
So if ComfyUI / the A1111 sd-webui can't read the image metadata, open the last image in a text editor to read the details. The result is a hybrid SDXL + SD 1.5 workflow (with an SD 1.5 refined model in the loop) and a switchable face detailer. There is a RunPod ComfyUI auto-installer that sets up SDXL automatically, including the refiner (AP Workflow 6.0; models and UI repo linked). If your non-refiner generations work fine but the refiner fails, the refiner checkpoint is most likely corrupted. Links and instructions in the GitHub readme files have been updated accordingly. Thanks for this, a good comparison. It is not AnimateDiff but a different structure entirely; however, Kosinkadink, who makes the AnimateDiff ComfyUI nodes, got it working, and I worked with one of the creators to figure out the right settings to get good outputs. The workflow should generate images first with the base and then pass them to the refiner for further refinement. These are examples demonstrating how to do img2img. Today we'll cover more advanced node-graph logic for SDXL in ComfyUI: first, style control; second, how to connect the base and refiner models; third, regional prompt control; and fourth, regional control with multi-pass sampling. Once the logic is right you can wire the nodes however you like, so this video only covers the overall structure and the key points of the build. The CLIP Text Encode SDXL (Advanced) node provides the same settings as its non-SDXL version. Install your SD 1.5 model (directory: models/checkpoints), install your LoRAs (directory: models/loras), then restart. Drag & drop the .json file onto the ComfyUI workspace. But if SDXL wants an 11-fingered hand, the refiner gives up. Great job - I've tried using the refiner together with the ControlNet canny Control-LoRA, but it doesn't work for me; it only picks up the first step. If you want to use Stable Diffusion and image-generative AI models for free, but you can't pay for online services or don't have a strong computer, read on. The refiner is only good at refining the noise still left over from the base generation, and it will give you a blurry result if you try to push it beyond that.
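The reason a plain text editor works is that ComfyUI stores the prompt and workflow as tEXt chunks inside the saved PNG. A minimal sketch of pulling those chunks out with only the standard library (the exact chunk keywords ComfyUI writes, such as "prompt" and "workflow", are an assumption here; CRCs are not validated):

```python
import struct

PNG_SIG = b"\x89PNG\r\n\x1a\n"

def png_text_chunks(data: bytes) -> dict[str, str]:
    """Extract uncompressed tEXt chunks (keyword -> value) from raw PNG bytes."""
    if not data.startswith(PNG_SIG):
        raise ValueError("not a PNG file")
    chunks, pos = {}, len(PNG_SIG)
    while pos + 8 <= len(data):
        length, ctype = struct.unpack(">I4s", data[pos:pos + 8])
        if ctype == b"tEXt":
            # tEXt body is keyword, a NUL separator, then latin-1 text
            keyword, _, text = data[pos + 8:pos + 8 + length].partition(b"\x00")
            chunks[keyword.decode("latin-1")] = text.decode("latin-1")
        pos += 12 + length  # 4 length + 4 type + data + 4 CRC
    return chunks
```

Run it over the bytes of a ComfyUI-saved PNG and feed the recovered JSON back into your tooling.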
It compromises the subject's "DNA", even with just a few sampling steps at the end. In the two-model setup that SDXL uses, the base model is good at generating original images from 100% noise, and the refiner is good at adding detail at around 0.35 denoise, i.e. with roughly 35% of the noise left in the generation. The second KSampler must not add noise. I'm going to try to get a background-fix workflow going; the blurry backgrounds are starting to bother me. Restart ComfyUI. It might come in handy as a reference. Prior to XL, I'd already had some experience using tiled upscaling. It makes it really easy to generate an image again with a small tweak, or just to check how you generated something. Then, inside the browser, click "Discover" to browse to the Pinokio script. SD.Next support; it's a cool opportunity to learn a different UI anyway. SDXL advanced: how to generate high-quality images in different art styles. SDXL VAE: optional, as there is a VAE baked into the base and refiner models, but it's nice to have it separate in the workflow so it can be updated or changed without needing a new model. With a 1080x720 resolution and specific samplers/schedulers, I managed to get a good balance and good image quality, with the first (base-model) pass kept fairly low. You are probably using ComfyUI, but AUTOMATIC1111's Hires. fix covers the same ground. For reference, I'm appending all available styles to this question. The SDXL base checkpoint can be used like any regular checkpoint in ComfyUI. AP Workflow v3 includes the following functions: SDXL Base+Refiner, based on Sytan's SDXL 1.0 workflow. Note that in ComfyUI, txt2img and img2img are the same node. I think this is the best-balanced setup I have found. The prompt and negative prompt for the new images. Place LoRAs in the folder ComfyUI/models/loras. Stability AI has released Stable Diffusion XL (SDXL) 1.0.
SDXL Refiner: the refiner model, a new feature of SDXL. Grab SDXL 1.0 base and have lots of fun with it. After an entire weekend reviewing the material, I think (I hope!) I got it. I cannot use SDXL base + SDXL refiner together, as I run out of system RAM. Exciting news! Introducing Stable Diffusion XL 1.0 + LoRA + Refiner with ComfyUI + Google Colab for FREE. Reduce the denoise ratio. I run them through the 4x_NMKD-Siax_200k upscaler, for example. Step 1: download SDXL v1.0. Prerequisites follow. Try DPM++ 2S a Karras, DPM++ SDE Karras, DPM++ 2M Karras, Euler a, and DPM adaptive. Inpainting. Use the "Load" button on the menu. I don't think we have to argue about the refiner; in some cases it only makes the picture worse. Special thanks to @WinstonWoof and @Danamir for their contributions! SDXL Prompt Styler: minor changes to output names and the printed log prompt. ComfyUI SDXL examples. 15:49 How to disable the refiner or individual nodes of ComfyUI. Updating ControlNet. The SDXL base model performs significantly better than the previous variants, and the model combined with the refinement module achieves the best overall performance. 3) Not at the moment, I believe. Launch with python launch.py --xformers. Since SDXL 1.0 came out, it has been warmly received by many users. Two samplers (base and refiner), and two Save Image nodes (one for base and one for refiner). In my experience, t2i-adapter_xl_openpose and t2i-adapter_diffusers_xl_openpose work with ComfyUI; however, both support body pose only, and not hand or face keypoints. AP Workflow features: SD 1.5 and Hires. fix, IPAdapter, Prompt Enricher via local LLMs (and OpenAI), a new Object Swapper + Face Swapper, FreeU v2, XY Plot, ControlNet and Control-LoRAs, SDXL Base + Refiner, Hand Detailer, Face Detailer, Upscalers, and ReVision. Using the refiner is highly recommended for best results.
My advice: have a go and try it out with ComfyUI. It's unsupported, but it's likely to be the first UI that works with SDXL when it fully drops on the 18th. The base model doesn't use aesthetic-score conditioning - it tends to break prompt following a bit (the LAION aesthetic score values are not the most accurate, and alternative aesthetic scoring methods have limitations of their own), and so the base wasn't trained on it, to enable it to follow prompts as accurately as possible. SDXL Offset Noise LoRA; Upscaler. On how to use the refiner: I compared this way (from one of the similar workflows I found) with the img2img approach - the quality is very similar; this way is slightly faster, but you can't save the image without the refiner pass (well, of course you can, but it'll be slower and more spaghettified). Updated ComfyUI workflow: SDXL (Base+Refiner) + XY Plot + Control-LoRAs + ReVision + ControlNet XL OpenPose + Upscaler. It starts at 1280x720 and generates 3840x2160 out the other end. The refiner does add detail, but it also smooths out the image. sdxl_v0.9_comfyui_colab (1024x1024 model): please use with refiner_v0.9. The test was done in ComfyUI with a fairly simple workflow, to not overcomplicate things. Play around with different samplers and different amounts of base steps (30, 60, 90, maybe even higher). All models will include additional metadata that makes it super easy to tell which version it is, whether it's a LoRA, which keywords to use with it, and whether a LoRA is compatible with SDXL 1.0. AP Workflow 3. SDXL includes a refiner model specialized in denoising low-noise-stage images to generate higher-quality images from the base model. (Early and not finished.) Here are some more advanced examples: "Hires Fix", a.k.a. two-pass txt2img. After testing it for several days, I have decided to temporarily switch to ComfyUI, for the following reasons. Download the SDXL models.
I feel like we are at the bottom of a big hill with Comfy, and the workflows will continue to rapidly evolve. I've switched from A1111 to ComfyUI for SDXL; a 1024x1024 base + refiner run takes around 2 minutes. Why so slow? In ComfyUI the speed was approximately 2-3 it/s for a 1024x1024 image. You'll need to download both the base and the refiner models: SDXL-base-1.0 and SDXL-refiner-1.0. WAS Node Suite. First, make sure you are using a recent A1111 version. Install ComfyUI and SDXL 0.9 on Google Colab. See "Refinement Stage" in section 2. Searge-SDXL: EVOLVED v4. To quote them: the drivers after that introduced the RAM + VRAM sharing tech, but it creates a massive slowdown when you go above ~80% VRAM usage. Part 3 (link) - we added the refiner for the full SDXL process. It works with bare ComfyUI (no custom nodes needed). It is highly recommended to use a 2x upscaler in the Refiner stage, as 4x will slow the refiner to a crawl on most systems, for no significant benefit (in my opinion). The refiner is entirely optional and could be used equally well to refine images from sources other than the SDXL base model. In ComfyUI this can be accomplished with the output of one KSampler node (using SDXL base) leading directly into the input of another KSampler node (using the SDXL refiner). VAE selector (needs a VAE file; download the SDXL BF16 VAE from here, plus a VAE file for SD 1.5). Yesterday I woke up to the Reddit post "Happy Reddit Leak day" by Joe Penna. Instead of piping the latent straight through, you have to VAE-decode it to an image, then VAE-encode it back to a latent image with the VAE from SDXL, and then upscale. Before you can use this workflow, you need to have ComfyUI installed. I also used a latent upscale stage. IDK what you are doing wrong to be waiting 90 seconds. Locate this file, then follow this path: SDXL Base+Refiner. Reload ComfyUI. You can try the base model or the refiner model for different results.
🧨 Diffusers examples. This one is the neatest setup of the bunch. It saves a lot of disk space. SDXL's refiner has roughly 6B parameters, making the combined pipeline one of the largest open image generators today. My research organization received access to SDXL. Upscale model: needs to be downloaded into ComfyUI/models/upscale_models; a recommended one is 4x-UltraSharp, downloadable from here. These are what the ports map to in the template we're using: [Port 3000] AUTOMATIC1111's Stable Diffusion Web UI (for generating images), [Port 3010] Kohya SS (for training), [Port 3010] ComfyUI (optional, for generating images); the checkpoints are .safetensors files. Custom nodes and workflows for SDXL in ComfyUI (SD 1.x and SD 2.x are covered too). Hi all - as per this thread, it was identified that the VAE on release had an issue that could cause artifacts in the fine details of images. 20:57 How to use LoRAs with SDXL. To simplify the workflow, set up a base generation and a refiner refinement using two Checkpoint Loaders. I recommend trying to keep the same fractional relationship, so 13/7 should keep it good. You can disable this in the notebook settings. Yesterday, I came across a very interesting workflow that uses the SDXL base model, any SD 1.5 model, and the refiner checkpoint sd_xl_refiner_1.0. It will only make bad hands worse. The only important thing is that, for optimal performance, the resolution should be set to 1024x1024 or another resolution with the same number of pixels but a different aspect ratio. Part 3 - we added the refiner for the full SDXL process. The refiner model's path (shown in blue) is set the same way. Place VAEs in the folder ComfyUI/models/vae. You really want to follow a guy named Scott Detweiler. You can load these images in ComfyUI to get the full workflow.
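The "same pixel count, different aspect ratio" rule can be computed mechanically. Below is a small sketch (my own helper, not part of ComfyUI) that picks a width and height near the 1024x1024 pixel budget for a given aspect ratio, snapped to multiples of 64 as SDXL resolutions conventionally are:

```python
def sdxl_resolution(aspect: float, budget: int = 1024 * 1024,
                    multiple: int = 64) -> tuple[int, int]:
    """Pick a (width, height) near `budget` total pixels for a given
    width/height aspect ratio, snapped to a multiple of 64."""
    height = (budget / aspect) ** 0.5
    width = height * aspect

    def snap(value: float) -> int:
        # round to the nearest multiple of 64, never below one tile
        return max(multiple, round(value / multiple) * multiple)

    return snap(width), snap(height)
```

A square request stays at 1024x1024, while a 16:9 request lands on 1344x768, one of the aspect-ratio buckets commonly quoted for SDXL.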
The SDXL workflow includes wildcards, base+refiner stages, the Ultimate SD Upscaler (using a 1.5 model), separate prompts for the two text encoders, and LoRA support. License: SDXL 0.9. Update ComfyUI first. Use 0.75 before the refiner KSampler. This is an SD 1.5 + SDXL Refiner workflow, but the beauty of this approach is that these models can be combined in any sequence - you could generate the image with SD 1.5 and refine with SDXL. I just downloaded the base model and the refiner, but when I try to load the model it can take upwards of 2 minutes, and rendering a single image can take 30 minutes, and even then the image looks very, very weird. 23:06 How to see which part of the workflow ComfyUI is processing. An SDXL base model goes in the upper Load Checkpoint node. For me, this applied to both the base prompt and the refiner prompt. It also lets you specify the start and stop step, which makes it possible to use the refiner as intended: in ComfyUI, the output of one KSampler node (using SDXL base) leads directly into the input of another KSampler node (using the SDXL refiner). The reason is that ComfyUI loads the entire SD XL 0.9 refiner model into RAM. Or: how to make the refiner/upscaler passes optional. SDXL 1.0 generates 1024x1024-pixel images by default. Compared with earlier models, it improves lighting and shadow handling, and it does a better job on things image-generation AIs usually struggle with, such as hands, text inside images, and compositions with three-dimensional depth. Refiners should have at most half the steps that the generation has. @bmc-synth: you can use the base and/or refiner to further process any kind of image if you go through img2img (out of latent space) with proper denoising control. After the git clone, restart ComfyUI completely. Use the SDXL refiner as img2img and feed it your pictures. This is more of an experimentation workflow than one that will produce amazing, ultrarealistic images. Then this is the tutorial you were looking for.
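In the ComfyUI API's JSON "prompt" format, that base-to-refiner handoff boils down to two KSamplerAdvanced nodes sharing a step count and a switch step. A minimal sketch follows; the node IDs, seed, and the 0.8 split are my own choices, while the input names match the stock KSamplerAdvanced widgets:

```python
TOTAL_STEPS, SWITCH = 30, 24  # 8:2 base/refiner split

def sampler(model: str, positive: str, negative: str, latent: str,
            start: int, end: int, add_noise: bool, leftover: bool) -> dict:
    """Build one KSamplerAdvanced node in ComfyUI API prompt format."""
    return {
        "class_type": "KSamplerAdvanced",
        "inputs": {
            "model": [model, 0], "positive": [positive, 0],
            "negative": [negative, 0], "latent_image": [latent, 0],
            "add_noise": "enable" if add_noise else "disable",
            "noise_seed": 42, "steps": TOTAL_STEPS, "cfg": 8.0,
            "sampler_name": "euler", "scheduler": "normal",
            "start_at_step": start, "end_at_step": end,
            "return_with_leftover_noise": "enable" if leftover else "disable",
        },
    }

prompt = {
    # base denoises steps 0..24 and hands over a still-noisy latent
    "10": sampler("4", "6", "7", "5", 0, SWITCH,
                  add_noise=True, leftover=True),
    # refiner finishes steps 24..30 without adding fresh noise
    "11": sampler("12", "13", "14", "10", SWITCH, TOTAL_STEPS,
                  add_noise=False, leftover=False),
}
```

Note how the refiner sampler's `latent_image` references the base sampler's output ("10"): the still-noisy latent is passed along, never decoded in between.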
Install SDXL (directory: models/checkpoints), install a custom SD 1.5 model if you want one, and restart. Download the SDXL 1.0 base model .safetensors file (sdxl_base_pruned_no-ema is one variant). Voldy still has to implement that properly, last I checked. I'm running SDXL 0.9 in ComfyUI (I would prefer to use A1111) on an RTX 2060 laptop with 6 GB VRAM, and it takes about 6-8 minutes for a 1080x1080 image with 20 base steps and 15 refiner steps. Edit: I'm using Olivio's first setup (no upscaler). Edit: after the first run I get a 1080x1080 image (including the refining) with "Prompt executed in 240.0s". He puts out marvelous ComfyUI stuff, but behind a paid Patreon and YouTube plan. This seems to give some credibility and license to the community to get started. Learn how to download and install Stable Diffusion XL 1.0. I've had some success using SDXL base as my initial image generator and then going entirely SD 1.5 from there. An example workflow can be loaded by downloading the image and drag-and-dropping it onto the ComfyUI home page. NOTICE: all experimental/temporary nodes are in blue. You can use the base model by itself, but for additional detail you should add the refiner. Drag and drop the *.json workflow file. We all know the SD web UI and ComfyUI - those are great tools for people who want to take a deep dive into the details, customize workflows, use advanced extensions, and so on. Drag one of the SD 1.5 refiner tutorial images into your ComfyUI browser and the workflow is loaded. The workflow should generate images first with the base and then pass them to the refiner for further refinement.
SDXL Base 1.0 Alpha + SDXL Refiner 1.0. Say you want to generate an image in 30 steps. To use the refiner model, navigate to the image-to-image tab within AUTOMATIC1111, or use ComfyUI. SDXL Models 1.0. However, with the new custom node, I've got it working. A number of official and semi-official "workflows" for ComfyUI were released during the SDXL 0.9 preview period. Comfyroll. With SDXL I often get the most accurate results with ancestral samplers. These configs require installing ComfyUI. Today I want to compare the performance of four different open diffusion models in generating photographic content, among them SDXL 1.0. The test was done in ComfyUI with a fairly simple workflow, to not overcomplicate things. It now includes: the SDXL 1.0 base and refiner, automatic calculation of the steps required for both the base and the refiner models, quick selection of image width and height based on the SDXL training set, an XY Plot, and ControlNet with the XL OpenPose model (released by Thibaud Zamora). To use the refiner, you must enable it in the "Functions" section and set the "refiner_start" parameter (in the "Parameters" section) to a value between 0 and 0.99. SDXL places very heavy emphasis on the beginning of the prompt, so put your main keywords first. Otherwise, I would say make sure everything is updated - if you have custom nodes, they may be out of sync with the base ComfyUI version. SDXL 1.0 base checkpoint; SDXL 1.0 refiner checkpoint; VAE. So I used a prompt to turn him into a K-pop star. I was able to find the files online. SDXL 1.0 with both the base and refiner checkpoints. A CheckpointLoaderSimple node loads the SDXL refiner. These ports will allow you to access different tools and services. I also automated the split of the diffusion steps between the base and the refiner. I tried two checkpoint combinations but got the same results, starting from sd_xl_base_0.9.
A detailed look at the stable SDXL ComfyUI workflow - the internal AI-art tool I use at Stability. Next, we need to load our SDXL base model (recolor the node if you like). Once our base model is loaded, we also need to load a refiner, but we'll handle that later - no rush. We also need to do some processing on the CLIP output from SDXL. The chart above evaluates user preference for SDXL (with and without refinement) over SDXL 0.9. SDXL VAE. If you don't need LoRA support, separate seeds, CLIP controls, or Hires. fix, you can just grab the basic v1.0 refiner workflow. Part 2 (link) - we added an SDXL-specific conditioning implementation and tested the impact of the conditioning parameters on the generated images. Copy the sd_xl_base_1.0 checkpoint into place. I don't get good results with the upscalers either when using SD 1.5 models. For example, 896x1152 or 1536x640 are good resolutions. The LCM update brings SDXL and SSD-1B to the game. Example prompt: "photo of a male warrior, modelshoot style, (extremely detailed CG unity 8k wallpaper), full shot body photo of the most beautiful artwork in the world, medieval armor, professional majestic oil painting by Ed Blinkey, Atey Ghailan, Studio Ghibli, by Jeremy Mann, Greg Manchess, Antonio Moro, trending on ArtStation, trending on CGSociety, Intricate, High Detail". 3. Download sdxl-0.9. Other than that, the same rules of thumb apply to AnimateDiff-SDXL as to AnimateDiff. SD 1.5 + SDXL Refiner Workflow (r/StableDiffusion). Table of contents. Now that you have been lured into the trap by the synthography on the cover, welcome to my alchemy workshop! Install this, restart ComfyUI, click "Manager" then "Install missing custom nodes", restart again, and it should work. GTM ComfyUI workflows, including SDXL and SD 1.5. In summary, it's crucial to make valid comparisons when evaluating SDXL with and without the refiner. With ComfyUI it took 12 seconds and 1 minute 30 seconds respectively, without any optimization. A dark and stormy night, a lone castle on a hill, and a mysterious figure lurking in the shadows.
ComfyUI allows setting up the entire workflow in one go, saving a lot of configuration time compared to running the base and refiner separately. I found it very helpful. ComfyUI Manager: a plugin for ComfyUI that helps detect and install missing plugins. SDXL two-staged denoising workflow. Restart ComfyUI. SDXL 1.0 on ComfyUI, with its roughly 6B-parameter refiner. I tried Fooocus yesterday and I was getting 42+ seconds for a 'quick' generation (30 steps). Compare the outputs to find what works. I was having issues with the refiner in ComfyUI. I described my idea in one of the posts, and Apprehensive_Sky892 showed me it's already working in ComfyUI. A couple of the images have also been upscaled. I mean, it's also possible to use it like that, but the proper, intended way to use the refiner is a two-step text-to-img. So I created this small test. There's a high likelihood that I am misunderstanding how to use both in conjunction within Comfy. 24:47 Where is the ComfyUI support channel. I'd like to share Fooocus-MRE (MoonRide Edition), my variant of the original Fooocus (developed by lllyasviel), a new UI for SDXL models. Stability AI announced SDXL 1.0, so here is how to use the model on Google Colab. (Note, 2023/09/27: switched the instructions for the other models to a Fooocus base - BreakDomainXL v05g, blue pencil-XL-v0.x.) It MAY occasionally fix things. I trained a LoRA model of myself using the SDXL 1.0 base. A question about SDXL in ComfyUI: loading LoRAs for the refiner model. Learn SDXL 1.0 with ComfyUI's Ultimate SD Upscale custom node in this illuminating tutorial. Workflows included.
But I'll add to that: currently, only people with 32 GB of RAM and a 12 GB graphics card are going to make anything in a reasonable timeframe if they use the refiner. SDXL pairs a 3.5B-parameter base model with a roughly 6B-parameter refiner. In this series, we will start from scratch - an empty ComfyUI canvas - and, step by step, build up SDXL workflows. You can't just pipe the latent from SD 1.5 into SDXL. v1.1 is up: added settings to use the model's internal VAE and to disable the refiner. I found that many novice users don't like the ComfyUI node frontend, so I decided to convert the original SDXL workflow for ComfyBox. But if I run the base model (creating some images with it) without activating that extension, or simply forget to select the refiner model and activate it LATER, it very likely hits OOM (out of memory) when generating images. Especially on faces. (Update 2: added Emi.) set COMMANDLINE_ARGS=--medvram --no-half-vae --opt-sdp-attention. Text2Image with SDXL 1.0. SEGSDetailer performs detailed work on SEGS without pasting it back onto the original image. It's doing a fine job, but I am not sure this is the best approach. A summary of how to run SDXL in ComfyUI. Run ComfyUI with the Colab iframe (use only in case the previous way, with localtunnel, doesn't work); you should see the UI appear in an iframe. SDXL Base + SD 1.5 refiner. For inpainting with SDXL 1.0 in ComfyUI, I've come across three different methods that seem to be commonly used: the base model with a Latent Noise Mask, the base model using InPaint VAE Encode, and the UNET "diffusion_pytorch" inpaint-specific model from Hugging Face. Embeddings/Textual Inversion. ComfyUI shared workflows are also updated for SDXL 1.0. Upscale the result.
This uses more steps, has less coherence, and also skips several important factors in between. I recommend you do not use the same text encoders as 1.5. Compatible with: StableSwarmUI (developed by Stability AI; uses ComfyUI as its backend, but is in an early alpha stage). SDXL 1.0. I will provide workflows for models you find on CivitAI, and also for SDXL 0.9. In this episode we're opening a new topic: another way of using SD, the node-based ComfyUI. Regular viewers of the channel know I've always used the web UI for demos and explanations. Fooocus-MRE. I don't want it to get to the point where people are just making models that are designed around looking good at displaying faces. I just uploaded the new version of my workflow. To experiment with it, I re-created a similar workflow to my SeargeSDXL one. This is the complete form of SDXL. The idea is that you are using the model at the resolution it was trained on. On the ComfyUI GitHub, find the SDXL examples and download the image(s). Hires. fix. You can type in text tokens, but it won't work as well. The workflow takes an SD 1.5 inpainting model's output and separately processes it (with different prompts) with both the SDXL base and refiner models. Such a massive learning curve for me to get my bearings with ComfyUI. Thanks to this little experiment I also discovered that a RAM stick in my computer had just died, leaving me with only 16 GB.