ComfyUI Workflow Examples
Intermediate. Here is a workflow for using it: Example. [2024/07/16] 🌩️ BizyAir ControlNet Union SDXL 1.0 node is released.

Always refresh your browser and click Refresh in the ComfyUI window after adding models or custom nodes. To load a workflow, simply click the Load button on the right sidebar and select the workflow file. ComfyUI workflow with all nodes connected. The workflow is the same as the one above but with a different prompt.

Upscale Model Examples: put them in the models/upscale_models folder, then use the UpscaleModelLoader node to load them and the ImageUpscaleWithModel node to use them.

The manual way is to clone this repo into the ComfyUI/custom_nodes folder. There should be no extra requirements needed. This example serves the ComfyUI inpainting example workflow, which "fills in" part of an image.

Unlike other Stable Diffusion tools, which have basic text fields where you enter values and information for generating an image, a node-based interface is different in the sense that you have to create nodes and build a workflow to generate images. Fully supports SD1.x, SD2.x and SDXL; asynchronous queue system. Note that in ComfyUI txt2img and img2img are the same node.

This is the input image that will be used in this example. Here is an example using a first pass with AnythingV3 with the ControlNet, and a second pass without the ControlNet with AOM3A3 (AbyssOrangeMix 3) using their VAE.

This guide is about how to set up ComfyUI on your Windows computer to run Flux. Aug 26, 2024 · Hello, fellow AI enthusiasts! 👋 Welcome to our introductory guide on using FLUX within ComfyUI. By examining key examples, you'll gradually grasp the process of crafting your own workflows.

Quickstart. Then I created two more sets of nodes, from Load Images to the IPAdapters, and adjusted the masks so that they would be part of a specific section in the whole image.
In this post we'll show you some example workflows you can import and get started with straight away. Edit models, also called InstructPix2Pix models, are models that can be used to edit images using a text prompt. XNView is a great, light-weight and impressively capable file viewer. Put the text encoder .safetensors file in your ComfyUI/models/clip/ directory. You can then load or drag the following image in ComfyUI to get the workflow: Flux Schnell.

We recommend: trying it with your favorite workflow and making sure it works; writing code to customise the JSON you pass to the model, for example changing seeds or prompts.

I then recommend enabling Extra Options -> Auto Queue. Create your ComfyUI workflow app and share it with your friends. All LoRA flavours (Lycoris, loha, lokr, locon, etc.) are used this way. Below is an example with the reference image on the left. Some workflows (such as the Clarity Upscale workflow) include custom nodes that aren't included in base ComfyUI. It covers the following topics: Introduction to Flux. Put the GLIGEN model files in the ComfyUI/models/gligen directory. This repo contains examples of what is achievable with ComfyUI. The nodes interface can be used to create complex workflows, like one for Hires fix or much more advanced ones.

For some workflow examples, and to see what ComfyUI can do, you can check out the default startup workflow of ComfyUI. Before we run our default workflow, let's make a small modification to preview the generated images without saving them: right-click on the Save Image node, then select Remove. Aug 16, 2023 · Download the JSON workflow. ComfyUI also has a mask editor that can be accessed by right-clicking an image in the LoadImage node and choosing "Open in MaskEditor".
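The "customise the JSON" step above can be sketched in a few lines of Python. This is a hedged sketch: the node ids ("3", "6") and the tiny workflow dict are placeholders following ComfyUI's API-format convention, so look up the real ids in your own exported JSON.

```python
import json

# A tiny stand-in for an exported API-format workflow (real files are larger).
workflow = {
    "3": {"class_type": "KSampler", "inputs": {"seed": 0, "steps": 20}},
    "6": {"class_type": "CLIPTextEncode", "inputs": {"text": "a photo of a cat"}},
}

def customise(wf, seed=None, prompt=None):
    """Return a copy of the workflow with the seed and/or prompt overridden."""
    wf = json.loads(json.dumps(wf))  # deep copy via a JSON round-trip
    for node in wf.values():
        if node["class_type"] == "KSampler" and seed is not None:
            node["inputs"]["seed"] = seed
        if node["class_type"] == "CLIPTextEncode" and prompt is not None:
            node["inputs"]["text"] = prompt
    return wf

new_wf = customise(workflow, seed=42, prompt="a watercolor fox")
```

Because the copy is deep, the original workflow dict stays untouched, so you can queue many seed variations from one template.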
Some commonly used blocks are loading a checkpoint model, entering a prompt, specifying a sampler, etc. Hunyuan DiT is a diffusion model that understands both English and Chinese. Put it under ComfyUI/input. These are examples demonstrating how to use LoRAs. Serve a ComfyUI workflow as an API. ComfyUI workflows for Stable Diffusion, offering a range of tools from image upscaling to merging.

A good way of using unCLIP checkpoints is to use them for the first pass of a two-pass workflow and then switch to a SD1.x model for the second pass. Download the model weights from this page and save the file as t5_base.safetensors. Aug 21, 2024 · GLIGEN Examples. Open the YAML file in a code or text editor.

Aug 29, 2024 · You can load these images in ComfyUI to get the full workflow. The SDXL base checkpoint can be used like any regular checkpoint in ComfyUI. For these examples I have renamed the files by adding stable_cascade_ in front of the filename, for example: stable_cascade_canny.safetensors. EZ way: just download this one and run it like another checkpoint ;) https://civitai.com/models/283810

This is an example of merging 3 different checkpoints using simple block merging, where the input, middle and output blocks of the unet can each have a different ratio. Aug 29, 2024 · You can then load this image in ComfyUI to get the workflow that shows how to use the LCM SDXL lora with the SDXL base model. The important parts are to use a low cfg, use the "lcm" sampler and the "sgm_uniform" or "simple" scheduler.

Lol, that's silly; it's a chance to learn stuff you don't know, and that's always worth a look. Here is an example: you can load this image in ComfyUI to get the workflow. Here's a list of example workflows in the official ComfyUI repo. To reproduce this workflow you need the plugins and loras shown earlier. Some loras have been renamed to lowercase; otherwise they are not sorted alphabetically.
Download & drop any image from the website into ComfyUI, and ComfyUI will load that image's entire workflow. Stable Cascade supports creating variations of images using the output of CLIP vision. Simply head to the interactive UI, make your changes, export the JSON, and redeploy the app. These are examples demonstrating how to do img2img.

Easy starting workflow: these versatile workflow templates have been designed to cater to a diverse range of projects, making them compatible with any SD1.5 checkpoint model. Flux Schnell is a distilled 4-step model. Basic Vid2Vid 1 ControlNet - this is the basic Vid2Vid workflow updated with the new nodes. The image below is the empty workflow with the Efficient Loader and KSampler (Efficient) nodes added and connected to each other.

All the images in this repo contain metadata, which means they can be loaded into ComfyUI with the Load button (or dragged onto the window) to get the full workflow that was used to create the image. By combining the visual elements of a reference image with the creative instructions provided in the prompt, the FLUX Img2Img workflow creates stunning results. Here's a simple workflow in ComfyUI to do this with basic latent upscaling: Non-latent Upscaling. Download hunyuan_dit_1.safetensors and put it in your ComfyUI/models/checkpoints directory.

The lower the value, the more it will follow the concept. Inpainting a cat with the v2 inpainting model; inpainting a woman with the v2 inpainting model. It also works with non-inpainting models. Diverse options: a myriad of workflows from the ComfyUI official repository are at your fingertips. Contribute to comfyanonymous/ComfyUI_examples development by creating an account on GitHub. A repository of well-documented, easy-to-follow workflows for ComfyUI. You can load these images in ComfyUI to get the full workflow. Once loaded, go into the ComfyUI Manager and click Install Missing Custom Nodes.
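The drag-and-drop trick above works because ComfyUI embeds the workflow JSON in the image's metadata; for PNGs it lands in text chunks (typically under the keys "workflow" and "prompt"). A minimal, hedged sketch of reading those chunks with only the standard library - it ignores CRCs and compressed/iTXt chunks, which a robust reader would also handle:

```python
import json
import struct

PNG_SIGNATURE = b"\x89PNG\r\n\x1a\n"

def png_text_chunks(data: bytes) -> dict:
    """Return {keyword: text} for every tEXt chunk in a PNG byte string."""
    if data[:8] != PNG_SIGNATURE:
        raise ValueError("not a PNG file")
    out, pos = {}, 8
    while pos + 8 <= len(data):
        # Each chunk: 4-byte big-endian length, 4-byte type, data, 4-byte CRC.
        length, ctype = struct.unpack(">I4s", data[pos:pos + 8])
        body = data[pos + 8:pos + 8 + length]
        if ctype == b"tEXt":
            keyword, _, text = body.partition(b"\x00")
            out[keyword.decode("latin-1")] = text.decode("latin-1")
        pos += 12 + length  # 8-byte header + data + 4-byte CRC
    return out

# Usage (filename is a placeholder for one of your generated images):
# meta = png_text_chunks(open("ComfyUI_00001_.png", "rb").read())
# workflow = json.loads(meta["workflow"])
```

This is essentially what "drag the image onto the window" does for you: extract the stored JSON and rebuild the node graph from it.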
Img2Img works by loading an image like this example image, converting it to latent space with the VAE, and then sampling on it with a denoise lower than 1. The easiest way to get to grips with how ComfyUI works is to start from the shared examples. Check my ComfyUI Advanced Understanding videos on YouTube, for example part 1 and part 2. 😋 The workflow is basically an image loader combined with a whole bunch of little modules for doing various tasks like building a prompt with an image, generating a color gradient, or batch-loading images. Set your number of frames. But let me know if you need help replicating some of the concepts in my process. Here are the official checkpoints for the one tuned to generate 14-frame videos and the one for 25-frame videos. Saving/loading workflows as JSON files.

More advanced workflows: one of the best parts about ComfyUI is how easy it is to download and swap between workflows. Img2Img Examples. Hunyuan DiT Examples. A collection of my own ComfyUI workflows for working with SDXL - sepro/SDXL-ComfyUI-workflows. Then press "Queue Prompt" once and start writing your prompt. Here is an example of how to use upscale models like ESRGAN. This repo contains examples of what is achievable with ComfyUI.

Create an app from a ComfyUI workflow for free; share your app with your friends or anyone; join our benefits program and get paid; keep your workflow private; create an app from a ComfyUI workflow in seconds; focus on workflow creation without worrying about servers & GPUs. Update Documentation. ComfyUI Inspire Pack. The following images can be loaded in ComfyUI to get the full workflow. Achieves high FPS using frame interpolation (w/ RIFE). If you need an example input image for the canny, use this. This is the Efficient Loader node in ComfyUI; this is the KSampler (Efficient) node in ComfyUI. Load the workflow; in this example we're using Basic Text2Vid.
The workflows are meant as a learning exercise; they are by no means "the best" or the most optimized, but they should give you a good understanding of how ComfyUI works. In ComfyUI the saved checkpoints contain the full workflow used to generate them, so they can be loaded in the UI just like images to get the full workflow that was used to create them. Jan 8, 2024 · The optimal approach for mastering ComfyUI is exploring practical examples. As of writing this there are two image-to-video checkpoints. SD3 performs very well with the negative conditioning zeroed out, like in the following example: SD3 ControlNet. Dec 19, 2023 · Recommended Workflows. ComfyUI workflows can be run on Baseten by exporting them in an API format.

It's one that shows how to use the basic features of ComfyUI. Area Composition Examples. Here is an example of how the ESRGAN upscaler can be used for the upscaling step. Loads a SD1.5 checkpoint with the FLATTEN optical flow model. Flux Hardware Requirements; how to install and use Flux. Aug 29, 2024 · Save this image, then load it or drag it onto ComfyUI to get the workflow. Here you can download both workflow files and images.

Video Examples: Image to Video. Install these with Install Missing Custom Nodes in ComfyUI Manager. I made this using the following workflow, with two images as a starting point from the ComfyUI IPAdapter node repository. Area composition with Anything-V3 + second pass with AbyssOrangeMix2_hard; this image contains 4 different areas: night, evening, day, morning. Feb 7, 2024 · My ComfyUI workflow that was used to create all example images with my model RedOlives: https://civitai.
My actual workflow file is a little messed up at the moment, and I don't like sharing workflow files that people can't understand; my process is a bit particular to my needs, and the whole power of ComfyUI is for you to create something that fits your needs. You can load this image in ComfyUI to get the full workflow. In this example we will be using this image. Easy starting workflow.

Collection of ComfyUI workflow experiments and examples - diffustar/comfyui-workflow-collection. Here is a workflow for using it: save this image, then load it or drag it onto ComfyUI to get the workflow. ComfyUI breaks down a workflow into rearrangeable elements so you can easily make your own. Discover, share and run thousands of ComfyUI workflows on OpenArt.

For some workflow examples, and to see what ComfyUI can do, workflow examples can be found on the Examples page. Introducing ComfyUI Launcher! Area composition with Anything-V3 + second pass with AbyssOrangeMix2_hard. You can then load or drag the following image in ComfyUI to get the workflow. A comprehensive collection of ComfyUI knowledge, including ComfyUI installation and usage, ComfyUI Examples, Custom Nodes, Workflows, and ComfyUI Q&A. I then recommend enabling Extra Options -> Auto Queue in the interface. The following images can be loaded in ComfyUI to get the full workflow. Then press "Queue Prompt" once and start writing your prompt. See the following workflow for an example. This image contains 4 different areas: night, evening, day, morning. Depending on your frame rate, this will affect the length of your video in seconds.
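The relationship between frame count, frame rate, and clip length mentioned above is simple division. A small sketch (the 24-frame / 8 fps numbers are just illustrative):

```python
def video_seconds(num_frames: int, fps: float) -> float:
    """Length of the rendered clip in seconds."""
    return num_frames / fps

# 24 frames played back at 8 fps gives a 3-second clip;
# the same frames at 24 fps last only 1 second.
print(video_seconds(24, 8))   # → 3.0
print(video_seconds(24, 24))  # → 1.0
```

So lowering the playback fps stretches the same frames over more seconds, which is why animations rendered from skipped frames look slowed down.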
ComfyUI IPAdapter Plus; ComfyUI InstantID (Native); ComfyUI Essentials; ComfyUI FaceAnalysis; not to mention the documentation and video tutorials. Empowers AI art creation with high-speed GPUs & efficient workflows, no tech setup needed. Example. Here is the workflow for the Stability SDXL edit model; the checkpoint can be downloaded from here. Start with the default workflow. Example image variations. The openpose PNG image for ControlNet is included as well. Fully supports SD1.x, SD2.x and SDXL.

Feb 19, 2024 · How to use AnimateDiff. It is a simple workflow of Flux AI on ComfyUI. See the following workflow for an example, and see this next workflow for how to mix multiple images together. You can find the input image for the above workflows on the unCLIP example page. This is how the following image was generated: https://civitai.com/models/628682/flux-1-checkpoint

Jul 25, 2024 · Step 2: Modifying the ComfyUI workflow to an API-compatible format. This is what the workflow looks like in ComfyUI. Start by running the ComfyUI examples. Put upscale models in the models/upscale_models folder, then use the UpscaleModelLoader node to load them and the ImageUpscaleWithModel node to use them. Try an example Canny ControlNet workflow by dragging this image into ComfyUI. For this tutorial, the workflow file can be copied from here. Aug 29, 2024 · Hypernetwork Examples. Check out our blog on how to serve ComfyUI models behind an API endpoint if you need help converting your workflow accordingly. SDXL Examples. Download the model. It can be a little intimidating starting out with a blank canvas, but by bringing in an existing workflow you can have a starting point that comes with a set of nodes all ready to go.
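Once a workflow is exported in API format, it can be queued over ComfyUI's local HTTP API. A hedged sketch: the /prompt endpoint and the {"prompt": ..., "client_id": ...} body follow ComfyUI's built-in server, but the host, port, and the tiny workflow dict here are placeholders for your own setup.

```python
import json
import urllib.request

def build_payload(workflow: dict, client_id: str = "example") -> bytes:
    """Wrap an API-format workflow in the JSON body the /prompt endpoint expects."""
    return json.dumps({"prompt": workflow, "client_id": client_id}).encode("utf-8")

def queue_prompt(workflow: dict, host: str = "127.0.0.1:8188") -> dict:
    """POST the workflow to a locally running ComfyUI server and return its reply."""
    req = urllib.request.Request(
        f"http://{host}/prompt",
        data=build_payload(workflow),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)  # the reply includes an id for the queued prompt

# queue_prompt(my_api_workflow)  # requires a running ComfyUI instance
```

Hosted services like Baseten wrap the same idea behind their own endpoints, but the payload you send is still the exported API-format workflow JSON.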
You can load these images in ComfyUI to get the full workflow. Aug 29, 2024 · Inpaint Examples. The SDXL base checkpoint can be used like any regular checkpoint in ComfyUI. How to load a new workflow? Simple steps: a simple workflow for using the new Stable Video Diffusion model in ComfyUI for image-to-video generation. Flux.1 ComfyUI install guidance, workflow and example. Audio Examples: Stable Audio Open 1.0.

The default workflow is a simple text-to-image flow using Stable Diffusion 1.5. The only important thing is that for optimal performance the resolution should be set to 1024x1024, or other resolutions with the same number of pixels but a different aspect ratio. Txt2Img is achieved by passing an empty image to the sampler node with maximum denoise. Example GIF. [2024/07/23] 🌩️ BizyAir ChatGLM3 Text Encode node is released.

Go to ComfyUI_windows_portable\ComfyUI\ and rename extra_model_paths.yaml.example to extra_model_paths.yaml. Hence, we'll delve into the most straightforward text-to-image processes in ComfyUI. Apr 26, 2024 · Workflow (you can load it into ComfyUI to get the workflow). Aug 16, 2024 · If you have a previous installation of ComfyUI with models, or would like to use models stored in an external location, you can use this method to reference them instead of re-downloading them. Explore thousands of workflows created by the community. ComfyUI Examples; 2 Pass Txt2Img (Hires fix) Examples. You can also use them like in this workflow that uses SDXL to generate an initial image. The recommended way is to use the manager. You can then load up the following image in ComfyUI to get the workflow. Please note that in the example workflow using the example video, we are loading every other frame of a 24-frame video and then turning that into an 8 fps animation (meaning things will be slowed down compared to the original video). Workflow Explanations.
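After renaming the file, point its entries at your existing model folders. A minimal sketch of what extra_model_paths.yaml might contain; the `a111` section name follows the shipped example file's convention for referencing an Automatic1111-style install, and the paths here are placeholders for your own machine:

```yaml
# extra_model_paths.yaml - tell ComfyUI where existing models live
a111:
  base_path: D:/stable-diffusion-webui/   # placeholder: your existing install
  checkpoints: models/Stable-diffusion
  vae: models/VAE
  loras: models/Lora
  upscale_models: models/ESRGAN
```

Restart ComfyUI after editing the file so the external folders are picked up; nothing is copied or re-downloaded.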
Here is a link to download pruned versions of the supported GLIGEN model files. You can then load up the following image in ComfyUI to get the workflow: AuraFlow. ComfyUI Workflows. ControlNet and T2I-Adapter. Feb 1, 2024 · The first one on the list is the SD1.5 template workflow. Jul 6, 2024 · What is ComfyUI? ComfyUI is a node-based GUI for Stable Diffusion. Aug 29, 2024 · SDXL Examples. Combining the UI and the API in a single app makes it easy to iterate on your workflow even after deployment. Let's embark on a journey through fundamental workflow examples. ComfyUI AnyNode: any node you ask for - AnyNodeLocal (6). ComfyUI-N-Nodes - LoadVideo. Examples of ComfyUI workflows. You can load these images in ComfyUI to get the full workflow. Includes the KSampler Inspire node, which includes the Align Your Steps scheduler for improved image quality. Load the .json workflow file from the C:\Downloads\ComfyUI\workflows folder.

Aug 29, 2024 · Lora Examples. Download aura_flow_0.safetensors and put it in your ComfyUI/checkpoints directory. I found that sometimes simply uninstalling and reinstalling will do it. A renamed example file: stable_cascade_inpainting.safetensors. Stable Zero123 is a diffusion model that, given an image with an object and a simple background, can generate images of that object from different angles. Save this image, then load it or drag it onto ComfyUI to get the workflow. Each ControlNet/T2I-Adapter needs the image that is passed to it to be in a specific format, like depthmaps, canny maps and so on depending on the specific model, if you want good results. Any Node workflow examples.
Hypernetworks are patches applied on the main MODEL, so to use them put them in the models/hypernetworks directory and use the Hypernetwork Loader node like this. In this post we'll show you some example workflows you can import and get started with straight away. Here's an example with the AnythingV3 model: Outpainting. Node: Load Checkpoint with FLATTEN model. Be sure to check the trigger words before running it.

Aug 26, 2024 · What is the ComfyUI FLUX Img2Img? The ComfyUI FLUX Img2Img workflow allows you to transform existing images using textual prompts. Dec 10, 2023 · Moreover, as demonstrated in the workflows provided later in this article, ComfyUI is a superior choice for video generation compared to other AI drawing software, offering higher efficiency. Aug 29, 2024 · 3D Examples - ComfyUI Workflow Stable Zero123. Load the .json file. ComfyUI workflows are a way to easily start generating images within ComfyUI. Use the sdxl branch of this repo to load SDXL models; the loaded model only works with the Flatten KSampler, and a standard ComfyUI checkpoint loader is required for other KSamplers. Here is how you use it in ComfyUI (you can drag this into ComfyUI to get the workflow): noise_augmentation controls how closely the model will try to follow the image concept.
(You can load it into ComfyUI to get the workflow.) In ComfyUI the saved checkpoints contain the full workflow used to generate them, so they can be loaded in the UI just like images to get the full workflow that was used to create them. See the full list on GitHub. See this next workflow for how to mix multiple images together. You can also use similar workflows for outpainting. 🚀 For some workflow examples, and to see what ComfyUI can do, workflow examples can be found on the Examples page. Here is a basic example of how to use it; as a reminder, you can save these image files and drag or load them into ComfyUI to get the workflow. [2024/07/25] 🌩️ Users can load BizyAir workflow examples directly by clicking the "☁️BizyAir Workflow Examples" button.

Lora Examples. Img2Img works by loading an image like this example image, converting it to latent space with the VAE and then sampling on it with a denoise lower than 1. Run any ComfyUI workflow with ZERO setup (free & open source). The following is an older example for: aura_flow_0.safetensors. This image has had part of it erased to alpha with GIMP; the alpha channel is what we will be using as a mask for the inpainting. The only way to keep the code open and free is by sponsoring its development.

Mixing ControlNets. ControlNet and T2I-Adapter - ComfyUI workflow examples. Note that in these examples the raw image is passed directly to the ControlNet/T2I-Adapter. Here is an example of how to use the Canny ControlNet; here is an example of how to use the Inpaint ControlNet, and the example input image can be found here. 🌟 In this tutorial we'll dive into the essentials of ComfyUI FLUX, showcasing how this powerful model can enhance your creative process and help you push the boundaries of AI-generated art. You can construct an image generation workflow by chaining different blocks (called nodes) together.
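The denoise value above controls how much of the sampling schedule actually runs on the loaded latent. A hedged sketch of the idea (the function name and the rounding choice are mine, not ComfyUI's internals):

```python
def first_step(total_steps: int, denoise: float) -> int:
    """Step index where img2img sampling begins.

    denoise=1.0 starts at step 0 (equivalent to txt2img on noise);
    lower denoise skips early steps, preserving more of the input image.
    """
    if not 0.0 < denoise <= 1.0:
        raise ValueError("denoise must be in (0, 1]")
    return total_steps - round(total_steps * denoise)

# With 20 steps: denoise 1.0 starts at step 0, denoise 0.5 starts at step 10.
```

This is why txt2img and img2img can share one node: txt2img is just img2img on an empty latent with maximum denoise.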
Introduction to Flux.1; overview of different versions of Flux.1. It shows the workflow stored in the EXIF data (View→Panels→Information). All the images on this page contain metadata, which means they can be loaded into ComfyUI with the Load button (or dragged onto the window) to get the full workflow that was used to create the image. FLUX is a cutting-edge model developed by Black Forest Labs. You can find the input image for the above workflows on the unCLIP example page. SD3 ControlNets by InstantX are also supported. Also has favorite folders to make moving and sorting images from ./output easier. These are examples demonstrating the ConditioningSetArea node. Area composition; inpainting with both regular and inpainting models.

You can find the Flux Schnell diffusion model weights here; this file should go in your ComfyUI/models/unet/ folder. Download it and place it in your input folder. Loads any given SD1.5 checkpoint. Jan 6, 2024 · RunComfy: premier cloud-based ComfyUI for Stable Diffusion. The initial set includes three templates: Simple Template. Aug 29, 2024 · Image Edit Model Examples.

For some workflow examples, and to see what ComfyUI can do, you can check out the ComfyUI Examples. Installing ComfyUI. Features: a nodes/graph/flowchart interface to experiment and create complex Stable Diffusion workflows without needing to code anything. Select a feature below to learn more about it. SD1.5 Template Workflows for ComfyUI is a multi-purpose workflow that comes with three templates.
This should update, and may ask you to click restart. You can also easily upload & share your own ComfyUI workflows, so that others can build on top of them! :) Why I built this: I just started learning ComfyUI, and really like how it saves the workflow info within each image it generates. ComfyUI Workflows: Your Ultimate Guide to Fluid Image Generation. Load the .json workflow file. It also seems that the order you install things in can make a difference.