ComfyUI image upscaling: notes from Reddit

Notes, questions, and workflow tips on image upscaling, collected from the unofficial ComfyUI subreddit.

I want a workflow that lets me upscale my images while I am away from my system. For example: load an image, select a model (4x-UltraSharp, for example), and select the final resolution (from 1024 to 1500, for example). Basically I want to select multiple images from my drive so that the upscaler processes all of them with the same sampler settings and whatnot. Ideally I'd love to leverage the prompt loaded from the image metadata (optional), but more crucially I'm seeking guidance on how to efficiently batch-load images from a folder for subsequent upscaling. Once I've amassed a collection of noteworthy images, my plan is to compile them into a folder and execute a 2x upscale in a batch. Is this possible? Thanks! There are also "face detailer" workflows for faces specifically.

I was running some tests last night with SD1.5 and was able to get some decent images by running my prompt through a sampler to get a decent form, then refining while doing an iterative upscale for 4-6 iterations with low noise and a bilinear model, negating the need for an advanced sampler to refine the image. Even so, images come out too blurry and lacking in detail, much like upscaling any regular image with traditional methods. There is only so much you can do with an SD1.5 model, since their training was done at a low resolution; going beyond 2x with SD1.5 models seems pointless.

Edit: Also, I wouldn't recommend doing a 4x upscale using a 4x upscaler (such as 4x Siax). Maybe it doesn't seem intuitive, but it's better to use a 4x upscaler for a 2x upscale and an 8x upscaler for a 4x upscale. And I probably wouldn't upscale by 4x at all if fidelity is important.

Along with the normal image preview, other output methods are: latent upscaled 2x; hires fix 2x (two-pass image); and the image upscaled 4x using the nearest-exact method and the UltraSharp 4x upscaler.

In SD I noticed the aspect ratio of the latent image influences the result: if you want a tall, standing person but use the aspect ratio of a standard desktop (1920x1080, or 1.7777), the person often comes out kneeling. So I tested with aspect ratios below 1 (more vertical) and it definitely changed the output.

For more consistent faces, I sample an image using the IPAdapter node (so that the sampled image has a similar face), then latent upscale the image and use the ReActor node to map the same face used in the IPAdapter onto the upscaled image. This breaks the composition a little bit, because the mapped face is usually too clean or has slightly different lighting. Once I have a nice result I do the composition (Image 2), and in Image 3 I compare pre-compose with post-compose results: the image on the left (directly after generation) is blurry and lost some tiny details; the image on the right (after the mask-compose node) retains the sharpness, but you can clearly see the bad composition line, with a sharp transition. After borrowing many ideas and learning ComfyUI, I'm still working on the whole thing, but I've got the idea down. The workflow isn't attached to this image; you'll have to download it from the G-drive link.

My problem is that my generations produce a weird white 1-pixel line at the right/bottom edge of the image. I have a custom image resizer that ensures the input image matches the output dimensions.
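A minimal sketch of that kind of resizer in Python with Pillow (file names are placeholders): scale the image so it covers the target box, then center-crop to the exact output dimensions, which also avoids the stray 1-pixel remainders that fractional scale factors can leave behind.

    import math
    from PIL import Image

    def resize_to_exact(img: Image.Image, width: int, height: int) -> Image.Image:
        # Scale so the image fully covers the target box (ceil avoids coming
        # up a pixel short, the kind of off-by-one that leaves stray lines).
        scale = max(width / img.width, height / img.height)
        resized = img.resize(
            (max(width, math.ceil(img.width * scale)),
             max(height, math.ceil(img.height * scale))),
            Image.LANCZOS,
        )
        # Center-crop down to the exact requested dimensions.
        left = (resized.width - width) // 2
        top = (resized.height - height) // 2
        return resized.crop((left, top, left + width, top + height))

    resize_to_exact(Image.open("input.png"), 1024, 1024).save("resized.png")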
ComfyUI Manager issue: I am just now setting up ComfyUI and I already have issues (LOL) with opening the ComfyUI Manager installed via CivitAI; basically it doesn't open after downloading the latest available version. For reference, the normal route for installing upscale models is: start ComfyUI, click on Manager in the ComfyUI window, click on Install Models in the ComfyUI Manager menu, then search for "upscale" and click Install on the models you want. (ComfyUI itself: "The most powerful and modular diffusion model GUI, API and backend with a graph/nodes interface" - comfyanonymous/ComfyUI.)

A few examples of my ComfyUI workflow to make very detailed 2K images of real people (cosplayers in my case) using LoRAs, with fast renders (10 minutes on a laptop RTX 3060): I've so far achieved this with Ultimate SD upscale using the 4x-Ultramix_restore upscale model. (I don't think I've used A1111 in a while.)

The Upscaler function of my AP Workflow 8.0 for ComfyUI, which is free, uses the CCSR node, and it can upscale 8x and 10x without even needing any noise injection (assuming you don't want "creative upscaling"). Look at this workflow.

That's exactly what I ended up planning: I'm a newbie to ComfyUI, so I set up Searge's workflow, then copied the official ComfyUI i2v workflow into it, and I pass whatever image I like into the node; the final node is where ComfyUI takes those images and turns them into a video.

On Ultimate SD Upscale and denoise: LOL, yeah, I push the denoising quite often too, just saying "I'll fix it in Photoshop". But since Ultimate SD Upscale only renders a section of the image at a time, the prompt and the image don't necessarily go together well at higher denoise levels; depending on the noise and strength, it ends up treating each square as an individual image, so instead of one girl in an image you get ten tiny girls stitched into one giant upscaled image. That's where CN tile comes in, allowing you to push your img2img denoise levels way up without losing the input image's composition. Still, after 2 days of testing I found Ultimate SD Upscale to be detrimental here: I either can't get rid of visible seams, or the image is too constrained by low denoise and so lacks detail. Instead, I use a Tiled KSampler at a low denoise. Does anyone have suggestions - would it be better to do an iterative upscale? You could try pushing your denoise at the start of an iterative upscale to, say, 0.4, but use a ControlNet relevant to your image so you don't lose too much of the original, combining that with the iterative upscaler and concatenating a secondary positive prompt telling the model to add or improve detail.

On sizing: the upscale model of choice can often only output a 4x image when you want 2x, and ComfyUI's upscale-with-model node doesn't have an output size option like other upscale nodes, so you have to manually downscale the image to the appropriate size. Just use an "upscale by" node with the bicubic method and a fractional value (0.5 if you want to divide by 2) after upscaling with the model. For example, if you start with a 512x512 empty latent image, apply a 4x model, and then apply "upscale by" 0.5, you get a 1024x1024 final image (512 * 4 * 0.5 = 1024). You could also add a latent upscale in the middle of the process and an image downscale in pixel space at the end. In general, you either upscale in pixel space first and then do a low-denoise second pass, or you upscale in latent space and do a high-denoise second pass.
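The same arithmetic as a small Pillow sketch (here "model_output" stands for the image coming out of the upscale-model node, an assumption for illustration):

    from PIL import Image

    def rescale_model_output(model_output: Image.Image,
                             model_scale: int = 4,
                             target_scale: float = 2.0) -> Image.Image:
        # A 4x model always multiplies dimensions by 4, so to land on a 2x
        # result, rescale its output by target_scale / model_scale = 0.5:
        # 512 px * 4 * 0.5 = 1024 px.
        factor = target_scale / model_scale
        return model_output.resize(
            (round(model_output.width * factor),
             round(model_output.height * factor)),
            Image.LANCZOS,
        )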
This is the fastest way to test images against an image I have a higher-res sample of. Both approaches are of similar speed. Overall: image upscale is less detailed, but more faithful to the image you upscale; latent upscale looks much more detailed, but gets rid of the detail of the original image - latent quality is better, but the final image deviates significantly from the initial generation. A homogeneous image doesn't tell the whole story, though; do the same comparison with images that are much more detailed, with characters and patterns. Thanks for all your comments.

The issue is likely caused by a quirk in the way MultiAreaConditioning works: its sizes are defined in pixels. This means that your prompt (a.k.a. positive image conditioning) is no longer a simple text description of what should be contained in the total area of the image; it is now a specific description of what belongs in the area defined by coordinates, starting from x:0px y:320px to x:768px y:... So when you render at a higher resolution, scale the node's X values up accordingly if you want to benefit from the higher-res processing.

Learn how to upscale images using ComfyUI and the 4x-UltraSharp model for crystal-clear enhancements: a step-by-step guide to mastering image quality.

Another trick: stop the generation midway or later - if you have 40 steps, instruct the sampler to stop at 29 - then upscale the unfinished picture (either as a latent or as an image; I found it's better to upscale it as an image and re-encode it as a new latent), feed it to a new sampler, and instruct it to continue the generation.

On inpainting: the area you inpaint gets rendered at the same resolution as your starting image. If your starting image is 1024x1024, the image gets resized so that the inpainted area becomes the same size as the starting image, which is 1024x1024. This makes the image larger, but also makes the inpainting more detailed, and it's useful for redrawing parts that get messed up when you use Ultimate SD Upscale with a high denoise. You can likewise downscale a high-resolution image to do a whole-image inpaint, and then upscale only the inpainted part back to the original high resolution. There is also a mask-by-text workflow that identifies specific things in the image by prompt and inpaints them. In my case the masked region is small (206x206), so I first upscale it in Photopea to 512x512, just to give me a base image that matches SD1.5's native resolution.
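Back-of-envelope arithmetic for that inpaint behavior (a sketch; the 1024 render size and the region sizes are just the numbers from the comments above):

    def effective_inpaint_scale(region_px: int, render_px: int = 1024) -> float:
        # The masked region is re-rendered at the starting image's resolution,
        # so a small region effectively gets render_px / region_px more detail.
        return render_px / region_px

    print(effective_inpaint_scale(256))       # 4.0: a 256px crop rendered with 4x detail
    print(effective_inpaint_scale(206, 512))  # ~2.49: the 206x206 -> 512x512 case above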
"LoadImage / Load Image" "Upscale Model Loader / Load Upscale Model" "ImageUpscaleWithModel / Upscale Image (using Model)" "Image Save / Image Save" or "SaveImage / Save Image" That will upscale with no latent invention/injection of creative bits, but still intelligently adds pixels per ESRGAN upscaler models. That is using an actual SD model to do the upscaling that, afaik, doesn't yet exist in ComfyUI. I liked the ability in MJ, to choose an image from the batch and upscale just that image. Custom nodes are Impact pack for wildcards, rgthree because it's the shit, and Ult SD upscale. Ugh. The quality and dimensions of the output image are directly influenced by the original image's properties. This works best with Stable Cascade images, might still work with SDXL or SD1. Upscaled by ultrasharp 4x upscaler. For upscaling with img2img, you first upscale/crop the source image (optionally using a dedicated scaling model like ultrasharp or something) convert it to latent and then run the ksampler on it. I switched to comfyui not too long ago, but am falling more and more in love. Switch the toggle to upscale, make sure to enter the right CFG, make sure randomize is off, and press queue. A step-by-step guide to mastering image quality. So, I've used the simple tiles custom nodes to break it up and process each tile one at a time, there is a batch-list switch you can toggle to do it all as a batch if you have the V-ram. But more useful is that you can now right-click an image in the `Preview for Image Chooser` and select `Progress this image` - which is the same as selecting it's number and pressing go. 22, the latest one available). The latent upscale in ComfyUI is crude as hell, basically just a "stretch this image" type of upscale. If your starting image is 1024x1024, the image gets resized so that the inpainted area becomes the same size as the starting image which is 1024x1024. upscale_method: COMBO[STRING] Specifies the method used for upscaling I tried installing the ComfyUI-Image-Selector plugin, which claims that I can simple mute or disconnect the Save Image node, etc. With it, I either can't get rid of visible seams, or the image is too constrained by low denoise and so lacks detail. As my test bed, i'll be downloading the thumbnail from say my facebook profile picture, which is fairly small. Love it! Thanks ComfyUI. 5 if you want to divide by 2) after upscaling by a model. yet when I try to upscale more than 500 - 1000 images in a single batch from 1024x576 --> 1920x1080, it blows up Welcome to the unofficial ComfyUI subreddit. 5, don't need that many steps From there you can use 4x upscale model and run sample again at low denoise if you want higher resolution. Advertisement Coins. Ultimate SD Upscale 2x and Ultimate Upscale 3x. Thanks. A followup composition using IPAdapter with a simple color mask and three input images (2 characters and a background) Note how the girl in blue has her arm around the warrior girl, A bit of detail that the AI put in. This parameter is central to the node's operation, serving as the primary data upon which resizing transformations are applied. Upscale to 2x and 4x in multi-steps, both with and without sampler (all images are saved) Multiple LORAs can be added and easily turned on/off (currently configured for up to three LORAs, but it can easily add more) Details and bad-hands LORAs loaded I use it with dreamshaperXL mostly and works like a charm. 
And I'm sometimes too busy scrutinizing the city, landscape, object, vehicle, or creature in which I'm trying to encourage insane detail to see what hallucinations it has manifested in the sky.

At the moment I generate my image with a detail LoRA at 512 or 768 to avoid weird generations, then latent upscale it by 2 with nearest-exact and run it at 0.5 denoise (needed for latent, though I don't know why) through a second KSampler. After that I send it through a Face Detailer and an Ultimate SD Upscale.

The best method, as said below, is to upscale the image with a model (then downscale to the desired size if necessary, because most upscalers do 4x and that's often too big to process), then send it back to VAE encode and sample it again. A recipe along those lines: generate the initial image at 512x768; upscale x1.5 ~ x2 (no need for a model, it can be a cheap latent upscale); sample again at denoise 0.5-0.6 (you don't need that many steps), with either CNet strength 0.5 (euler, sgm_uniform) or CNet strength 0.9, end_percent 0.9; from there you can use a 4x upscale model and run the sampler again at low denoise if you want higher resolution. This works best with Stable Cascade images; it might still work with SDXL or SD1.5, but appears to work poorly with external (e.g. natural or MJ) images. The key observation is that by using the efficientnet encoder from Hugging Face, you can immediately obtain what your image should look like after stage C. Working on larger latents, the challenge is to keep the model still generating an image that is relatively coherent with the original low-resolution image.

I was just using Sytan's workflow with a few changes to some of the settings, and I replaced the last part of it with a two-step upscale using the refiner model via Ultimate SD Upscale, like you mentioned. This is done after the refined image is upscaled and encoded into a latent.

A follow-up composition using IPAdapter with a simple color mask and three input images (two characters and a background): note how the girl in blue has her arm around the warrior girl, a bit of detail that the AI put in.

Feature lists from shared upscale workflows (I use one of these with dreamshaperXL mostly and it works like a charm):
- 2x upscale using Ultimate SD Upscale and Tile ControlNet, or 2x using a lineart ControlNet
- Ultimate SD Upscale 2x and Ultimate Upscale 3x
- (Optional) upscale to 3x by default, using ControlNet to stick to the base image, with speed provided by Automatic CFG
- Upscale to 2x and 4x in multi-steps, both with and without a sampler (all images are saved)
- Multiple LoRAs that can easily be turned on/off (currently configured for up to three, but more can easily be added); details and bad-hands LoRAs loaded
- Uses Face Detailer to enhance faces if required
- Hires fix with an add-detail LoRA
- Enhance the image by adding HDR effects
- Save images with metadata

Personally, in my opinion your setup is heavily overloaded with stages that are incomprehensible to me. I have a much lighter assembly, without detailers, that gives a better result: compare your resulting image on comfyworkflows.com and my result is about the same.

You have two different ways to perform a "hires fix" natively in ComfyUI: a latent upscale or an upscaling model; workflows for both can be downloaded over on the Prompting Pixels website. I gave up on latent upscale, though. The latent upscale in ComfyUI is crude as hell, basically just a "stretch this image" type of upscale; the issue I think people run into is that they assume it's the same as the Latent Upscale in Auto1111, and this is not the case. Give an upscaler model an image of a person with super-smooth skin and it will output a higher-resolution picture of smooth skin; but give that image to a KSampler (using a low denoise value) and it can now generate new details, like skin texture. And you end up with images after KSampling anyway, so you can use the image upscale nodes on them.
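To see why the plain latent upscale behaves like a stretch, here is roughly what it amounts to under the hood: simple interpolation on the latent tensor (a PyTorch sketch; SD latents are [batch, 4, height/8, width/8], and the shapes here stand in for a 512x768 image):

    import torch
    import torch.nn.functional as F

    # Stand-in for the latent of a 512x768 image: [batch, channels, H/8, W/8].
    latent = torch.randn(1, 4, 96, 64)

    # This is all a basic latent upscale amounts to: interpolate the tensor.
    # No new detail is invented, which is why a second KSampler pass at
    # ~0.5 denoise is needed afterwards to re-generate real detail.
    stretched = F.interpolate(latent, scale_factor=1.5, mode="nearest-exact")
    print(stretched.shape)  # torch.Size([1, 4, 144, 96]) -> a 768x1152 image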
Like many XL users out there, I'm also new to ComfyUI and very much just a beginner in this regard; again, I would really appreciate any of your Comfy 101 materials, resources, and creators, as well as your advice. One basic tip: grab a workflow image from your file folder and drag it onto the ComfyUI window (or drag and drop a full-size shared image onto the ComfyUI canvas), and it will replicate that image's workflow and seed.

Hello, I did some testing of KSampler schedulers used during an upscale pass in ComfyUI, with no finishing (i.e. inpainting, hires fix, upscale, face detailer, etc.) and no ControlNet. I wanted to know what difference they make, and they do! Credit to Sytan's SDXL workflow, which I reverse engineered, mostly because I'm new to ComfyUI and wanted to figure it all out.

I don't get where the problem is. I have checked the ComfyUI examples and used one of their hires fixes, but when I upscale the latent image I get a glitchy image (only the non-masked part of the original img2img image) after the second pass; if I upscale the image out of latent space and then convert it back into a latent for the second pass, the result is OK.

In this ComfyUI tutorial we look at my favorite upscaler, the Ultimate SD Upscaler (see also "ComfyUI: Ultimate Upscaler - Upscale any image from Stable Diffusion, MidJourney, or photo!" on YouTube). Here is a workflow that I currently use with Ultimate SD Upscale: it uses CN tile together with Ultimate SD Upscale. And here are the details of another workflow I created: it's an img2img method where I use the BLIP Model Loader from WAS to set the positive caption.

The "Upscale and Add Details" part splits the generated image, upscales each part individually, adds details using a new sampling step, and after that stitches the parts together.
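A toy illustration of that split-and-stitch idea (plain Pillow, with an identity function standing in for the per-tile sampling): tiles are processed independently, and the overlap is what a real implementation blends across to hide seams. With no blending and a high denoise, each tile becomes its own little image, which is exactly where the "ten tiny girls" artifact comes from.

    from PIL import Image

    def process_in_tiles(img, tile=512, overlap=64, fn=lambda t: t):
        # fn is a placeholder for the real per-tile work (upscale + resample).
        out = Image.new("RGB", img.size)
        step = tile - overlap
        for top in range(0, img.height, step):
            for left in range(0, img.width, step):
                box = (left, top,
                       min(left + tile, img.width),
                       min(top + tile, img.height))
                # Naive paste with no blending across the overlap: this is
                # precisely where visible seams appear in the stitched result.
                out.paste(fn(img.crop(box)), (left, top))
        return out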

