I was trying to upscale this image to 1024x1024 in img2img and was really struggling. I couldn’t preserve the patterns in her gown, and the gown wouldn’t stay in her right hand: it would fall against her leg and come out longer than in the original. Trying to force-correct these things often ended up introducing distortions. Any ideas?

Example of a failed result

Here’s the generation for the 512x512 above:

Prompt:

extremely detailed CG, high resolution, beautiful detailed eyes, Corinna, (french braid:1.2), adorable, standing, looking back over her shoulder, cheerful, smile, thin legs, sheer nightgown

Negative prompt:

3d, 3d render, painting, digital painting, watermark, sepia, black & white, NG_DeepNegative_V1_75T, EasyNegative, verybadimagenegative_v1.3, bad_pictures, (worst quality:2), (low quality:2), (normal quality:2), lowres, normal quality, ((monochrome)), ((grayscale)), (ugly:1.33), bad face, bad fingers, bad anatomy, spot, (poorly eyes:1.2), pubic hair, pubes, hairy, missing fingers, long hair, ((butt)), ((thick legs))

Steps: 20, Sampler: Euler a, CFG scale: 7, Seed: 2809846013, Size: 512x512, Model hash: 0c874641a9, Model: myneFactoryAscendance_v20, Version: v1.4.0

Used embeddings: easynegative [119b], verybadimagenegative_v1.3 [89de]
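If you'd rather script the generation than click through the UI, the settings above can be sent to a locally running AUTOMATIC1111 web UI started with the `--api` flag. This is a sketch, assuming the standard `/sdapi/v1/txt2img` endpoint on the default port 7860; only the fields from the generation info above are filled in:

```python
import json
import urllib.request

# Payload mirroring the generation parameters from the post above.
payload = {
    "prompt": ("extremely detailed CG, high resolution, beautiful detailed eyes, "
               "Corinna, (french braid:1.2), adorable, standing, "
               "looking back over her shoulder, cheerful, smile, thin legs, "
               "sheer nightgown"),
    "negative_prompt": "3d, 3d render, painting, ...",  # paste the full negative prompt from the post
    "steps": 20,
    "sampler_name": "Euler a",
    "cfg_scale": 7,
    "seed": 2809846013,
    "width": 512,
    "height": 512,
}

def submit(payload, url="http://127.0.0.1:7860/sdapi/v1/txt2img"):
    """POST the payload to a locally running web UI and return its JSON reply.

    The reply's "images" field holds base64-encoded PNGs.
    """
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

The model, embeddings, and version can't be set through this payload; they have to match on the server side (select the checkpoint in the UI or via `/sdapi/v1/options`).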

  • awoo@burggit.moe · 1 year ago

    An alternative to img2img is to simply use highres fix scaling when you generate the image. IMO, if you have the original setup (prompt, seed, model, embeddings, LoRAs, etc.) rather than only the output image, you should always give highres fix a try.
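In API terms, the reply's suggestion amounts to re-running the original txt2img generation with highres fix enabled rather than doing a separate img2img pass. A hedged sketch, again assuming the AUTOMATIC1111 `/sdapi/v1/txt2img` API; the upscaler and denoising strength here are illustrative choices, not the poster's:

```python
# Same generation as before, but with highres fix turned on so the
# upscale happens inside the original sampling run.
payload = {
    "prompt": "extremely detailed CG, ...",      # full prompt from the post
    "negative_prompt": "3d, 3d render, ...",     # full negative prompt from the post
    "steps": 20,
    "sampler_name": "Euler a",
    "cfg_scale": 7,
    "seed": 2809846013,          # reuse the original seed to keep the composition
    "width": 512,
    "height": 512,
    "enable_hr": True,           # enable highres fix
    "hr_scale": 2,               # 512x512 -> 1024x1024
    "hr_upscaler": "Latent",     # assumption: pick whichever upscaler you prefer
    "denoising_strength": 0.45,  # lower values preserve the original composition better
}
```

Because the upscale pass reuses the same prompt, seed, and latents, details like the gown's position tend to survive better than when feeding the finished 512x512 image back through img2img.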