If you've been generating images with Stable Diffusion without a negative prompt, you've been generating with one hand tied behind your back. Negative prompts are not an optional add-on — they're a fundamental control mechanism that can mean the difference between an unusable output and a gallery-worthy image. This guide covers everything from the underlying math of how negative prompts work to complete, copy-paste negative prompt libraries for every use case.

How Negative Prompts Work: The CFG Scale Explained

To understand negative prompts, you first need to understand Classifier-Free Guidance (CFG) — the algorithm that makes them work.

During each denoising step in Stable Diffusion, the model makes two predictions: one conditioned on your positive prompt (what you want) and one conditioned on your negative prompt (what you don't want). The final denoising direction is calculated as:

final_direction = unconditional_prediction + CFG_scale × (conditional_prediction − unconditional_prediction)

When you add a negative prompt, the "unconditional prediction" is replaced by the negative-prompt-conditioned prediction. The CFG scale multiplier then determines how strongly the model moves away from the negative concepts while moving toward the positive concepts.
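The guidance step above is simple enough to sketch in a few lines of NumPy. This is an illustrative toy with random vectors standing in for the model's noise predictions, not actual pipeline code:

```python
import numpy as np

def cfg_combine(uncond_pred, cond_pred, cfg_scale):
    """Classifier-free guidance: move from the unconditional (or
    negative-prompt-conditioned) prediction toward the positive-prompt
    prediction, scaled by cfg_scale."""
    return uncond_pred + cfg_scale * (cond_pred - uncond_pred)

# Toy stand-ins for the model's two noise predictions at one denoising step.
rng = np.random.default_rng(0)
neg_pred = rng.standard_normal(4)  # conditioned on the negative prompt
pos_pred = rng.standard_normal(4)  # conditioned on the positive prompt

# cfg_scale = 1 reproduces the positive prediction exactly; cfg_scale = 0
# ignores it entirely; larger values push further away from the negative.
assert np.allclose(cfg_combine(neg_pred, pos_pred, 1.0), pos_pred)
assert np.allclose(cfg_combine(neg_pred, pos_pred, 0.0), neg_pred)
print(cfg_combine(neg_pred, pos_pred, 7.5))
```

Note how the negative prediction only enters as the point the result is pushed *away from* — which is why the negative prompt's strength is tied to the CFG scale rather than having its own independent knob.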

In practical terms, the negative prompt's influence scales directly with the CFG value: at a low CFG (1–3) the model only loosely follows either prompt, while higher values push the result harder toward the positive prompt and away from the negative one.

Key insight: Your negative prompt is working in proportion to your CFG scale. If your negative prompt isn't having enough effect, increasing the CFG scale (up to about 9) will amplify it. But increasing CFG too high degrades overall image quality — there's a sweet spot around 7–8 for most workflows.

Universal Negative Prompts That Work for Most Images

These negative prompts address the most common failure modes across all SD models and use cases. Start with these and add use-case-specific terms on top.

Core Universal Negative Prompt

(worst quality:1.4), (low quality:1.4), (normal quality:1.2), lowres, bad anatomy, bad hands, missing fingers, extra fingers, too many fingers, mutated hands, poorly drawn hands, poorly drawn face, mutation, deformed, ugly, blurry, bad proportions, extra limbs, cloned face, disfigured, malformed limbs, missing arms, missing legs, extra arms, extra legs, watermark, signature, text, username, artist name

Expanded Universal Negative

For more demanding outputs where you want tighter quality control:

(worst quality:1.4), (low quality:1.4), (normal quality:1.3), lowres, bad anatomy, bad hands, ((missing fingers)), ((extra digit)), ((fewer digits)), missing limb, mutated hands, poorly drawn hands, poorly drawn face, mutation, deformed, ugly, disgusting, blurry, out of focus, bad proportions, bad body, extra limbs, cloned face, disfigured, malformed limbs, missing arms, missing legs, extra arms, extra legs, fused fingers, too many fingers, long neck, cross-eyed, floating limbs, disconnected limbs, watermark, signature, text, logo, cropped, duplicate, error, jpeg artifacts, oversaturated, grainy

Negative Prompts by Use Case

Portrait Photography / Realistic Faces

Face generation is where SD models fail most visibly. Add these to the universal base:

asymmetrical eyes, uneven eyes, crossed eyes, lazy eye, unfocused eyes, poorly lit face, unnatural skin texture, plastic-looking skin, oversaturated skin, uncanny valley, dead eyes, missing pupils, extra pupils, floating head, disconnected head, double face, multiple faces, blurry face, deformed face, disfigured face

Landscapes and Environments

Landscapes fail less on anatomy but more on structural coherence:

overexposed sky, blown out highlights, flat lighting, unrealistic colors, cartoonish, anime style, illustrated, painted, fake looking, sky hole, horizon errors, floating objects, impossible architecture, deformed trees, unrealistic foliage, color banding, lens flare artifacts

Anime / Illustration

(worst quality:1.4), (low quality:1.4), (normal quality:1.3), lowres, bad anatomy, bad hands, missing fingers, extra fingers, floating limbs, disconnected limbs, poorly drawn face, asymmetrical eyes, blurry, out of focus, signature, watermark, text, jpeg artifacts, low resolution, username, bad legs, bad feet, missing legs

Concept Art / Fantasy

photorealistic, photo, photograph, 3D render, blender render, CGI, realistic, hyperrealistic, watermark, signature, text, cropped, low quality, bad anatomy, deformed, ugly, amateur, sketch lines, incomplete, missing details

Product Photography

soft focus, blurry, out of focus, poorly lit, dark shadows obscuring product, color inaccurate, distorted perspective, watermark, text overlaid, cluttered background, distracting elements, uneven lighting, harsh flash, blown highlights, low resolution, pixelated

Model-Specific Negative Prompts

Different SD models have different failure modes based on their training data. Using the right model-specific negative prompt makes a significant difference.

SD 1.5 (Base Model and Fine-Tunes)

SD 1.5 is most prone to anatomical errors, particularly hands and fingers. It also tends toward muddied color palettes without guidance:

(worst quality:1.4), (low quality:1.4), (normal quality:1.2), bad anatomy, bad hands, (missing fingers:1.3), (extra fingers:1.3), poorly drawn hands, poorly drawn face, mutation, deformed, ugly, blurry, bad proportions, extra limbs, cloned face, disfigured, malformed, watermark, signature, text, (oversaturated:1.2), muddy colors

SDXL

SDXL has better anatomy than SD 1.5 but can produce overly polished, plastic-looking skin and tends toward over-sharpening:

worst quality, low quality, bad anatomy, deformed hands, ugly, watermark, signature, text, oversaturated, plastic skin, artificial lighting, overly smooth skin, uncanny valley, dull colors, flat lighting, overexposed

Note: SDXL is less sensitive to negative prompts than SD 1.5. The (keyword:weight) syntax has less impact in SDXL — you often get better results using plain text without weights for SDXL negative prompts.

SD 3.5 (Stable Diffusion 3.5)

SD 3.5 uses a different architecture (multimodal diffusion transformer) and is less responsive to traditional SD negative prompt syntax. Simpler, descriptive negative prompts work better:

low quality, blurry, bad anatomy, watermark, text, deformed, ugly, distorted

Realistic Vision / CyberRealistic (SD 1.5 Checkpoints)

Photorealistic SD 1.5 checkpoints have a tendency toward over-smoothed skin and slightly over-exposed highlights:

(worst quality:1.4), (low quality:1.4), bad anatomy, bad hands, missing fingers, extra fingers, (plastic skin:1.2), (oversaturation:1.2), overexposed, blown highlights, poorly drawn face, deformed, mutation, watermark, signature, text, (smooth face:1.2), unnatural skin, fake looking

Weighted Negative Prompts: The (keyword:weight) Syntax

The parentheses-colon syntax allows you to increase or decrease the emphasis on specific negative terms. This is most effective in SD 1.5-based models and has diminishing returns in SDXL and later architectures.

Weight Values and Their Effects

| Syntax | Weight multiplier | Effect | When to use |
|---|---|---|---|
| keyword | 1.0× | Standard avoidance | General quality terms |
| (keyword:1.1) | 1.1× | Slightly stronger | Stubborn but non-critical issues |
| (keyword:1.2) | 1.2× | Noticeably stronger | Common failure modes |
| (keyword:1.3) | 1.3× | Strong avoidance | Persistent errors that keep appearing |
| (keyword:1.4) | 1.4× | Very strong | Quality tier terms (worst/low quality) |
| (keyword:1.6+) | 1.6×+ | Overpowering — causes artifacts | Avoid for most terms |

Double Parentheses Shortcut

In AUTOMATIC1111 and ComfyUI, each layer of parentheses multiplies the weight by 1.1: (keyword) gives 1.1×, ((keyword)) gives 1.21×, and (((keyword))) gives about 1.33×.

Explicit weights (keyword:1.4) are more predictable than stacked parentheses, but both approaches work.
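To make the stacking behavior concrete, here is a small helper that computes the effective weight of a single term written either with nested parentheses or with an explicit (keyword:weight). It's a simplified sketch of the A1111 convention, not the actual prompt parser:

```python
import re

def effective_weight(term: str) -> float:
    """Effective emphasis weight for one prompt term (simplified).

    Handles two common A1111-style forms:
      - explicit weight:     "(bad hands:1.4)"  -> 1.4
      - nested parentheses:  "((bad hands))"    -> 1.1 ** 2 == 1.21
    """
    explicit = re.fullmatch(r"\((.+):([\d.]+)\)", term)
    if explicit:
        return float(explicit.group(2))
    depth = 0
    while term.startswith("(") and term.endswith(")"):
        depth += 1
        term = term[1:-1]
    return 1.1 ** depth

print(effective_weight("bad hands"))        # 1.0
print(effective_weight("((bad hands))"))    # ~1.21
print(effective_weight("(bad hands:1.4)"))  # 1.4
```

This is also a quick way to sanity-check a prompt you copied from elsewhere: three levels of nesting is already ~1.33×, which is into the territory where artifacts start appearing.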

Strategic Weighting Examples

/* For a portrait where hands keep appearing malformed: */
(bad hands:1.4), (missing fingers:1.3), (extra fingers:1.3), bad anatomy

/* For a landscape where the sky keeps blowing out: */
(overexposed sky:1.3), (blown highlights:1.2), bad lighting

/* For anime where faces keep looking wrong: */
(poorly drawn face:1.4), (asymmetrical eyes:1.3), bad anatomy

EasyNegative, bad-artist-anime, and Other Embedding Negatives

Textual inversion embeddings are special files that encode complex concepts into a single token. For negative prompts, embeddings can replace long lists of quality-reduction descriptors with a single word.

EasyNegative

The most widely used negative prompt embedding. It was trained to compress hundreds of quality-reduction concepts into the token EasyNegative. Using it in your negative prompt is roughly equivalent to including a 50-word quality-reduction list.

Use it with SD 1.5 models; it is not compatible with SDXL unless you use an SDXL-specific version of the embedding.

EasyNegative, (worst quality:1.2), bad anatomy, watermark

bad-artist-anime

Specifically trained on low-quality anime art to encode common anime failure modes. Pairs well with anime-focused checkpoints like Anything V5 and Counterfeit.

bad-artist-anime, (worst quality:1.4), (low quality:1.3), bad anatomy

verybadimagenegative_v1.3

An alternative to EasyNegative with slightly different training emphasis. Some users find it works better for photorealistic outputs.

How to Use Embeddings

  1. Download the embedding file (.pt or .safetensors) from Civitai or Hugging Face
  2. Place it in your A1111 embeddings/ folder (or ComfyUI's equivalent)
  3. Reference it by filename (without extension) in your negative prompt
  4. Refresh your embeddings list in the UI if needed

Important: Embeddings are checkpoint-specific. An embedding trained on SD 1.5 data will not work correctly with SDXL. Always check the embedding's model requirements before using it.

What NOT to Put in Negative Prompts (Common Mistakes)

Negative prompts can backfire if used incorrectly. These mistakes are common:

  1. Negating desired subject matter. If your positive prompt includes "forest" and you add "trees" to your negative prompt, you'll fight against yourself. Only put attributes in the negative prompt that you genuinely don't want.
  2. Using concepts the model barely knows. Adding highly specific or obscure terms to negative prompts that the model didn't encounter much in training has little effect and may introduce unexpected biases.
  3. Extremely long negative prompts (200+ words). CLIP's text encoder has a 77-token limit (75 usable tokens plus start/end markers). Text beyond that is simply truncated in standard SD; AUTOMATIC1111 works around the limit by splitting long prompts into 75-token chunks, but terms buried deep in the list still receive less reliable emphasis. Focus on the terms that matter most.
  4. Using positive language in negative prompts. "No bad quality" doesn't work — SD processes the semantic content of words, not their grammatical relationship to "no." Write "bad quality" in the negative field, not "no bad quality."
  5. Over-weighting everything. If every term in your negative prompt has a weight of 1.4+, you've effectively raised the baseline, and none of the weights provide differential emphasis. Reserve high weights for the most persistent problems.
  6. Copying anime negative prompts for photorealistic work. The quality tokens differ between anime and photorealistic workflows. Mixing them can produce unexpected stylistic drift.
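Mistake #1 in particular is easy to catch automatically. The sketch below is a hypothetical helper, not part of any SD tool: it strips A1111-style weight syntax from both prompts and reports any term that appears in both:

```python
import re

def strip_weights(prompt: str) -> set[str]:
    """Split a comma-separated prompt into bare terms, dropping
    parentheses and (keyword:weight) emphasis syntax."""
    terms = set()
    for raw in prompt.split(","):
        term = re.sub(r"[()]", "", raw)      # drop parentheses
        term = re.sub(r":[\d.]+", "", term)  # drop :1.4-style weights
        term = term.strip().lower()
        if term:
            terms.add(term)
    return terms

def conflicts(positive: str, negative: str) -> set[str]:
    """Terms that appear in both prompts and will fight each other."""
    return strip_weights(positive) & strip_weights(negative)

pos = "(masterpiece:1.2), dense forest, trees, morning fog"
neg = "(worst quality:1.4), blurry, trees, watermark"
print(conflicts(pos, neg))  # {'trees'}
```

Exact string matching won't catch semantic overlap (e.g. "forest" vs. "trees"), so this is a first-pass check, not a substitute for reading your own prompts.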

How ImageToPrompt Generates Negative Prompts from Reference Images

When you use ImageToPrompt.dev with Stable Diffusion as the target model, the tool analyzes your reference image and generates both a positive prompt and a contextually appropriate negative prompt.

The negative prompt generation is intelligent rather than template-based. If your reference image is a portrait, the tool emphasizes face and anatomy negatives. If it's a landscape, it skips anatomy terms and focuses on exposure and color negatives. If it identifies an anime style, it switches to anime-appropriate quality tokens.

For example, uploading a reference portrait of a professional headshot and targeting SD produces:

Positive: (masterpiece:1.2), (best quality:1.1), (photorealistic:1.1), professional headshot portrait, man in his 40s, short dark hair, business attire, neutral background, soft studio lighting, sharp focus, 85mm portrait lens

Negative: (worst quality:1.4), (low quality:1.3), bad anatomy, bad hands, (poorly drawn face:1.3), asymmetrical eyes, deformed, ugly, blurry, plastic skin, unnatural skin texture, oversaturation, watermark, signature, text

This saves time compared to manually assembling a negative prompt from scratch, and it's contextually appropriate rather than a generic paste.

[Image: Stable Diffusion output with strong negative prompts — clean anatomy and sharp detail, showing how exclusion terms prevent common artifacts]
[Image: Comparison output — use-case-specific negatives produce even sharper, more targeted results]

Negative Prompt Impact Comparison

| Scenario | Without negative prompt | With basic negative prompt | With optimized negative prompt |
|---|---|---|---|
| Portrait face quality | ~40% acceptable | ~65% acceptable | ~85% acceptable |
| Hand rendering | ~20% correct | ~50% correct | ~70% correct |
| Watermarks/text artifacts | Frequent (~30%) | Rare (~5%) | Very rare (~2%) |
| Extra limbs/anatomy errors | ~35% occurrence | ~15% occurrence | ~8% occurrence |
| Overall usability (no editing needed) | ~25% of generations | ~55% of generations | ~75% of generations |

These figures are approximate and vary by model, subject matter, and seed. But the pattern is consistent: negative prompts significantly reduce the iteration burden. The difference between no negative prompt and an optimized one is typically the difference between needing 10+ generations to get one usable image versus 3–4.

Final recommendation: Start every Stable Diffusion session with a saved negative prompt preset appropriate to your use case. In AUTOMATIC1111, you can save negative prompts as styles. In ComfyUI, use a negative prompt node connected to all samplers. Treating negative prompts as a permanent fixture rather than an optional add-on is one of the most impactful habits in SD workflows.
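If you drive SD through an API or a scripted ComfyUI workflow, the preset habit can live in code too. A minimal sketch, with preset names and term lists abridged from the libraries above:

```python
# Universal base negative prompt (abridged from the lists earlier in this guide).
BASE_NEGATIVE = (
    "(worst quality:1.4), (low quality:1.4), bad anatomy, bad hands, "
    "watermark, signature, text"
)

# Use-case-specific additions (also abridged; extend per the sections above).
USE_CASE_NEGATIVES = {
    "portrait": "asymmetrical eyes, plastic-looking skin, deformed face",
    "landscape": "overexposed sky, blown out highlights, floating objects",
    "anime": "floating limbs, disconnected limbs, jpeg artifacts",
}

def negative_prompt(use_case: str) -> str:
    """Compose the universal base with use-case-specific terms.

    Unknown use cases fall back to the base alone."""
    extra = USE_CASE_NEGATIVES.get(use_case, "")
    return f"{BASE_NEGATIVE}, {extra}" if extra else BASE_NEGATIVE

print(negative_prompt("portrait"))
```

Keeping presets in one place like this makes it trivial to A/B test a change across a whole batch rather than editing prompts by hand in the UI.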