So, having been doing this for a while and reading around a lot, I have noticed there is a faction of SD users who swear, very heatedly, that negative prompts for anatomical accuracy (the ones listing what you do not want, like extra fingers, bad hands, deformed, etc.) are completely useless, and that any desirable results are just observer bias or pure chance.
Their rationale is that the models are trained on image-caption pairs, and nobody writes things like "deformed hands" or "multiple fingers" in those captions, because the training sets are made up of desirable images. So, if you cannot prompt FOR "deformed hands" because no such captioned image exists, it stands to reason that you cannot EXCLUDE "deformed hands" either, since the concept should not be recognisable to the AI at all.
Having seen the images on which a number of LoRAs are trained, this seems true for LoRAs. But is it true for the checkpoint models as well?
Or do the larger models have a way to parse this kind of parameter?
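For context, this is (as far as I understand it) where the negative prompt actually goes in code; a minimal sketch assuming the Hugging Face diffusers library and the runwayml/stable-diffusion-v1-5 checkpoint, both just illustrative choices. The negative prompt is encoded by the same text encoder as the positive prompt and is used as the "unconditional" branch of classifier-free guidance, so each denoising step is pushed away from that embedding rather than searching for training captions that literally say "deformed hands".

```python
import torch
from diffusers import StableDiffusionPipeline

# Load an SD 1.5 checkpoint (illustrative model choice, requires a CUDA GPU here).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    prompt="portrait photo of a person waving, detailed hands",
    negative_prompt="deformed hands, extra fingers, mutated limbs",
    guidance_scale=7.5,        # each step: uncond + scale * (cond - uncond),
                               # where "uncond" comes from the negative prompt
    num_inference_steps=30,
).images[0]
image.save("waving.png")
```

Whether steering away from that embedding actually fixes hands in practice is exactly the part I am asking about.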