But that's not how it works. At least not currently: you have to choose the right model and maybe a LoRA, then choose your settings carefully and try multiple times, and after that you often have to use img2img or a classic editor to fix things (rough sketch of what that pipeline looks like below).
And that's leaving out the main offender: prompt engineering, which is genuinely hard and a very iterative process.
It's really not as simple as you make it out to be.
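To make that concrete, here is a minimal sketch of the kind of pipeline I mean, assuming the Hugging Face diffusers library, a CUDA GPU, and a Stable Diffusion 1.5 checkpoint. The LoRA file, prompts, seeds, and parameter values are all placeholders for the choices you end up iterating on, not a recipe:

```python
# Rough sketch of a typical text-to-image workflow with diffusers.
# Model IDs, the LoRA file, and the parameter values are placeholders.
import torch
from diffusers import StableDiffusionPipeline, StableDiffusionImg2ImgPipeline

# 1. Pick a checkpoint (and optionally a LoRA) that suits the subject/style.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("./loras", weight_name="food_style.safetensors")  # hypothetical LoRA

# 2. Tune the sampling settings and try several seeds.
prompt = "a salad arranged to look like a koi pond, food photography"
negative_prompt = "blurry, deformed, low quality"
candidates = [
    pipe(
        prompt,
        negative_prompt=negative_prompt,
        num_inference_steps=30,
        guidance_scale=7.5,
        generator=torch.Generator("cuda").manual_seed(seed),
    ).images[0]
    for seed in (1, 2, 3, 4)
]

# 3. Refine the best candidate with an img2img pass
#    (or take it into a normal image editor instead).
img2img = StableDiffusionImg2ImgPipeline(**pipe.components)
refined = img2img(
    prompt,
    image=candidates[0],
    strength=0.4,  # how much this pass is allowed to change the image
    guidance_scale=7.5,
).images[0]
refined.save("salad.png")
```

Every one of those lines hides a decision (which checkpoint, which LoRA weight, how many steps, what strength), and you usually loop over this several times before anything looks right.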
The point is that I was like "wow, someone made a salad to look like this! that looks like a fun food gimmick :)" and then I saw how weird and blurry the onions were, and I was sad because it wasn't an actual salad.
Art also isn't valuable based on the work put into it; that's not the argument I was making. It's about the stylistic choices made during its creation. You can't touch up the background or fix the lighting or change the brush with an AI, because you are putting words into a box. You can choose the words and choose the box, but nothing more.
You definitely can change anything you want about the image you are generating; that's what the commenter above is saying. There are many tools that can edit the image during or after generation, and you can even take it into any image editor and back again for even more fine-tuning. The point of these AIs isn't to completely replace the artistic process but to enhance it with new tools, the same way digital art did when it was first introduced to the art world.
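For example, inpainting lets you regenerate just one region (say, those blurry onions) while leaving the rest of the image alone. A minimal sketch with diffusers, assuming an inpainting checkpoint; the file names, mask, and prompt are hypothetical:

```python
# Targeted post-generation editing via inpainting (placeholder files/prompt).
import torch
from diffusers import StableDiffusionInpaintPipeline
from PIL import Image

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")

image = Image.open("salad.png").convert("RGB")      # the generated image
mask = Image.open("onion_mask.png").convert("RGB")  # white = region to repaint

# Only the masked region is regenerated; the rest of the image is preserved,
# which is how you touch up one element without redoing the whole piece.
fixed = pipe(
    prompt="crisp, thinly sliced red onion rings on a salad",
    image=image,
    mask_image=mask,
    num_inference_steps=30,
).images[0]
fixed.save("salad_fixed.png")
```

That's the "touch up the background / fix one detail" workflow: you're still making localized, deliberate choices, just with a different tool.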