Annoyingly, I can’t seem to get Bing to generate an image that isn’t square.
My experience with Stable Diffusion -- which is trained on square images -- has been that one is generally better off generating a square image to get the initial scene, then cropping, upscaling (something generative AIs do well, and which I'd guess Bing can probably do, though I don't know for sure), and possibly outpainting as a way of getting more pixels and the desired aspect ratio.
Non-square images have been more prone to artifacts like weird mutant monsters with lots of legs, though I did try a run with my current model (based on SDXL) on a non-square image and it seemed to work all right, so I don't know whether things have improved here or whether I just got lucky.
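The "generate square, then crop" step is easy to script. Here's a minimal sketch using Pillow, assuming you've already saved the square generation; `center_crop_to_aspect` is a hypothetical helper name, not part of any tool mentioned above:

```python
from PIL import Image

def center_crop_to_aspect(img, target_w, target_h):
    """Center-crop an image to a target aspect ratio (hypothetical helper)."""
    w, h = img.size
    target_ratio = target_w / target_h
    if w / h > target_ratio:
        # too wide: trim equal amounts from left and right
        new_w = int(h * target_ratio)
        left = (w - new_w) // 2
        box = (left, 0, left + new_w, h)
    else:
        # too tall: trim equal amounts from top and bottom
        new_h = int(w / target_ratio)
        top = (h - new_h) // 2
        box = (0, top, w, top + new_h)
    return img.crop(box)

# e.g. a 1024x1024 square generation cropped down to 16:9
square = Image.new("RGB", (1024, 1024))
wide = center_crop_to_aspect(square, 16, 9)
print(wide.size)  # (1024, 576)
```

You'd then upscale and/or outpaint the cropped result to recover the lost pixels.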
Maybe try to cheat it: add something like "... with a large black border on top and bottom" (or left/right, depending on what you want) to the prompt, and then manually crop the result.
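If that prompt trick works, cropping the borders away can even be automated. A minimal sketch with Pillow, assuming the borders come out near-black; `trim_black_borders` and the `threshold` parameter are my own illustrative names:

```python
from PIL import Image

def trim_black_borders(img, threshold=16):
    """Crop uniform near-black borders by finding the bounding box of
    pixels brighter than `threshold` (hypothetical helper)."""
    gray = img.convert("L")
    # map near-black pixels to 0 so getbbox() ignores the borders
    mask = gray.point(lambda p: 255 if p > threshold else 0)
    bbox = mask.getbbox()
    return img.crop(bbox) if bbox else img

# simulate a letterboxed result: 1024x1024 with 224-px black bars top/bottom
img = Image.new("RGB", (1024, 1024), "black")
img.paste(Image.new("RGB", (1024, 576), "white"), (0, 224))
cropped = trim_black_borders(img)
print(cropped.size)  # (1024, 576) -- the 16:9 content without the bars
```

In practice the model may not render perfectly uniform bars, so a little tolerance in the threshold (or just eyeballing the crop) is probably needed.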