Not really, they'd just need to exist in a dimension above ours. We can readily observe the lower three dimensions of length, width and depth because we exist in the one above that - duration. We can't observe time except by passing through it point by point. A being capable of observing actual timelines would have to do so from a vantage point above them.
The extra eyes and wings are just them being a fucking showoff.
You need one eye to see 2D. You need two eyes to see 3D. Presumably, you need 3+ eyes to see in 4D. Don't conflate spatial dimensions with the temporal one, it's apples and oranges.
Most of the "3D" we see is made up by our brains. For evidence of this, look at a photograph: it's completely flat, yet you can still tell how far away things in it are.
Having eyes spaced apart does help us judge the distance to nearby things, but only over a short range. Our brains also track the parallax and occlusion of numerous objects, which helps over longer distances and works just fine with one eye.
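To see why spaced-apart eyes only help at short range, here's a minimal sketch using the standard stereo triangulation formula, depth = focal * baseline / disparity. The focal length and baseline are made-up illustrative numbers, not measurements of real eyes:

```python
# Stereo triangulation: depth = focal_length * baseline / disparity.
# All numbers here are hypothetical, chosen only for illustration.

def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Estimate distance to a point seen by two 'eyes' a baseline apart."""
    return focal_px * baseline_m / disparity_px

focal_px = 800.0    # focal length in pixels (hypothetical)
baseline_m = 0.065  # ~6.5 cm between the eyes

# The same 1-pixel disparity error matters far more at long range:
for disparity in (100.0, 10.0, 1.0):
    near = depth_from_disparity(focal_px, baseline_m, disparity + 1)
    far = depth_from_disparity(focal_px, baseline_m, disparity)
    print(f"disparity {disparity:>5.0f}px -> depth {far:.2f} m "
          f"(a 1px error spans {far - near:.2f} m)")
```

At a disparity of 100 px the target is about half a meter away and a one-pixel error barely matters; at 1 px of disparity the same error changes the estimate by tens of meters, which is why stereo vision stops being useful past a few meters.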
I think there are two ways eyes could work in higher spatial dimensions: you could either have an n-dimensional eye, which perceives an (n-1)-dimensional image, with an understanding of "distance" used to fill in the remaining information, or (which may just be my own 3D-ness showing) you could have several 3D eyes pointed in different directions, each perceiving a different 2D image, with enough overlap to fully see the n-dimensional space. That would take n-1 eyes to properly see.
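The first option, an n-dimensional eye perceiving an (n-1)-dimensional image, can be sketched with ordinary perspective projection: divide the other coordinates by the depth along the viewing axis, the same trick a 3D pinhole camera uses to make a 2D image. This is just a toy illustration of the idea, not a claim about real higher-dimensional optics:

```python
# Hedged sketch: a perspective "eye" in n dimensions projects an n-D
# point onto an (n-1)-D retina by dividing by depth along the view axis.

def project(point, focal=1.0):
    """Project an n-D point (last coordinate = depth) to n-1 image coords."""
    *rest, depth = point
    if depth <= 0:
        raise ValueError("point is behind the eye")
    return tuple(focal * c / depth for c in rest)

# A 3D eye turns a 3D point into a 2D image coordinate...
print(project((2.0, 1.0, 4.0)))       # -> (0.5, 0.25)
# ...and a hypothetical 4D eye would turn a 4D point into a 3D one.
print(project((2.0, 1.0, 4.0, 4.0)))  # -> (0.5, 0.25, 1.0)
```

The same function handles any n, which is the point: the eye only ever records one fewer dimension than the space it lives in.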
You can see depth with a single eye; you just need to move it (motion parallax).
Two eyes in animals are used either to get extra viewing angle (in a cow, for instance), to give instant depth information (in a human or tiger, for example), or both (in dragonflies).
You can get depth information from parallax, which can come from either capturing multiple moments or using multiple viewpoints. IDK if I would call this seeing in 3D, as you still only see 2D surfaces, just with an additional data point of depth. Think of it like an array of data: with one eye, you get res^2 * (r+g+b) data points; with two, you get res^2 * (r+g+b + r+g+b + d), versus actual 3D, which would be res^3 * (r+g+b). Having 3 eyes just means you can estimate depth more accurately. Of course, in real animals with many eyes, the eyes serve different purposes, such as having a different field of view, resolution, color perception, etc.
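Plugging a number into that array comparison shows just how big the gap is. The resolution and three color channels are illustrative assumptions, not properties of any real eye:

```python
# Data captured under the comparison above: one 2D retina, two 2D
# retinas plus a depth estimate, and a true volumetric "3D retina".
res = 1000  # pixels per side (hypothetical)

one_eye  = res**2 * 3  # res^2 * (r+g+b)
two_eyes = res**2 * 7  # res^2 * (r+g+b + r+g+b + d)
true_3d  = res**3 * 3  # res^3 * (r+g+b)

print(f"one eye:  {one_eye:>13,}")   # 3,000,000
print(f"two eyes: {two_eyes:>13,}")  # 7,000,000
print(f"true 3D:  {true_3d:>13,}")   # 3,000,000,000
print(f"true 3D / two eyes = {true_3d / two_eyes:.0f}x")
```

Adding a second eye barely doubles the data, while a genuinely volumetric view multiplies it by another whole factor of the resolution, which is why stereo vision is nothing like actually seeing in 3D.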
I think on the level of physics, there might be enough information in the photo to describe what’s under the car actually, but I don’t know enough about photons or physics lol. Bless the day
What...? You literally cannot see the bottom of the car. It's a 3D object. You cannot see all sides of a 3D object. You can only see up, down, left, and right.
Our eyes can also see forward and backward, so we can perceive 3D, I believe, and then 4D one present moment at a time.
But I'm saying it is possible, even with something like sonar, to make a map of a thing that is on the other side of something else. That's sound waves, but we know the light information is there to make a similar map using light. If we could see that information in real life, we might be able to use the photons captured in this image to understand what's under the car.
Yea, see, what you just said doesn't make sense. Our eyes have depth perception based on the shadows in the 2D images they see... We don't have some kind of magical infrared sight.