I don't think it's the brain but rather our consciousness that is limited.
Our sensory inputs are always on and processed by the brain, but our consciousness is very picky and also slow.
People can sometimes recall true memories they weren't aware of having, or react to things they never consciously registered.
Consciousness is also somehow lagging behind the actual decision making, but always presents itself as the cause of action.
Sort of like Windows telling you that you removed a USB stick two seconds after you did it and were already well aware of it happening. Consciousness is like that, except it takes responsibility for it too.
When it encounters something that it didn't predict, it'll tell you that "yeah this happened and this is why you did that". Quite often the explanation for doing something is made up after it happened.
This is mostly a good thing, because it allows you to react faster than if you had to weigh your options consciously. You don't need (or have time) to make a conscious decision to dodge a dodgeball, but you'll still think you did.
NAILED IT! Yeah, our subconscious is driving and only sends an executive summary up top. And we think, "I did this!" Nah. You didn't. You are just along for the ride.
People hate this notion because it negates free will. Well, yeah, it kinda does.
Everybody reading these comments and considering the implications needs to go read Blindsight by Peter Watts. It's a first contact story set in the near-ish future, and really goes into consciousness and intelligence. Very thought provoking if you thought this comment chain was interesting.
You can train your subconscious! Well, at least influence its decisions. Videogames are a great example: repeated responses to similar stimuli can create a trained subconscious reaction.
However, I have difficulty, especially now that I'm older, where subconscious and conscious will compete and I lose track of what I actually did.
It is also possible to consciously alter the subconsciousness. For instance, by creating sensory input for yourself by saying things out loud to a mirror. Your ears will hear it, your eyes will see it, and your subconsciousness will then process it just the same as any other experience.
With enough repetition it will make a difference in which neurons are active whenever the brain comes to making a decision on that thing.
> When it encounters something that it didn’t predict, it’ll tell you that “yeah this happened and this is why you did that”. Quite often the explanation for doing something is made up after it happened.
There are interesting stories about tests done with split-brain patients, where the bridge connecting the left and right brain hemispheres, the corpus callosum, is severed. There are then ways to provide information to one hemisphere, have that hemisphere initiate an action, and then ask the other hemisphere why it did that. It will immediately make up a lie, even though we know that’s not the actual reason. Other than being conscious, we’re not that different from ChatGPT: if the brain doesn’t know why something happened, it’ll make up a convincing explanation.
They're very interesting, but also quite spooky, as sometimes they seem to indicate there are two different minds inside your head that are not aware of each other.
If you search for "split brain experiments" you should be able to find more.
You're right. OP's second question is more specifically about vision, while I answered more broadly.
Anyway, comparing it to data from a camera is not really possible.
Analogue vs. digital and so on, but also in the way that we experience it.
The mind's interpretation of vision develops after birth. It takes several weeks before an infant can recognise anything and use the eyes for any purpose. Infants are probably blissfully experiencing meaningless raw sensory inputs before that. All the pattern recognition used to focus on things is a learned feature, and so depends on actually being learned.
I can't find the source for this story, but allegedly there was a missionary in Africa who came across a tribe who lived in the jungle and were used to being surrounded by dense forest their entire lives. He took some of them to the savannah and showed them the open view. They then tried to grab the animals that were grazing miles away. They had never developed a sense of perspective for things at long distances, because they'd never experienced them.
I don't know if it's true, but it makes a point. Some people are better at spotting things in motion or telling colours apart etc. than others.
It matters how we use vision.
Even in the moment: if I ask you to count all the red things in a room, you'll see more red things than you were generally aware of. So the focus is not just the 6° angle or whatever; it's also which patterns your brain is primed to recognise.
So the idea of quantifying vision in megapixels and framerate is kind of useless for understanding either vision or the brain. The two are connected.
Same with sound. Some people have proven able to use echolocation similar to bats. You could blindfold them and they'd still make their way through a labyrinth or whatever.
Testing senses is difficult because the brain tends to compensate like that. It would take a very precise testing method to quantify any particular sense.