I think one of the biggest issues with FOSS-minded people is that they automatically consider open source software private, safe, and well-intentioned, but they never actually go beyond the surface to check whether it is.
Most people who use FOSS are not qualified to check source code for ill intent (like me) and rely on people smarter than them (and me) to review the code and find any problems. FOSS isn't automatically private, safe, or well-intentioned, but when it falls short, at least the code is transparent and the review process is open to all. Commercial software has no public review and zero transparency.
The problem is that quite often everything rests on the belief that someone else is there to check. Even when some users are qualified to do it, they rarely have the time to audit the entire codebase and then keep reviewing it through every update.
Good point and worth considering. For the more popular projects, though, it's likely someone somewhere is looking at the code, and even the threat of discovery is enough to discourage malfeasance. And in either case, it's better to have that observability than a black-box system with no possibility of checking it.
Why can't proprietary software be private or secure? You cannot verify it for yourself, but nothing about the licensing model precludes it. In highly regulated industries (such as health care or banking), I would expect software vendors to invest heavily in security.
But you can't legally modify it and distribute your modified version. You can't fix a vulnerability and share the patched version with others. Only the developer can, so you are at their mercy. If they add spyware to the program, users can't do anything about it.