I'm still reading into it. Why is it so closely tied to Apache? Is Apache the only one pushing it? Meaning, if Apache drops it, would there be no interest from others to push it further?
It's published under the Apache License (the same license Hadoop uses), which is permissive. Is there a drawback to that license?
Do you use it? When?
I assume CSV is sufficient for sharing small data. I also assume CSV is more accessible than Parquet.
In the deep learning community, I know of someone using Parquet for datasets and annotations. It lets you select which data you want to retrieve from the dataset and stream only that, nothing else. It's quite effective if you have many different annotations for different use cases and want to pull only the ones your application needs.
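As a rough illustration of that column selection, here's a minimal sketch using pyarrow; the file name and column names are made up for the example, not from any real dataset:

```python
import pyarrow.parquet as pq

# Read only the columns we actually need for this use case.
# Parquet is columnar, so the other annotation columns are never read from disk.
table = pq.read_table(
    "annotations.parquet",            # hypothetical dataset file
    columns=["image_id", "bbox"],     # hypothetical annotation columns
)

df = table.to_pandas()
print(df.head())
```

With CSV you'd have to parse every row in full and throw away the columns you don't care about; here the unused annotations are simply skipped.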
GraphQL is a protocol for interacting with a remote system; Parquet is about having a local file that you can index and retrieve data from more efficiently. It's especially useful when the data has a fairly well-defined structure but may be large enough that you can't, or don't want to, bring it all into memory. They're similar concepts, but different applications.
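To make the "don't bring it all into memory" point concrete, a small sketch with pyarrow that streams a Parquet file in batches; file name, column names, and the process() handler are placeholders:

```python
import pyarrow.parquet as pq

def process(batch):
    # Placeholder: do something with one chunk of rows.
    print(batch.num_rows)

pf = pq.ParquetFile("events.parquet")  # hypothetical large file

# Iterate over the file in fixed-size record batches, reading only two columns,
# so memory use stays bounded regardless of how big the file is.
for batch in pf.iter_batches(batch_size=10_000, columns=["user_id", "ts"]):
    process(batch)
```

That's the local-file, pull-what-you-need side of it, as opposed to GraphQL, where a remote server decides what to send you.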