It's stupid fast, reliable, and rarely has any conflicts, and when it does, it seems to work them out without intervention. I've tried Nextcloud, including the AIO image, and it's just so clunky and slow. I was getting sync errors repeatedly just using the simple Notes app. I get why people like it, it can do way more than Seafile. But as a pure Dropbox replacement, I love Seafile.
The fact that I can reach any file on any device from any other device without syncing EVERYTHING is fantastic. I know Syncthing is also popular, but it seems to require more manual configuration if you want to be selective about what syncs.
I will say, I've tried and failed numerous times to get the Collabora CODE and S3 storage integrations working with Seafile, and that is a nightmare, at least for me. I can't get my head around it. But standing up Seafile itself was fairly easy.
Does anyone else use it? If so, have you tried the CODE and/or multiple storage backend integrations?
I mentioned it in another comment, but you can use rclone to mount the Seafile data structure, and at least in my testing it works really well. I'll have to test with more data and, of course, remote data. If I ever get the Backblaze B2 backend working, then I could more easily test a use case where I don't have access to the server, like you're talking about. I've had great success with rclone mount with Dropbox, but those are not chunked files. :)
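Roughly what that looks like on my end, once you've set up a seafile remote with `rclone config` (the seafile backend just needs your server URL and login). The remote name and mount point here are placeholders from my own setup:

```bash
# mount the whole Seafile remote read-only; each library shows up as a
# normal folder of fully assembled files ("seafile" is just my remote name)
mkdir -p /mnt/seafile-flat
rclone mount seafile: /mnt/seafile-flat --read-only --daemon
```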
I do wonder if folks who are hesitant to use it because of the chunked files are also not using apps like Borg or Duplicacy, both of which also chunk the data. I believe in both cases you can still leverage rclone to mount them as whole files for retrieval.
I'm not sure I follow you.
You mean you use rclone to clone the Seafile database to your phone, and then still use Seafile on the phone to access it?
I mentioned rclone and its mount function as an alternate method of accessing Seafile's backend. So if the Seafile clients and web interface are somehow inaccessible, you can use rclone mount to "reassemble" the chunked data and then recover it or copy it to another location as needed.
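As a concrete example of what I mean by "recover or copy to another location": once the rclone remote is pointed at your server, you can pull a whole library back out as plain files. The library name and destination here are just examples from my own setup:

```bash
# copy one library out of Seafile as ordinary flat files
rclone copy seafile:"Documents - MBP" /tmp/recovered-docs --progress
```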
The best way I can describe the phone example is that each Seafile client is a portal to the data on the Seafile server. I have it set up like this:
Documents - MBP (macbook pro)
Documents - Note10Plus (my phone)
Documents - Pop (primary desktop running Pop!_OS)
From my phone I can pull any data in any of the three libraries without needing to sync the entire thing to each device, which is what Syncthing wants to do by default. I understand there is an ignore function, but from what I can tell you'd have to manually mark quite a few folders so you don't sync all data to each client.
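For what it's worth, my rough understanding (I haven't actually run Syncthing this way, so take it with a grain of salt) is that you'd do it with a .stignore file per device, un-ignoring the folders you want and ignoring the rest, something like:

```bash
# my rough, untested understanding of Syncthing's ignore patterns:
# keep the one subfolder this device cares about, skip everything else
# (the shared-folder path is a placeholder)
cat > /path/to/shared-folder/.stignore <<'EOF'
!/Documents - Note10Plus
*
EOF
```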
One scenario I tested last night was using rclone mount on the server, which "un-chunks" the data back into whole, flat files and mounts it in a temporary folder. I then used rclone to copy that to a Backblaze B2 bucket, which now has fully assembled flat files sitting as a backup in B2 storage. My thought is to script that, because damned if I can get database dumps to work properly when backing up pretty much any self-hosted product that uses them. Still learning, though.
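This is roughly the script I have in mind. The "seafile" and "b2" remote names and the bucket name are placeholders for whatever you've set up in rclone config:

```bash
#!/usr/bin/env bash
# sketch of the mount-then-backup idea, not a polished script
set -euo pipefail

MNT=/tmp/seafile-flat          # temporary mount point (placeholder path)
mkdir -p "$MNT"

# mount the Seafile remote read-only so we see whole, assembled files
rclone mount seafile: "$MNT" --read-only --daemon

# push the flat files into a B2 bucket ("my-seafile-backup" is made up)
rclone copy "$MNT" b2:my-seafile-backup --progress

# unmount when done
fusermount -u "$MNT"
```

rclone can also copy remote-to-remote directly (rclone copy seafile: b2:my-seafile-backup), so the mount step is mostly so I can eyeball the assembled files before they go out.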
That is probably way more info than you needed to answer your question, sorry about that.
You don't need to put it back. Rclone mount is another "portal" to the Seafile data, but fully assembled. It mounts to a folder you specify, and then you can reach in and pull anything you might need if all of the Seafile clients and the web app are down.
They do have their own tool, seaf-fuse, that will assemble the data as well, but I've not tried it since rclone works great and has a ton of support. It's fantastic.
I have an HP MicroServer Gen8 running about 22 Docker containers, including Seafile. I've read folks have had good success with it on a Raspberry Pi, but I haven't tried that myself. I ought to spin one up on one I have that's not doing anything.
I'm a recent convert to Rclone. I've struggled with other CLI backup tools like Borg and Restic, but rclone is very approachable.