It's stupid fast, reliable, and rarely has any conflicts. If it does, it seems to work them out without intervention. I've tried Nextcloud, including the AIO image, and it's just so clunky and slow. I was getting sync errors just on the simple Notes app. Repeatedly. I mean, I get why people like it; it can do way more than Seafile. But as a pure Dropbox replacement, I love it.
The fact that I can reach any file on any device from any other device without syncing EVERYTHING is fantastic. I know Syncthing is also popular, but it seems to require more manual configuration if you want to be selective about what syncs.
I will say, I've tried and failed numerous times to get the Collabora CODE and S3 storage integrations to work with Seafile, and that is a nightmare, at least for me. I cannot get my head around it. But standing up Seafile itself was fairly easy.
Does anyone else use it? If so, have you tried the CODE and/or multiple storage backend integrations?
Their non-standard way of storing files, which makes them basically inaccessible without Seafile, is a disaster waiting to happen. With Nextcloud, at least, I can do normal filesystem-level backups and access the files like any others if I really need to.
I really want to use Syncthing for something; I just haven't figured out what yet. The only thing I can think of is syncing game saves from my Steam Deck for non-Steam games, since they don't have cloud saves.
Sorting pictures on my PC and having them show up right in the gallery on my phone.
Grabbing a PDF while doomscrolling on my commute and having it instantly on my PC when I'm back home.
I am currently migrating away from TiddlyWiki because I want my notes integrated into my plain-file life as well.
I have a Mealie instance running on a VPS. It has a backup function built in, but it just dumps a .zip locally; I could leverage Syncthing to send that over to my server. Other than that, what you described is exactly how I use Seafile. I have the documents folders on all my PCs and my phone synced. I had to print something off downstairs and didn't want to go get the laptop upstairs to either send it to myself or print from the laptop, so Seafile just let me reach into the server and pull it down via my Linux desktop client.
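For what it's worth, here's a minimal sketch of how I'd wire that Mealie idea up, assuming a cron job on the VPS; the backup path and the Syncthing folder are placeholders, not my actual layout:

```sh
#!/bin/sh
# Hypothetical sketch: copy Mealie's newest backup .zip into a
# Syncthing-shared folder so it replicates to the home server.
# Both paths are assumptions; adjust to your install.
LATEST=$(ls -t /opt/mealie/data/backups/*.zip | head -n 1)
cp "$LATEST" /srv/syncthing/mealie-backups/
```

Run it from cron shortly after Mealie's scheduled backup, and Syncthing handles the rest.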
Yeah, I see. However, in my use case I don't always have access to my server (which is also the case with Syncthing), plus the mentioned issue that the Seafile data structure doesn't let you retrieve the files without Seafile.
I mentioned it in another comment, but you can use rclone to mount the Seafile data structure, and at least in my testing it works really well. I'll have to test with more data and, of course, remote data. If I ever get the Backblaze B2 backend working, then I could more easily test a use case where I didn't have access to the server, like you're talking about. I have had great success with rclone mount with Dropbox, but those are not chunked files. :)
I do wonder if the folks who are hesitant to use it because of the chunked files are also avoiding apps like Borg backup or Duplicacy, both of which also chunk the data. I believe in both cases you can still mount the archives as whole files for retrieval (Borg, for instance, has its own borg mount command).
I'm not sure I can follow you.
You mean you use rclone to clone the Seafile database to your phone, and then nonetheless use Seafile on the phone to access it?
I mentioned rclone and its mount function because it's an alternate method of accessing Seafile's backend. So if the Seafile clients and web interface are somehow inaccessible, you can use rclone mount to "reassemble" the chunked data and then recover it or copy it to another location as needed.
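For the curious, it looks roughly like this. A sketch under assumptions: "myseafile" is a remote you'd create with rclone config using rclone's native seafile backend, pointed at your server URL and credentials.

```sh
# Mount the Seafile libraries as ordinary files. "myseafile" and the
# paths are assumptions; adjust to your own remote and layout.
mkdir -p /mnt/seafile-recovery
rclone mount myseafile: /mnt/seafile-recovery --read-only &

# The mount presents the chunked libraries fully assembled,
# so normal tools work:
cp -r "/mnt/seafile-recovery/Documents - Pop" /tmp/recovered/
```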
The best way I can describe the phone example is that each Seafile client is a portal to the data on the Seafile server. I have it set up like this:
Documents - MBP (macbook pro)
Documents - Note10Plus (my phone)
Documents - Pop (primary desktop running Pop!_OS)
From my phone I can pull any data in any of the three libraries without needing to sync the entire thing to each device, which is what Syncthing wants to do by default. I understand there is an ignore function, but from what I can tell you'd have to manually mark quite a few folders so you don't sync all the data to each client.
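To be fair, the bookkeeping isn't huge; here's a sketch of what the .stignore approach would look like on one device, with the folder names assumed from my layout above:

```sh
# A sketch of the per-device ignore file. Syncthing reads .stignore
# from the root of the synced folder; "!" means keep, "*" ignores
# everything else, evaluated top-down. Paths are assumptions.
cat > "/srv/syncthing/Documents/.stignore" <<'EOF'
!/Documents - Note10Plus
*
EOF
```

The catch is you have to maintain a file like this on every device, which is the manual settings part I was getting at.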
One scenario I tested last night was using rclone mount on the server, which "un-chunks" the data back into whole, flat files and mounts it in a temporary folder. I then used rclone to copy that to a Backblaze B2 bucket, which now has fully assembled flat files sitting as a backup in B2 storage. My thought is to script that, because damned if I can't seem to get database dumps to work properly when performing backups on pretty much any self-hosted product that uses them. Still learning, though.
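Something like this sketch is what I have in mind; the remote names ("myseafile", "b2") and paths are assumptions from my setup, not a finished script:

```sh
#!/bin/sh
# Mount the Seafile remote, copy the assembled files to B2, unmount.
MNT=/tmp/seafile-flat
mkdir -p "$MNT"
rclone mount myseafile: "$MNT" --read-only --daemon

# Copy the fully assembled files into the B2 bucket:
rclone copy "$MNT" b2:my-bucket/seafile-backup

# Unmount when finished:
fusermount -u "$MNT"
```

In principle rclone can also copy remote-to-remote (rclone copy myseafile: b2:my-bucket/seafile-backup) and skip the intermediate mount entirely.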
That is probably way more info than you needed to answer your question, sorry about that.
You don't need to put it back. Rclone mount is another "portal" to the Seafile data, but fully assembled. It mounts to a folder you specify; then you reach in and pull anything you might need if all of the Seafile clients and the web app are down.
They do have their own FUSE tool (seaf-fuse) that will assemble the data as well, but I've not tried it since rclone works great and has a ton of support. It's fantastic.
I get that hesitancy, but I see two ways of addressing it: Seafile has its own FUSE mount, and it also works with rclone's mount function. The way I've been doing it, though, is pointing the iDrive client on my Windows desktop at the SeaDrive client. Since each client sees fully assembled files (versus the git-like chunks that live server-side), it backs up the flat files to my iDrive account without pulling every single file down to the Windows client. Note I'm not trying to convince you, just letting it be known there are options and they work. I did have a cronjob that used rclone to mount and then back up the data from the server running Seafile to my Backblaze buckets, but I want to revisit it and look at something like Borg first; my hope is to take up less space on the B2 side of things.
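If I go the Borg route, it would probably look something like this sketch, assuming the assembled files are visible at a mount point via seaf-fuse or rclone mount; the repo path and mount point are assumptions:

```sh
# Back up the assembled Seafile files into a deduplicated Borg repo.
borg init --encryption=repokey /backups/seafile-borg    # one time
borg create --stats /backups/seafile-borg::'{now}' /mnt/seafile-flat
borg prune --keep-daily 7 --keep-weekly 4 /backups/seafile-borg
```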
EDIT: I just had a look again because I started doubting myself that rclone mount worked for this purpose. I have a bit of a bad memory and apparently didn't write this down. But yes, it does work. The rclone config is pointed at your Seafile domain (even on the same server, as is the case with mine), then: rclone mount <remote-name>: /path/to/mount/location. I'll have to double-check once I get more than a few gigs in my Seafile libraries, but it works so nicely in this case. Kinda defeats the purpose of the chunking though, doesn't it? My understanding is that it's there for effective deduplication.
This is one of the reasons I passed on Pydio Cells. I access files added to Nextcloud via some external services (e.g. music) and make extensive use of external (local) storage.
If another service touted the speeds of Seafile/Pydio while still storing plain, flat files, I would jump.
There is no need to spread FUD like that. Their "disaster waiting to happen" way of chunking and saving files is actually what makes it superior to Nextcloud for my use case and many others. Without chunking on upload and download, you'd need to sync a whole file all over again if a single bit changed; by chunking and indexing every file, you get the benefit of delta sync. On top of that you get versioning, which ironically can be used as a kind of backup function at the file level.
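To make the delta sync point concrete, here's a toy illustration; I'm using fixed-size chunks for simplicity, whereas Seafile's real scheme is content-defined chunking:

```sh
# Chunk a file and hash each chunk.
split -b 1M bigfile.bin chunk.
sha256sum chunk.* > hashes-before.txt

# ...change one byte somewhere in bigfile.bin, then re-chunk...
split -b 1M bigfile.bin chunk.
sha256sum chunk.* > hashes-after.txt

# Only chunks whose hashes changed need to be transferred again:
diff hashes-before.txt hashes-after.txt
```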
Besides that, you can do proper backups of the Seafile data repositories and the database for disaster recovery, or use the FUSE mount for file-level backups.
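For reference, a sketch of what that disaster-recovery backup might look like, following the general pattern in Seafile's docs; the database names, user, and paths are assumptions for a typical MySQL install:

```sh
# Dump the three Seafile databases, then copy the data directory.
mysqldump -u seafile -p ccnet_db   > /backups/ccnet_db.sql
mysqldump -u seafile -p seafile_db > /backups/seafile_db.sql
mysqldump -u seafile -p seahub_db  > /backups/seahub_db.sql
rsync -a /opt/seafile/seafile-data/ /backups/seafile-data/
```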