Nothing wrong with that!
Just saying that not everything necessarily has to be about money.
Also, knowledge sharing has been critical for the advancement of human civilization. Imagine if scientists were to sell their research instead of publishing it(*): where would we be today?
(*) I mean, you might have to pay to read those publications, but many are literally free, and you can ask the authors for a free copy in most cases....
I keep a wiki on all that I do.
This is the page on Radicale: here
This is the more general page on reverse proxying: here
And so on. Check the sidebar.
I mostly write it so that in the future I remember what I did and how I did it, but keep in mind that I use some unusual techniques compared to the mainstream approach in this community.
Side hustles should be hobbies, done without any need to monetize them.
What the fuck, your job should be enough to support you and let you live, which includes free time to enjoy your life and hobbies.
But I understand, and more than once in my life I had to look for side hustles.
Gentoo Linux with Radicale on bare metal. Radicale sits behind an NGINX reverse proxy that slaps HTTPS and an Authelia redirect for authentication on top.
And of course I use DAVx⁵ on Android.
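For reference, the NGINX side is roughly this shape (hostnames and paths are examples, not my exact setup, and I've left out the Authelia auth_request snippet for brevity):

```bash
# Rough sketch of the vhost (example names; see the Authelia docs for the full proxy config):
cat > /etc/nginx/conf.d/radicale.conf <<'EOF'
server {
    listen 443 ssl;
    server_name dav.example.org;

    ssl_certificate     /etc/letsencrypt/live/dav.example.org/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/dav.example.org/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:5232/;  # Radicale's default port
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
EOF
nginx -t && nginx -s reload   # validate the config, then reload
```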
Never had an issue with CalDAV and CardDAV. Maybe you are using broken servers or clients?
Even notes can be done efficiently on those standards...
But hey, I am using only FOSS servers and clients; maybe you are referring to proprietary ones? You know, the ones made by vendors who have no interest in interoperability?
Home Assistant with a bunch of ZigBee sensors?
Go HTTPS: today there is no real reason not to, and tons of good reasons to do it.
Let's Encrypt is 100% free, and with their certbot it's also automated and easy to do.
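If you have never done it, it really is a couple of commands (the domain name is an example):

```bash
# Issue a certificate and let certbot patch the NGINX config for you:
certbot --nginx -d myserver.example.org
# Renewal is handled by certbot's own cron job / systemd timer; verify it with:
certbot renew --dry-run
```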
The open-source market for smartwatches sucks. Gadgetbridge doesn't really work for any "new" devices (I might be wrong here).
Luckily, Garmin watches are top notch and the software side is pretty good. You can even skip installing the app and manually download all the activity files via USB if you like.
That's really true. I was unlucky enough to lose data, but lucky enough to be able to recover it. Very lucky.
And that's when you find out you are not really backing up enough!
All my git repos are on my server, not public. They are then backed up with restic, encrypted.
Only the public keys are backed up though; for the private ones, I prefer having to regenerate them rather than risking them getting stolen.
I mean the public keys you add in Forgejo for git push and such.
Tell me how that would have helped at all? Can ZFS unformat a drive? I don't think so...
ZFS is not backup, guys. Snapshots, too, are not backup!
I feel you man! Lessons are really learnt only the hard way!
Many suggest ZFS; I want to put in a word for ext4 instead. Solid, reliable, well proven. It does the job and works pretty well.
Been on ext4 on RAID1 for decades, since it got stable. Never had an issue, except when I borked it through my own mistake.
It has maybe fewer features than ZFS, but it doesn't need out-of-tree kernel modules or complex tools, and again, it's solid, well proven and very stable.
Edit: ext4 on top of Linux software raid (mdadm)
Well, here is my story; may it be useful to others too.
I have a home server with a 6TB RAID1 (OS on a dedicated NVMe). I was playing with a BIOS update and adding more RAM, and out of the blue, after the last reboot, my RAID was somehow shut down unclean and needed a fix. I probably unplugged the power cord too soon while the system was still shutting down containers.
Well, no biggie, I'll just run fsck and mount it. So there it goes: "mkfs.ext4 /dev/md0"
Then I quickly hit "y" when it said "the partition contains an ext4 signature blah blah". I was in a hurry, so...
Guess what? Now read that command again, carefully.
I hit Ctrl+C, but it was already too late. I could recover some of the files, but many were corrupted anyway.
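For anyone skimming, the whole difference is one word:

```bash
fsck.ext4 -f /dev/md0   # what I meant to run: check and repair the filesystem
mkfs.ext4 /dev/md0      # what I actually ran: create a NEW filesystem, wiping the old one
```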
Lucky for me, I was able to recover 85% of everything from my backups (restic+backrest to the rescue!), recreate another 5% (mostly docker compose files located in odd, non-backed-up folders), and recover the last 10% from the old 4TB drive I had replaced to increase space some time ago. Luckily, that was never-changing old personal stuff that I would have regretted losing, but didn't consider critical enough to back up.
The cold shivers I had before I checked my restic backups and discovered that I hadn't actually postponed backing up those additional folders...
Today I will add another layer of backup in the form of an external USB drive to store never-changing data like... my ISOs...
This was my backup strategy up to yesterday, with backrest automating restic (a rough sketch of the restic side follows the lists below):
- 1 local backup of the important stuff (personal data mostly)
- 1 second copy of the important stuff on a USB drive connected to an OpenWrt router on the other side of the home
- 1 third copy of the important stuff on a remote VPS
And since this morning I have added:
- a few git repos (pushed, and backed up with the important stuff) with all the docker compose files, keys and such (the 5%)
- an additional local USB drive where I will back up ALL files, even that 10% which never changes and isn't "important", but which I would miss if I lost it.
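The restic side of it boils down to something like this (repository paths are examples; backrest just schedules these runs for me):

```bash
restic -r /mnt/backup/restic init                                # once per repository
restic -r /mnt/backup/restic backup /srv/important               # local copy
restic -r sftp:me@vps.example.org:/restic backup /srv/important  # off-site copy
restic -r /mnt/backup/restic snapshots                           # verify backups actually ran
```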
Tools like restic and Borg are so critical that you will regret not having had them sooner.
Set up your backups like it's yesterday. If you haven't already, do it now.
Exactly! FWA... My typo ;)
Yes, my typo.. FWA ;)
DNW, it happens, but you made me doubt my memory eheheh
Yes, but if I can stream games to my mobile device that could be an acceptable tradeoff, if the card doesn't draw too much power when idle.
Crappy (30-40 Mbit/s) but uncapped FTTC here, plus 5G FWA at 300 Mbit/s with a 1TB monthly cap.
Combining both and pushing the heavy traffic (fucking Fortnite and many big Steam games, plus the *Arr stuff) onto the crappy uncapped line leaves tons of data for high-speed anything else.
Total cost? 22€ + 24€ = 46€/month, no surprises. A lot more expensive than having fiber indeed, but I am deep in the woods, so.
Ah, and when I go over my 1TB data cap on the FWA, I get throttled to 6 Mbit/s, nothing extra to pay.
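The splitting itself is just Linux policy routing; a heavily simplified sketch of the idea (all IPs, interface names and marks are invented):

```bash
# Two routing tables, one per uplink:
echo "100 slow" >> /etc/iproute2/rt_tables
echo "200 fast" >> /etc/iproute2/rt_tables
ip route add default via 192.168.1.1 dev eth0 table slow   # uncapped FTTC
ip route add default via 192.168.2.1 dev eth1 table fast   # capped FWA
# Mark traffic from the heavy consumers (here: the gaming PC)...
iptables -t mangle -A PREROUTING -s 192.168.0.50 -j MARK --set-mark 0x1
# ...then send marked traffic out the uncapped line, everything else out the fast one:
ip rule add fwmark 0x1 table slow
ip rule add table fast priority 32000
```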
In my server I currently have an Intel i7 9th gen CPU with integrated Intel video.
I don't use or need AI or LLM stuff, but we use Jellyfin extensively in the family.
So far Jellyfin has always worked perfectly fine, but I could add (for free) an NVIDIA 2060 or a 1060. Would it be worth it?
And as for power consumption, will the increase be noticeable? Should I do it or pass?
Hi!
I have set up ScanServJS, which is an awesome web page that accesses your scanner and lets you scan and download the scanned pages from your self-hosted web server. I have the scanner configured via SANE locally on the server, and now I can scan via web from whatever device (phone, laptop, tablet, whatever) with the same consistent web interface for everyone. No need to configure drivers anywhere else.
I want to do the same with printing. On my server, the printer is already configured using CUPS, and I can print from Linux laptops via the shared CUPS printer. But that requires a setup anyway, and while I could make it work for phones and tablets too, I want to avoid that.
I would like to set up a nice web page, like the one for the scanner, where users, no matter what device they use, can upload files and print them, without installing or configuring anything on their devices.
Is there anything that I can self-host to this end?
Hi fellow hosters!
I self-host lots of stuff, from the classical *Arrs all the way to SilverBullet and photo services.
I even have two ISPs at home to manage failover in case one goes down; in fact I rely on my home services a lot, especially when I am not at home.
The main server is a powerful but older laptop, whose battery I recently replaced because of its age; my storage is composed of two RAID arrays, which are of course external JBODs with external power supplies.
A few years ago I purchased a cheap UPS, basically this one: EPYC® TETRYS - UPS https://amzn.eu/d/iTYYNsc
It works just fine and can sustain the two RAIDs long enough for any small power outage to pass.
The downside is that the battery itself degrades quickly, and every one or two years tops it needs to be replaced, which is not only a cost but also an inconvenience, because of course I always find out at the worst possible time (during a power outage)!
How do you tackle the issue in your setups?
I need to mention that I live in the countryside. Power outages happen like once or twice per year, so no big deal, just annoying.
I have a home network with an internal DNS resolver. I have some (public) subdomains that map to a real-world IP address, and map to the home server's private address when inside the home network.
In short, I use unbound and have added some local-data entries so that, when at home, those subdomains point to 192.168.x.y instead.
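The relevant bit of the unbound config looks like this (domain and IP are examples):

```bash
cat >> /etc/unbound/unbound.conf <<'EOF'
server:
    local-zone: "home.example.org." transparent
    local-data: "cloud.home.example.org. IN A 192.168.1.10"
EOF
unbound-checkconf && systemctl restart unbound
```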
All works perfectly fine from Windows and from Linux PCs.
Android, instead, doesn't work.
With dynamic DHCP allocation on Android, the names cannot be resolved (ping fails...) from the Android devices. With a specific global DNS server set (like dns.adguard.com), they will of course always resolve to the public IP.
The only solution I found is to disable DHCP for the WiFi on Android and set a static IP with 192.168.x.y as the DNS server; in that case it works.
But why? Anybody have any hints?
It's like Android has some kind of DNS rebinding protection enabled by default, but I cannot find any information on it at all.
As the title says, is there a way to download content from Amazon Prime Video?
Like yt-dl or similar...
Hi! I am self-hosting my services and using a dnsmasq setup to provide ad blocking on my home network.
I was tinkering with Unbound to add a fully independent DNS resolver and not depend on a Google/AdGuard/whatever upstream DNS server, but I am unable to make Unbound work.
Top-level domains (like com, org...) are resolved fine, but anything at the second level isn't. I am using "dig" (of course I am on Linux) and Unbound logging to find out what's going on, but I am at a loss.
Could my ISP be blocking my requests? If I switch back to Google DNS (for example) everything works fine, but with my Unbound only TLDs and some random names resolve. For example, it will resolve google.com but not kde.org...
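For reference, this is how I was poking at it (kde.org as the example):

```bash
dig @127.0.0.1 kde.org      # ask my Unbound directly: this is what fails
dig @127.0.0.1 com. NS      # TLDs like this resolve fine
dig +trace kde.org          # walk the delegation from the root servers manually
unbound-checkconf           # sanity-check the configuration
```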
Edit: somehow fixed it by nuking the config file and starting over.
If I remember correctly, the FitTrackee dev does post in this community.
Well, I want to thank them, as this is a very nice piece of software. I have just started using it, but it looks so promising and well done! A breeze to install, even on bare metal, and so well designed (even a CLI? Come on!).
Looking forward to trying the Garmin integration tomorrow.
Thanks buddy! Appreciated.
Looking for a self-hosted diary type of service, where I can log in and write down small topics and ideas, tag them and date them. No need for public access.
Any recommendations?
Edit: is anybody using MonicaHQ, or has experience with it?
Clarification: indeed, I could use a general note-taking app for this task. I already host and use SilverBullet for general notes and such. I am looking for something more focused on daily events and connections: noting people met, sport activities and feedback, names, places... So tagging and dating would be central, as well as connections to calendar and contacts, and who knows what else... So I want to explore existing, more advanced, more specialized apps.
Edit2: I ended up with BookStack. MonicaHQ seems very nice, but I proved unable to install it using containers. It would not obey APP_URL properly and would constantly mess up the HTTP/HTTPS redirection. The community was unresponsive, and apparently GitHub issues have been ignored lately. So I ditched MonicaHQ and switched to BookStack: it installed in a breeze (again, a container) and a very simple NGINX setup just worked. I will be testing it out now.
Hi, I have been using Radicale since I switched from Nextcloud, with DAVx⁵ on Android, pretty nicely.
I was thinking about adding a web UI to access my calendars from the web too... Any recommendations?
Radicale's web UI only manages accounts and stuff, not the calendars' contents.
Hi! I have a mixed set of containers (a few, not too many) and bare-metal services (quite a few) and I would like to monitor them.
I am using good old "monit", which monitors my network interfaces, filesystem status and traditional services (via pid files). It's not pretty, but it gets the work done. I cannot seem to find a way to have it also monitor my containers, though. Consider that I use podman and have a strict one-service-one-user policy (all containers are rootless).
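One idea I have yet to try: monit's "check program" with a small helper that asks podman, per user, whether a container is running. A hypothetical, untested sketch (names invented):

```bash
# Hypothetical helper: exit 0 only if USER's rootless container NAME is running.
cat > /usr/local/bin/check-container.sh <<'EOF'
#!/bin/bash
user="$1"; name="$2"
sudo -u "$user" XDG_RUNTIME_DIR="/run/user/$(id -u "$user")" \
    podman container inspect --format '{{.State.Running}}' "$name" | grep -q true
EOF
chmod +x /usr/local/bin/check-container.sh

# Matching monit stanza (config path may be /etc/monitrc or /etc/monit/monitrc):
cat >> /etc/monitrc <<'EOF'
check program immich with path "/usr/local/bin/check-container.sh immich immich"
    if status != 0 then alert
EOF
```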
I also run "netdata", but I find it overwhelming: too much data, too many graphs, just too much for my needs.
I need something that:
- let me monitor service status
- let me monitor containers status
- let me restart services or containers (not mandatory, but preferred)
- has a nice web GUI
- the web gui is also mobile friendly (not mandatory, but appreciated)
- Can print some history data (not mandatory, but interesting)
- Can monitor CPU usage (mandatory)
- Can monitor filesystem usage (mandatory)
I don't care for authentication features, since it will be behind a reverse proxy with HTTPS and proxy authentication already.
I am not looking for a fancy and complex dashboard, but for something I can host on a secondary page that I open if/when I want to check stuff. Also, if the tool can be scripted or accessed via an API, that could be useful: I would write some extractors to print a summary in my own dashboard page.
I have spent quite a lot of time trying to find the best photo management solution for my use case, and I think I have finally got a solution in mind. Please follow me and help me understand what could be improved.
The use case: over the decades I took thousands of pictures with manual film SLRs, digital DSLRs and many other devices. Today I mostly take pictures with my phone and occasionally (like 1-5 rolls per year) shoot B/W film. I like to have all the pictures neatly organized per album. Albums are events, trips, occasions, or just collections of photos that belong together for any good reason. I have always organized albums by folders and stored metadata either in the photos or in sidecar files. Over the decades I changed many management tools (the longest-lasting has been Digikam), but they all faded away for one reason or another. I do not want to change organization, since it has proved solid over decades. I do not trust putting all my eggs in a database or a proprietary tool's format.
The needs: back up photos from family phones. Organize photos in albums (in the format stated above), share & show pictures with family (maybe a broader public too), archive for long-term availability. Possibly small edits like rotation. Face recognition is a good plus; geographical mapping and reverse geotagging are a great plus. General object recognition could be useful, but is not a noticeable plus. I also need multi-user support for family members, both for backup and for gallery-like browsing. My galleries need to all be shared (or better: one big gallery, plus individual backups per user).
What i don't need: complex editing / elaboration (would be done offline with darktable)
Non-negotiable needs: storing photos in an album-based subfolder structure, with all metadata inside the photos or in sidecar files. No other solution will ever stand the test of time.
I tried many tools and none fits the bill. Here are my experiences:
- Immich: by far the most polished, great for phone backup & sync, not good for album organization (photos cannot be sorted into folders, albums are logical only). Has the best face detection and reverse geocoding.
- PhotoPrism: given up because I don't like open source with price tags (devs have every right to ask for money, but I distrust a model where they might drop support unless they make money).
- LibrePhotos: feels abandoned, and the UI and face detection are subpar compared to Immich.
- PiGallery2: blazing fast and great UI, but cannot be used for backups or organization. It can, however, cope well with my long-lasting collection of photos.
- Piwigo: I used this decades ago. By today's standards it feels ugly, bloated and slow as hell. No benefits for my use case that would compensate for the sluggishness anyway. And my server is powerful.
- Damselfly: great tool and super friendly dev; unfortunately I could not fit it into my use case. It can work on folders, but its actions are too limited: besides downloads, exports and tagging... not much else. Not even backups from the phone. I understand its use case is totally different from mine. Still a great piece of software.
My solution: more the idea of how I want to proceed from here on...
Backup: keep the great Immich for phone backups. Limitations: requiring emails as user logins breaks my home server authentication scheme, but I can live with it. The impossibility of organizing photos in folders is a deal breaker, but luckily you can define "logical" albums and download them.
Organization: good old filesystem stuff, I don't need any specific tools. Existing photos are already sorted in subfolders; new albums can be created in Immich, downloaded, and stored in new subfolders on the server. Non-phone albums (DSLR, film cameras...) can just be added directly on the filesystem as well.
Viewing: PiGallery2 pointed at the subfolders, blazing fast online viewing for all family members.
Global workflow: take photos with the phones, upload automatically to Immich, then manually sort them into albums, download the albums and create the appropriate subfolders on the server (if needed, delete the downloaded photos from Immich to save space). Upload/unzip and enjoy from PiGallery2. -- OR -- take photos with the other cameras, scan/process on the PC (darktable), create the appropriate subfolders on the server, upload and enjoy from PiGallery2.
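The "download and file away" step is then trivial shell work (paths and the album name are just examples):

```bash
# Unpack an album exported from Immich into the folder tree PiGallery2 serves:
ALBUM="2025-06 Mountain trip"
mkdir -p "/srv/photos/$ALBUM"
unzip ~/Downloads/immich-album.zip -d "/srv/photos/$ALBUM"
```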
All in all, what pisses me off in all this is:
- Immich requiring a fucking email address to log in (not a privacy concern here, but my users will need to remember a different login for this specific part)
- Immich not supporting subpaths: I will need two subdomains for this workflow, while just one would have been less complex for the users (something like photos.mydomain.org/gallery and photos.mydomain.org/backup, instead of photobackup.mydomain.org and photogallery.mydomain.org, you get the idea). I know all the blah blah about subdomains being better and such; I don't care, this is a usability issue for dumb users and, in general, it's the way I prefer it to be.
Of course, the best course would be for Immich to support folders (not external libraries, but actual folder-based albums, which is a totally different approach) and to be able to move photos into folders, but hey, it wouldn't be fun in that case :)
Any thoughts?
UPDATE: Immich storage templates seem to be the missing link. Used properly, they would cut out the manual download/re-upload approach. I need to experiment a bit, but it looks promising.
I am setting up my notes approach, which uses dedicated apps on my devices plus Syncthing.
I tried lots of tools like Joplin, Obsidian, etc., but they are overkill or had something I didn't like.
So I am using Markor on Android, another dedicated app on Linux, and so on.
I would also like to add a web app to edit the MD files directly on my server for when I have no way to install Syncthing or an editor app.
The web GUI would need to list the MD files stored locally on the server and let me edit/view/save them. Upload and download are not required, as I already have that set up via filebrowser.
Any hints?
Edit: to be clear, I am not looking for an IDE or anything fancy, I only need to edit some notes online on my server. I do not want to spin up containers or deploy full VS Code solutions just for this; all I need is a web GUI editor for MD with the capability to load files from the server.
Second edit: I ended up self-hosting Silverbullet.md, which made my day. Exactly what I was looking for, even more than that. Thanks all!
I have finally got my self-hosting wiki into satisfying shape. It's here: https://wiki.gardiol.org
Take a look, I hope it can help somebody.
I am open to any suggestions about it.
Note: the most original part is the one about multi-homed routing, failback and advanced routing.
Hi fellow sailors,
I have lots of downloaded... ISOs... that I need to convert to save space. I would like to make them all the same resolution, let's say 720p, and the same format, let's say H.265.
I am on Linux, and the... ISOs... are sorted into a neatly named hierarchy of sub-folders, which I want to preserve too. What is the best (CLI) tool for the job? I can use ffmpeg with a bash script, but is there anything better suited?
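The ffmpeg + bash route would look roughly like this (paths, extensions and the CRF value are examples to tune):

```bash
#!/bin/bash
# Re-encode everything under SRC to 720p H.265, mirroring the folder tree into DST.
SRC=/data/isos
DST=/data/isos-720p
find "$SRC" -type f \( -iname '*.mkv' -o -iname '*.mp4' -o -iname '*.avi' \) |
while IFS= read -r f; do
    rel="${f#"$SRC"/}"                  # path relative to SRC
    out="$DST/${rel%.*}.mkv"
    mkdir -p "$(dirname "$out")"
    # -nostdin keeps ffmpeg from eating the file list; -n never overwrites.
    ffmpeg -nostdin -n -i "$f" -vf scale=-2:720 -c:v libx265 -crf 26 -c:a copy "$out"
done
```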
Let's say I download an iso for my latest favourite distro and, after unpacking the rar (usenet), I find the right contents, but all the filenames are a bunch of hexadecimal strings. The files are legit, but how do I "decode" the names to know which one is file n.1, file n.2 and so on?
I use Joplin and I like it very much, but I would like to be able to at least view (not edit) the notes from a web browser... which is not supported.
Are there good alternatives that are:
- fully open source
- have android client
- have web client or viewer
- can be synced via WebDAV or a native method
I can also settle for a Joplin web viewer of sorts!
UPDATE: I opened up a can of worms. I would never have thought there would be so many tools for this task, and so many different shades of how it can be done. Even excluding ALL the non-truly-FOSS solutions out there, there are still tons of tools with good points and bad points. Of course, NONE fits my bill, so I will spin up my own… Joking, I have no time for that.
Using joplin-webview feels like too much; spinning up containers just for that, meh. Will try though. The Joplin .md files are only "sync" files, from which you can probably extract the notes, but that would not be the best idea. Maybe some kind of link to the Joplin terminal client would be the way forward. I will see.
I will stay on Joplin; it's the closest I could find to what I need. The only thing lacking is a web viewer, which I can live without for the time being after all.
Thank you all, and to anybody still chiming in!
After all the amazing reviews and posts I read about Immich, I decided to give it a try.
To be honest I am quite impressed, it's fast and polished, it just works.
But I found a few quirks, and hit a wall with the developer, who doesn't seem too keen to listen to users (on these issues at least!).
Maybe you guys have suggestions?
Here I go:
One: it does not support base URLs, which means that I had to spin up a dedicated subdomain to be able to access it over the internet, while all my other services are on a single subdomain. I can work with that, but why? The dev already shut this request down in the past as "insecure", which I find baffling. (I mean using mydomain/immich instead of immich.mydomain.)
Two: auth cannot be tied to the reverse proxy. I get it, it provides OAuth. But that is much more complex than proxy-based auth... and overkill for many cases, mine for sure.
Three: impossible to disable authentication at all, which would work just fine in my use case. There is a switch that seems meant for that, but no, it's only for using OAuth.
Four: I cannot find a way to browse by location, only by map. (The locations list seems to be half-baked, unless I am missing something.)
Five: no way to deploy on bare metal, and I tried! Due to lack of documentation (the only info I found was very, very outdated), and no willingness to provide info about that either. It seems docker is so much better that supporting bare metal is a waste of time.
Six: basically impossible to easily manage public albums, like a public landing page. I get that this might be outside Immich's scope.
Seven: even if you can now import existing libraries, it still does not detect albums within them (sub-folders), which is very annoying.
So, overall it's a great project and very promising, faster and more reliable than LibrePhotos in my use case, but still lacking some basic features that the dev seems not interested in adding. He developed it to please his wife, I get it :) - no pun intended, doing all this takes lots of time, I know.
These are the alternatives I know of:
PhotoPrism requires a subscription for reverse geocoding.
LibrePhotos feels sluggish and kind of abandoned.
Are there any others? (Piwigo and Lychee are great tools, but a different kind of tool.)
Let's hope for Immich; the dev is working a lot, let's hope for the best.
Hi! Question in the title.
I get that it's super easy to set up. But is it really worthwhile to have something that:
- runs everything as root (there aren't many well-built images with proper user management, it seems)
- doesn't let you really know what's in the images: you must trust whoever built them
- makes lots of mess in the system (mounts, fake networks, rules...)
I always host on bare metal when I can, but sometimes (Immich, I'm looking at you!) it seems almost impossible.
I get docker in a work environment, but for self-hosting? Is it really worthwhile? I would like to hear your opinions, fellow hosters.