![User banner](https://lemmy.dbzer0.com/pictrs/image/618e5c34-bf9e-4f40-acdd-3baa581bc196.jpeg)
You could use Uptime-Kuma to actually ping any IP every 5 seconds.
But it's proprietary, unfortunately.
Thanks, looks promising. I'll give it a try.
I don't want to configure a whole dashboard for at least CPU, RAM, storage and network for up to 5 hosts.
I used the following dashboard now, but it's not really satisfying and also doesn't really fit more than 4 nodes. https://grafana.com/grafana/dashboards/11756-hpc-node-exporter-server-metrics-v2/
I'm looking for a simple solution to monitor all my servers and systems using a single dashboard. I want to see metrics like CPU usage, used RAM and storage to see if something is wrong. I just set up Node-Exporter, Prometheus and Grafana, but haven't found an existing dashboard that shows multiple hosts at once. I also looked into Checkmk and Zabbix, but I feel like both are a little overkill for what I'm looking for. Do you have any recommendations?
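For reference, a single Prometheus scrape job can collect all hosts, and a Grafana dashboard variable over the `instance` label then lets one dashboard repeat its panels per host. A minimal sketch, assuming node_exporter runs on its default port 9100 on each host (the hostnames are placeholders):

```yaml
# prometheus.yml — one scrape job for all hosts; the 'instance' label
# distinguishes them, so a single dashboard can show them side by side.
scrape_configs:
  - job_name: node
    static_configs:
      - targets:
          - host1.lan:9100
          - host2.lan:9100
          - host3.lan:9100
```

In Grafana, a dashboard variable like `label_values(node_uname_info, instance)` with "repeat panel" enabled then scales to however many hosts are scraped.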
No, you have to activate it, see 5.
I made a simple script and timer for a friend to automatically switch between light and dark theme on Plasma. In case anybody needs this.
Oh boy, do I have bad news for you. Have you ever heard of copyright?
You're technically right, but nobody anticipated, let alone agreed to, their posts being used for training LLMs.
@ChatGPT@lemmings.world What LLM are you using?
Have you even tried educating yourself prior to posting this bullshit?
I created this small script and thought it might be useful to someone else. Any feedback is welcome!
Full on conspiracy?
there are hidden interests and hands that pull the strings of the dynamics that are harmful
Interesting, because Tailscale doesn't use any special ports. How would that be detected? And could you maybe use Headscale on a dynamic port to circumvent that?
Turkey is invading Rojava/Kurdistan right now. What a fucking double standard.
Systemd-haters would rather install macOS than admit that systemd is not that bad.
Listen to MigraKid @ Happy Market Kassablanca | 20.04.2024 by MigraKid #np on #SoundCloud
![MigraKid @ Happy Market Kassablanca | 20.04.2024](https://lemmy.world/pictrs/image/e8d23247-0e5c-44e5-8366-b1478707b178.jpeg?format=webp&thumbnail=256)
Headscale is pretty straightforward to set up and easy to use. And there are multiple WebGUIs available to choose from, if you need one. If you have any questions, let me know.
Yes I'm running it on Docker and therefore have the docker0 interface.
I set up Headscale and Tailscale using Docker on a VPS, which I want to use as my public IPv4 and reverse proxy to route incoming traffic to my local network and e.g. my home server. I also set up Tailscale using Docker on my home server and connected both to my Headscale server. I am able to ping one Tailscale container from the other and vice versa, and I set up --advertise-routes=192.168.178.0/24 on my home server as well as --accept-routes on my VPS, but I can't ping local IP addresses from my VPS. What am I missing? Both containers are connected to the host network, and I have opened UDP ports 41641 and 3478 on my VPS.
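Two things worth checking in a setup like this (a sketch, not a verified fix): advertised routes are not used until they are approved on the Headscale server, and the subnet router has to forward packets. The exact Headscale subcommands vary between versions:

```
# On the subnet router (home server): allow the kernel to forward packets.
# Assumption: this isn't enabled yet; it's required for advertised routes to work.
sysctl -w net.ipv4.ip_forward=1

# On the Headscale server: advertised routes must be approved before peers use them.
# Subcommand names depend on the Headscale version; in recent releases roughly:
headscale routes list
headscale routes enable -r <route-id>
```

If the route shows up in `headscale routes list` but isn't enabled, that alone would explain pings to 192.168.178.0/24 failing while tunnel IPs work.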
A directory of self-hosted software and applications for easy browsing
![Introducing selfh.st/apps, a Directory of Self-Hosted Software](https://lemmy.world/pictrs/image/ce9f0994-dbe5-4231-9067-e264087ddfee.png?format=webp&thumbnail=256)
I'm looking for an easy way to upload files from my Android smartphone to my home server. Is there a solution for that, ideally dockerized? Some simple web GUI where I can click "Upload" and the files get saved to a certain directory on my home server?
EDIT: I should've added that I want to do this remotely, not on my local network. I want to be able to send files from my Android smartphone from anywhere via the internet to my home server. That's why I thought about a service hosted on my server whose frontend I could access through my smartphone. But I might've answered my question already with the following: https://github.com/zer0tonin/Mikochi
EDIT #2: Thanks guys, I ended up creating my own Docker container running nextcloudcmd inspired by this: https://github.com/juanitomint/nextcloud-client-docker But I built the container from scratch and it's very minimalistic. I can publish it on my Gitlab when it's somewhat ready. Here's a little preview.
`Dockerfile`:
```dockerfile
FROM alpine:latest
RUN apk update && apk add nextcloud-client
COPY nc.sh .
RUN chmod +x ./nc.sh
VOLUME /data
CMD ./nc.sh
```
`nc.sh`:
```sh
#!/bin/sh
# Sync the /data volume with the Nextcloud server every 5 minutes.
# Note: shell variable names can't contain hyphens, hence NEXTCLOUD_DOMAIN.
while true; do
    nextcloudcmd /data "https://${USERNAME}:${PASSWORD}@${NEXTCLOUD_DOMAIN}"
    sleep 300
done
```
I followed this tutorial to create local certificates for my home server, but now it failed to renew automatically and I have no clue what to do. Can anybody assist me in debugging, please? https://notthebe.ee/blog/easy-ssl-in-homelab-dns01/
I'm using duckdns.org, added mydomain.duckdns.org and the local IP of my home server. In Nginx-Proxy-Manager I have created the respective wildcard certificate. The log of my NPM container reports the following:
```
[3/10/2024] [1:55:50 PM] [SSL     ] › ℹ info    Renewing Let'sEncrypt certificates via DuckDNS for Cert #6: *.mydomain.duckdns.org, mydomain.duckdns.org
[3/10/2024] [1:55:50 PM] [SSL     ] › ℹ info    Command: certbot renew --force-renewal --config "/etc/letsencrypt.ini" --work-dir "/tmp/letsencrypt-lib" --logs-dir "/tmp/letsencrypt-log" --cert-name "npm-6" --disable-hook-validation --no-random-sleep-on-renew
[3/10/2024] [1:55:50 PM] [Global  ] › ⬤ debug   CMD: certbot renew --force-renewal --config "/etc/letsencrypt.ini" --work-dir "/tmp/letsencrypt-lib" --logs-dir "/tmp/letsencrypt-log" --cert-name "npm-6" --disable-hook-validation --no-random-sleep-on-renew
[3/10/2024] [1:55:53 PM] [Express ] › ⚠ warning Saving debug log to /tmp/letsencrypt-log/letsencrypt.log
Failed to renew certificate npm-6 with error: The DNS response does not contain an answer to the question: mydomain.duckdns.org. IN TXT
All renewals failed. The following certificates could not be renewed:
  /etc/letsencrypt/live/npm-6/fullchain.pem (failure)
1 renew failure(s), 0 parse failure(s)
```
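The error says the DNS-01 challenge TXT record was never found. One way to narrow this down (a debugging sketch; the domain is the placeholder from the log) is to query the challenge record manually while certbot is waiting:

```
# Does DuckDNS ever publish the ACME challenge record?
dig +short TXT _acme-challenge.mydomain.duckdns.org

# Compare against an external resolver to rule out local DNS interference:
dig +short TXT _acme-challenge.mydomain.duckdns.org @1.1.1.1
```

If the record never appears, the DuckDNS token configured in NPM's DNS challenge settings is the usual suspect; if it appears only after a delay, increasing the propagation wait time in the certificate settings may help.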
I noticed my home server's SSD running out of space, and it turned out to be my Jellyfin Docker container, which wasn't correctly clearing the directory for transcodes in /var/lib/jellyfin/transcodes.
I simply created a new directory on my media hard drive and bind-mounted the above-mentioned directory to it. Now Jellyfin has over 1 TB of free space to theoretically clutter. To prevent that, I created a cronjob to delete old files in case Jellyfin doesn't.
```
@daily /usr/bin/find /path/to/transcodes -mtime +1 -delete
```
Easy!
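Before arming a cron job like that, the same `find` can be run with `-print` instead of `-delete` to preview what it would remove. The `-mtime +1` filter itself can be demonstrated self-contained with throwaway files (temporary directory instead of the real transcode path):

```shell
# Demonstrate the cron job's age filter on throwaway files.
dir=$(mktemp -d)
touch "$dir/new.ts"                    # modified just now   -> kept
touch -d '3 days ago' "$dir/old.ts"    # older than one day  -> deleted
find "$dir" -type f -mtime +1 -delete
ls "$dir"                              # only new.ts remains
rm -r "$dir"
```

Note that `-mtime +1` means "strictly more than 1 full 24-hour period old", so files between 24 and 48 hours old survive until the next run.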
I got a bunch of self-hosted stuff and use a VPS that has a public IPv4 to access my services, because my home network only has DS-Lite. My home server is connected to the VPS using WireGuard. Now I want to connect my smartphone to my VPN to be able to access some local services remotely. I'm able to add a second peer to the WireGuard config on the VPS, but I'm struggling to configure the AllowedIPs correctly. The VPS apparently needs AllowedIPs 10.0.0.0/24 and 192.168.178.0/24, but so does the smartphone, for both to route requests into my home network. But it's not possible to configure the same IP ranges for two peers. What do I do?
EDIT: Solved: https://iliasa.eu/wireguard-how-to-access-a-peers-local-network/
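The gist of the linked solution, sketched as config fragments (keys, IPs and the endpoint are placeholders): on the VPS, AllowedIPs acts per-peer as a routing table, so the home LAN range may only appear on the home server's peer entry, while each phone gets just its own /32. On the phone, AllowedIPs instead decides what is sent into the tunnel, so there it lists both ranges.

```ini
; VPS (hub) — peer entries in wg0.conf
[Peer]
; home server: its tunnel IP plus the LAN it advertises
PublicKey = <home-server-pubkey>
AllowedIPs = 10.0.0.2/32, 192.168.178.0/24

[Peer]
; smartphone: only its own tunnel IP
PublicKey = <phone-pubkey>
AllowedIPs = 10.0.0.3/32

; Smartphone — peer entry for the VPS: send both ranges into the tunnel
; [Peer]
; PublicKey = <vps-pubkey>
; Endpoint = vps.example.com:51820
; AllowedIPs = 10.0.0.0/24, 192.168.178.0/24
```

The VPS additionally needs `net.ipv4.ip_forward=1` so it can relay packets between the two peers.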
I'm running Jellyfin in Docker on my home server for movies and shows. I recently added a music directory, and apparently since then I'm getting almost hourly notifications from my Uptime-Kuma instance (connected to Gotify) that Jellyfin is down with status code 502. It's quickly up again, but I'm wondering what's causing this. I have Nginx Proxy Manager configured with a local and a public domain pointing to my Jellyfin instance. Any ideas?
At a time when people are questioning democracy, we want to publicly state our support for democratic values, diversity, openness, and fairness.
![Statement: Nextcloud stands for an open and free society - Nextcloud](https://lemmy.ml/pictrs/image/023ba4c6-5f8f-4669-926a-30fd294e0a28.png?format=webp&thumbnail=256)
I just discovered the Sota remix of Bring Me The Horizon's "Can You Feel My Heart" and feel like I've heard it in someone's set, but I don't know whose. I've listened to a lot of Nitepunk, Imanu, Buunshin, DJ Ride, Gladde Paaling and Luude lately. Maybe someone recognizes it. Here's the track: https://www.youtube.com/watch?v=6SRzEgCY0Xs
I have an Intel Core i5-7600K and just passed through my Intel HD 630 iGPU from my Proxmox host to a virtual machine running Debian to be able to use it in a Jellyfin Docker container. Everything worked fine, but I used only the basic configuration that I found which I don't really get. Can someone explain to me whether I'm missing something?
First I followed this tutorial: https://3os.org/infrastructure/proxmox/gpu-passthrough/igpu-passthrough-to-vm/
But I only added `intel_iommu=on iommu=pt` to my boot parameters and `vfio`, `vfio_iommu_type1`, `vfio_pci`, `vfio_virqfd` to `/etc/modules`.
But what are all the other parameters good for?
```
pcie_acs_override=downstream,multifunction initcall_blacklist=sysfb_init video=simplefb:off video=vesafb:off video=efifb:off video=vesa:off disable_vga=1 vfio_iommu_type1.allow_unsafe_interrupts=1 kvm.ignore_msrs=1 modprobe.blacklist=radeon,nouveau,nvidia,nvidiafb,nvidia-gpu,snd_hda_intel,snd_hda_codec_hdmi,i915
```
Then I added the iGPU as a PCIe device to my VM using the Proxmox webUI and added the render device `/dev/dri/renderD128` to the Jellyfin Docker container.
I followed the official instructions from Jellyfin: https://jellyfin.org/docs/general/administration/hardware-acceleration/intel/#configure-with-linux-virtualization
But I haven't added the host group ID, what is that good for?
And I also installed `intel-media-va-driver`, `i965-va-driver`, `firmware-linux-nonfree` and `firmware-misc-nonfree`. Are all of those necessary?
And then I had to add `options i915 enable_guc=2` to `/etc/modprobe.d/i915.conf` to get it to work. This is supposedly only necessary for low-power encoding, but it was necessary to get hardware transcoding to work at all?
I'm happy that it is working now, but I don't really feel like I fully understood what I did. Were some steps unnecessary or did I miss anything?
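One way to figure out which steps actually matter is to undo them one at a time and re-check the VAAPI stack inside the VM after each change. A rough checklist (assuming the `vainfo` package is installed in the Debian VM):

```
# Is the passed-through GPU visible to the VM?
ls -l /dev/dri            # expect card0 and renderD128

# Does the VAAPI driver initialize and list encode/decode profiles?
vainfo

# Did the i915 driver bind and load its GuC/HuC firmware?
dmesg | grep -i i915
```

If `vainfo` keeps listing profiles after removing a parameter, that parameter was likely not needed for this particular iGPU.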
I followed this guide: https://notthebe.ee/blog/easy-ssl-in-homelab-dns01/
But my Nginx Proxy Manager is running on a VPS that is connected to my local network through a WireGuard tunnel. Could that be an issue? I don't know why it's not working.
My NPM is also accessible via the local IP of my home server, on which WireGuard is running.
I want to start Konsole in split view, which is possible using the `--layout` option, but then also run a command like htop in one of the sessions. How do I do that?
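One possible approach (an untested sketch; the session numbering and layout filename are assumptions) is to launch Konsole with the layout and then use its D-Bus interface to send a command into one of the resulting sessions:

```
# Launch the split layout, then run htop in the second session via D-Bus.
konsole --layout my-layout.json &
pid=$!
sleep 1   # give Konsole time to create the sessions
qdbus "org.kde.konsole-$pid" /Sessions/2 runCommand "htop"
```

The available session paths can be inspected with `qdbus org.kde.konsole-$pid` after launch; on some distributions the tool is called `qdbus-qt5` or `qdbus6`.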
I'm currently using BackInTime and TimeShift to create local backups of my system and personal data. I configured both to keep 7 daily, 4 weekly and 6 monthly backups. I use this setup on my Desktop-PC as well as on my laptop. Now I also use BorgBackup to create an additional backup of my local backups and right now use the same retention policy, so keeping 7 daily, 4 weekly, 6 monthly and 1 yearly backup. This turned out to be a lot of data and my 1 TB off-site storage solution is currently using 953 GB of storage. Now I was wondering if my double retention policy is even making sense and what others are using. So how long do you usually keep backups? How many backups do you keep and what would you suggest for my setup?
Solved, see below.
I recently reinstalled my home server and was unable to open my LUKS-encrypted hard drive. Neither my usual passphrase nor a newly created keyfile was working. I tested on different distros, initially on my new Proxmox installation, later on the Arch ISO. I eventually tried the disk on my main system, where it used to be and where I still had an old keyfile, et voilà. So I created keyfiles as suggested in the wiki, and occasionally md5sum returned a different hash for the keyfile! Why is this happening? I find this extremely concerning, because it could potentially result in massive data loss due to a keyfile apparently randomly not working, as I was experiencing. What am I missing?
For reference, since I don't know how else to share what exactly I did.
Scenario #1: A directory on a mounted hard drive on my desktop.
```
$ echo -n '$mypassphrase' > ./dir/keyfile
$ md5sum ./dir/keyfile
c6dd9329dbe030127ce5e19d85de4df9  ./dir/keyfile
$ chown root:root ./dir/keyfile; chmod 400 ./dir/keyfile
$ md5sum ./dir/keyfile
c6dd9329dbe030127ce5e19d85de4df9  ./dir/keyfile
```
Scenario #2: My old keyfile in /etc on my desktop containing $mypassphrase.
```
$ md5sum /etc/keyfile
a1c10c2d023c982259f6c945ebee664e  /etc/keyfile
```
Scenario #3: Booted from the Arch ISO on my server.
```
$ echo -n '$mypassphrase' > keyfile
$ chown root:root keyfile; chmod 400 keyfile
$ md5sum keyfile
c6dd9329dbe030127ce5e19d85de4df9  keyfile
```
Scenario #4: A directory in /home on my desktop.
```
$ echo -n '$mypassphrase' > ./keyfile
$ md5sum keyfile
a1c10c2d023c982259f6c945ebee664e  keyfile
```
EDIT: I just moved the disk back into my server and tried echoing my passphrase into a keyfile, which returned the hash starting with c6, whereas opening a file in nano and pasting the passphrase into it returned the hash starting with a1.
EDIT 2: I moved the disk back into my server, reinstalled Proxmox and tried again. I was able to unlock the disk after I pasted the passphrase into a file and deleted all trailing spaces/newlines. I also tried echoing the passphrase into a keyfile, and that also did not work. No clue why, but it seems to work on some systems and not on others.
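The two hashes are consistent with a trailing newline sneaking into the keyfile on some systems: `echo -n` is not portable across shells (some `sh` implementations print `-n` literally or keep the newline), and editors like nano append a final newline by default. `printf` avoids the ambiguity; a minimal demonstration (placeholder passphrase):

```shell
# The same passphrase with and without a trailing newline hashes differently.
printf '%s' 'mypassphrase'   > key-exact     # byte-exact, no newline
printf '%s\n' 'mypassphrase' > key-newline   # what echo/nano may produce
md5sum key-exact key-newline                 # two different hashes
rm key-exact key-newline
```

Since cryptsetup reads the keyfile byte-for-byte, a single stray newline silently makes it invalid, which would explain the "works on some systems, not on others" behavior.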
I currently have a server running Arch Linux and Jellyfin, one Raspberry Pi 4 running NextCloudPi and one Raspi running Pi-hole. Eventually I want to host all services, and more, on one machine. I thought about using Proxmox and Docker, but I'm not sure what the ideal setup would look like. For now I'd use Proxmox with a single Debian VM running Docker, with Portainer, Pi-hole, Nextcloud, a reverse proxy and Jellyfin as Docker containers. Is that a smart setup? It gives me the ease of using Docker and an easy way of creating backups of single applications or the whole VM, leaving me the possibility to add containers or VMs for various other services, for testing etc. Or should I just use LXC for said applications? Any guidance would be appreciated!
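The Debian-VM-plus-Docker variant described above could be sketched as a single compose file. The images and ports below are common choices and illustrative assumptions, not a vetted configuration:

```yaml
# docker-compose.yml — sketch of the stack described above
services:
  portainer:
    image: portainer/portainer-ce
    ports: ["9443:9443"]
  pihole:
    image: pihole/pihole
    ports: ["53:53/udp", "8080:80"]
  nextcloud:
    image: nextcloud
  jellyfin:
    image: jellyfin/jellyfin
  proxy:
    image: jc21/nginx-proxy-manager
    ports: ["80:80", "443:443", "81:81"]
```

With everything in one compose project, per-application backups reduce to backing up each service's bind-mounted volumes, while Proxmox handles whole-VM snapshots.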
EDIT: In case my comment was overlooked. Thanks for all your comments, I'll see how I implement things when I get the time to reinstall my server.
![dataprolet](https://lemmy.dbzer0.com/pictrs/image/35447032-a791-4be6-8b71-79f95612bfdf.jpeg?format=webp&thumbnail=64)
Formerly known as u/Arjab. Anarchist | Antifascist | Anticapitalist. Arch Linux | FOSS | Piracy | Security & Privacy
Looking for a Mastodon instance? Check out @serverbot@undefined.social.