Selfhosted @lemmy.world

Is it possible to mount a ZFS drive in OpenMediaVault?

Original Post:

I recently had a Proxmox node I was using as a NAS fail catastrophically. Not surprising, as it was a repurposed 12-year-old desktop. I was able to salvage my data drive, but the boot drive was toast. It looks like the SATA controller went out and fried the SSD I was using as the boot drive. This system was running TurnKey FileServer as an LXC with the media storage on a subvol on a ZFS storage pool.

My new system is based on OpenMediaVault and I'm happy with it, but I'm hitting my head against a brick wall trying to get it to mount the ZFS drive from the old system. I tried installing ZFS using the instructions here, as OMV is based on Debian, but I haven't had any luck so far.

Solved:

  1. Download and install OMV Extras
  2. In OMV's web admin panel, go to System -> Plugins and install the Kernel Plugin
  3. Go to System -> Kernel and click the blue icon that says Proxmox (looks like a box with a down arrow as of Jan
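
Once the Proxmox kernel (which ships with ZFS support) is booted, importing the old pool should just be the standard ZFS commands; a sketch, with the pool name as a placeholder:

```bash
sudo zpool import               # list pools found on the attached drives
sudo zpool import -f <poolname> # -f, since the pool was last used by another system
sudo zfs list                   # verify the datasets/subvols are visible
```
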
Selfhosted @lemmy.world

[Solved] OPNSense accessible on WAN by default?

Solved: I was still on my local network instead of my LTE network, so I was accessing the global IP from inside the local network, and thus landing on the access page.
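
(For anyone verifying the same thing: test from a host that is genuinely outside your LAN, e.g. a VPS or a phone on mobile data. A rough sketch with a placeholder address; 443 is the default OPNSense web UI port:)

```bash
nmap -Pn -p 443 <wan-ip>
# "open" means the GUI really is exposed on WAN; "filtered"/"closed" means it isn't
```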

Hello,

I am running OPNSense as my router for my ISP and my local network.

When I access my global IP, it lands me on the login page of my OPNSense router. Is that normal?

The only firewall WAN rule I added is the rule to enable my WireGuard instance (and I disabled it to test whether that was the issue).

I was messing with the NAT Outbound for the Road Warrior setup as explained in the OPNSense Road Warrior tutorial, but that rule is also disabled.

I enabled an Unbound DNS override for a local domain.

And I have dynamic DNS to access my VPN with an FQDN instead of the IP directly.

But otherwise, I have the vanilla configuration. I disabled all of the rules I've created to make sure they weren't the issue, and I can still access my OPNSense from the WAN interface.

So is that a normal default behaviour? If so, how can I

Selfhosted @lemmy.world

Thanks guys! I was finally able to self-host my own raw-HTML "blog"

So, I've been trying to accomplish this for a while. First I posted asking for help getting started, then I posted about trying to open ports on my router. Now, I proudly post about being able to show the world (for the first time ever) my abysmal lack of CSS and HTML skills.

I would like to thank everyone in this community, especially those who took the time to answer my n00b questions. If you'd like to see it, it will be available at: https://kazuchijou.com/

(Beware however, for you might cringe into oblivion and back.)

Since this website is hosted on my desktop computer, there will be some downtime here and there; however, I'll leave it on for the next 48 hours (RIP electricity bill) only for you guys to see. <3


Now, there are a couple of things that need addressing:

I set it up as a Cloudflare Tunnel and linked it to my domain. However, I still don't know any Docker at all (despite using it for th
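
(For anyone wanting to replicate the tunnel part, a minimal sketch of running the Cloudflare Tunnel connector in Docker; the token is the one the Zero Trust dashboard hands you when you create the tunnel:)

```bash
docker run -d --name cloudflared --restart unless-stopped \
  cloudflare/cloudflared:latest tunnel --no-autoupdate run --token <your-tunnel-token>
```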

Selfhosted @lemmy.world

[Solved] Forward authentication with Authentik for Firefly3

*** For anyone stumbling on this post who is as much of a newbie as I am right now: forward auth doesn't work with Firefly III.

I thought that forward auth was the same as a proxy, but in this case it is the proxy that provides the x-authentik headers.

So for Firefly, set up Authentik as a proxy provider and not a forward auth.

I haven't figured out the rest yet, but at least x-authentik-email is in my header now.

Good luck ***

Hello,

I am trying to set up Authentik to do forward auth for Firefly III, using Caddy. I am trying to learn external authentication, so my knowledge is limited.

My setup is as follows.

Looking at the Firefly docs, I need to set AUTHENTICATION_GUARD=remote_user_guard and AUTHENTICATION_GUARD_HEADER=HTTP_X_AUTHENTIK_EMAIL in my .env file. I used the base .env file provided by Firefly and modified only these two lines.
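
(Verbatim, those two lines:)

```
AUTHENTICATION_GUARD=remote_user_guard
AUTHENTICATION_GUARD_HEADER=HTTP_X_AUTHENTIK_EMAIL
```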

Then, in my Authentik, I made a forward auth for a single app

Selfhosted @lemmy.world

Noob stuck on port-forwarding while trying to host own raw-HTML website. Pls help

Edit: Solution

Yeah, thanks to u/postnataldrip@lemmy.world I contacted my ISP and found out that they were in fact blocking my port-forwarding capabilities. I gave them a call and had to pay for a public IP address plan, and now it's just a matter of testing again. Thank you very much to everyone involved. I love you. It was Megacable, by the way. If anyone from my country ever encounters the same problem, I hope this post is useful to you.
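
(For anyone else debugging this: the telltale sign of CGNAT is the WAN address on your router differing from the address the internet sees for you. A quick check, assuming a Linux box:)

```bash
curl -s ifconfig.me   # the address the internet sees
# Compare with the WAN IP on your router's status page. If they differ, or the
# WAN IP is in 100.64.0.0/10, you're behind CGNAT and inbound forwarding can't work.
```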

Here's the original post:

Hey!

Ok, so I'm trying to figure this internet thing out. I may be stupid, but I want to learn.

So, what I'm essentially doing is trying to host my own raw-HTML website on my own hardware and get it out to the internet for everyone to see (temporarily of course, I don't want to get in trouble with hackers and bots). I just want to cross that off my bucket list.

What I've done so far:

  • I set up a QEMU/KVM virtual machine with Debian as my server
  • I configured a bridge so that it's available to my local network
  • I g
Selfhosted @lemmy.world

Help Running Scrutiny

Hello All,

I am trying to run Scrutiny via Docker Compose and I am running into an issue where nothing shows up on the web UI. If anyone here has this working, I would love some ideas on what the issue could be.

As per their troubleshooting guide, I followed the steps and here is the output:

```
$ smartctl --scan
/dev/sda -d scsi # /dev/sda, SCSI device
/dev/sdb -d sat # /dev/sdb [SAT], ATA device
/dev/nvme0 -d nvme # /dev/nvme0, NVMe device
```

```
docker run -it --rm \
  -v /run/udev:/run/udev:ro \
  --cap-add SYS_RAWIO \
  --device=/dev/sda \
  --device=/dev/sdb \
  ghcr.io/analogj/scrutiny:master-collector smartctl --scan
/dev/sda -d scsi # /dev/sda, SCSI device
/dev/sdb -d sat # /dev/sdb [SAT], ATA device
```

So I think I am inputting the devices correctly.

I only really changed the port number for the web UI to 8090 from 8080 in their example, as 8080 is
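
(For comparison, a minimal single-container run of the omnibus image that mirrors this setup, with the web UI on 8090; treat it as a sketch rather than a known-good config. Note the Scrutiny README says NVMe devices also need the SYS_ADMIN capability:)

```bash
docker run -d --name scrutiny \
  -p 8090:8080 \
  -v /run/udev:/run/udev:ro \
  --cap-add SYS_RAWIO \
  --cap-add SYS_ADMIN \
  --device=/dev/sda --device=/dev/sdb --device=/dev/nvme0 \
  ghcr.io/analogj/scrutiny:master-omnibus
```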

Selfhosted @lemmy.world

How to change qBittorrent admin password in docker-container? [help]

I'm currently trying to spin up a new server stack including qBittorrent. When I launch the web UI, it asks for a login on first launch. According to the documentation, the default username is admin and the default password is adminadmin.

Solved:

For recent qBittorrent versions (4.6.1 and later), a randomly generated password is created at startup on the initial run of the program. After starting the container, enter the following into a terminal:

docker logs qbittorrent, or sudo docker logs qbittorrent (if your user doesn't have permission to talk to Docker directly)

The command should return:

```
******** Information ********
To control qBittorrent, access the WebUI at: http://localhost:5080
The WebUI administrator username is: admin
The WebUI administrator password was not set. A temporary password is provided for this session: G9yw3qSby
You should set your own password in program preferences.
```

Use this password to log in for this session. Then create a new password by opening http://{localhost}:5080 and navigating the menus
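
(A one-liner to fish the temporary password out of the logs, assuming the container is named qbittorrent:)

```bash
docker logs qbittorrent 2>&1 | grep -i "temporary password"
```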

Selfhosted @lemmy.world

Chaining routers and GUA IPv6 addresses

Hey fellow self-hosting lemmoids

Disclaimer: not at all a network specialist

I'm currently setting up a new home server in a network where I'm given GUA IPv6 addresses in a /64 subnet (which means, if I understand correctly, that I can set up many devices in my network that are accessible via a fixed IP to the outside world). Everything works so far; my services are reachable.
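
(Quick sanity check on any Linux device, to confirm it actually got a GUA, i.e. an address in 2000::/3:)

```bash
ip -6 addr show scope global
```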

Now my problem is that I need to use the router provided by my ISP, and it's - big surprise here - crap. The biggest concern for me is that I don't have fine-grained control over firewall rules. I can only open ports in groups (e.g. "Web", "All other ports"), and I can only do this network-wide and not for specific IPs.

I'm thinking about getting a second router with a better IPv6 firewall and only using the ISP router as a "modem". Now I'm not sure how things would play out regarding my GUA addresses. Could a potential second router also assign addresses to devices in that globally routable space directl

Selfhosted @lemmy.world

How do I redirect to a /path with Nginx Proxy Manager?

Hi folks,

Just set up Nginx Proxy Manager + Pi-hole and a new domain with Porkbun. All is working and I have all my services at service.mydomain.com; however, some services such as Pi-hole seem to be strictly reachable with /admin at the end. This means with my current setup it only directs me to pihole.mydomain.com, which leads to a 403 Forbidden.

This is what I have tried, but to no avail. Not really getting the hang of this, so I would really appreciate a pointer on this :)
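
(One approach that reportedly works, sketched here since NPM versions differ: in the proxy host's Advanced tab, add a redirect from the bare root to /admin so pihole.mydomain.com lands in the right place:)

```nginx
# NPM proxy host -> Advanced -> Custom Nginx Configuration
location = / {
    return 301 /admin;
}
```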

Selfhosted @lemmy.world

Randomly getting ECH errors on self-hosted services.

In the last couple of weeks, I've started getting this error ~1/5 times when I try to open one of my own locally hosted services.

I've never used ECH, and have always explicitly restricted nginx to TLS 1.2, which doesn't support it. Why am I suddenly getting this, why does it randomly error and then work just fine again two minutes later, and how can I prevent it altogether? Is anyone else experiencing this?

I'm primarily noticing it with Ombi, and I'm mainly using Chrome on Android for this. But checking just now: DuckDuckGo loads the page just fine every time, and Firefox is flat out refusing to load it at all.

Firefox refuses to show the cert it claims is invalid, and 'accept and continue' just reloads the error page. Chrome will show the cert, and it's the correct, valid cert from LE.

There's 20+ services going through the same nginx
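
(One angle worth checking: browsers only attempt ECH when the domain's DNS advertises it via an HTTPS record, e.g. when Cloudflare proxying is involved. A hedged check, with a placeholder hostname:)

```bash
dig +short HTTPS ombi.example.com   # look for an "ech=" parameter in the answer
# On older dig versions that lack the HTTPS mnemonic: dig +short TYPE65 ombi.example.com
```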

Selfhosted @lemmy.world

Missing /etc/systemd/resolved.conf file

Solution: I just had to create the file
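
(A minimal sketch of what that looks like; the [Resolve] section header is required:)

```bash
printf '[Resolve]\nDNSStubListener=no\n' | sudo tee /etc/systemd/resolved.conf
sudo systemctl restart systemd-resolved   # port 53 should now be free for Pi-hole
```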

I wanted to install Pi-Hole on my server and noticed that port 53 is already in use by something.

Apparently it is in use by systemd-resolved:

```
~$ sudo lsof -i -P -n | grep LISTEN
[...]
systemd-r    799 systemd-resolve   18u  IPv4   7018      0t0  TCP 127.0.0.53:53 (LISTEN)
systemd-r    799 systemd-resolve   20u  IPv4   7020      0t0  TCP 127.0.0.54:53 (LISTEN)
[...]
```

And the solution should be to edit /etc/systemd/resolved.conf, changing #DNSStubListener=yes to DNSStubListener=no, according to this post I found. But /etc/systemd/resolved.conf doesn't exist on my server.

I've tried sudo dnf install /etc/systemd/resolved.conf, which did nothing other than tell me that systemd-resolved is already installed, of course. Rebooting also didn't work. I don't know what else I could try.

I'm running Fedora Server.

Is there another wa

Selfhosted @lemmy.world

Having difficulty visiting an mTLS-authenticated website from GrapheneOS

I host a website that uses mTLS for authentication. I created a client cert and installed it in Firefox on Linux, and when I visit the site for the first time, Firefox asks me to choose my cert and then I'm able to visit the site (every subsequent visit succeeds without having to select the cert each time). This is all good.

But when I install that client cert into GrapheneOS (Settings -> Encryption & credentials -> Install a certificate -> VPN & app user certificate), no browser app seems to recognize that it exists at all. Visits to the website from the Vanadium, Fennec, or Mull browsers all return "ERR_BAD_SSL_CLIENT_AUTH_CERT" errors.

Does anyone have experience successfully using an mTLS cert in GrapheneOS?

[SOLVED] Thanks for the solution, @Evkob@lemmy.ca

Selfhosted @lemmy.world

Weird (to me) networking issue - can you help?

I have two subnets and am experiencing some pretty weird (to me) behaviour - could you help me understand what's going on?


Scenario 1

```
PC:                         192.168.11.101/24
Server: 192.168.10.102/24,  192.168.11.102/24
```

From my PC I can connect to .11.102, but not to .10.102:

```bash
ping -c 10 192.168.11.102 # works fine
ping -c 10 192.168.10.102 # 100% packet loss
```

Scenario 2

Now, if I disable .11.102 on the server (ip link set <dev> down) so that it only has an IP on the .10 subnet, the previously failing ping works fine.

```
PC:     192.168.11.101/24
Server: 192.168.10.102/24
```

From my PC:

```bash
ping -c 10 192.168.10.102 # now works fine
```

This is baffling to me... any idea why it might be?


Here's some additional information:

  • The two subnets are on different VLANs (.10/24 is untagged and .11/24 is tagged 11).
  • The PC and Server are connected to the same managed switch, which however does nothing "strange" (i
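
(When asymmetric paths are suspected in a setup like this, it helps to ask the server which interface and source address it would use to reply; a sketch, run on the server:)

```bash
ip route get 192.168.11.101                       # route the server picks to reach the PC
ip route get 192.168.11.101 from 192.168.10.102   # reply path when answering from the .10 address
```
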
Selfhosted @lemmy.world

New Network Stack with an Unknown Issue..?

Update: It was DNS... it's always DNS...

Hello there! I'm in a bit of a pickle. I've recently bought the full budget TP-Link Omada stack for my homelab. I got the following devices in my stack:

  • ER605 Router
  • OC200 Controller
  • SG2008P PoE Switch
  • EAP610 Wireless AP
  • EAP625 Wireless AP (getting soon)

I've set it all up and it was working fine for the first few days of using it. However, the last few days it's been working very much on and off, seemingly at random. Basically, devices will state they are connected to WiFi/Ethernet, but they are not actually getting a connection (as seen in the picture). This is happening with our phones (Pixel 7 + S23U) and my server (NAS: Unraid); I have not noticed any problems on our desktop PCs. So it is happening on both wired and wireless, as my server and desktop PC are connected to the switch.

I haven't done many configurations in the Omada software yet, but I am assuming it's something I have done that causes this... Would greatly appreciate any advice to solve/troubl
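
(Given the update above, it was DNS: a quick way to tell "no connectivity" apart from "no DNS" from any affected device:)

```bash
ping -c 1 1.1.1.1       # works: you have connectivity, so suspect DNS
ping -c 1 lemmy.world   # fails while the above works: it's DNS
```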

Selfhosted @lemmy.world

Need help routing Wireguard container traffic through Gluetun container

The solution has been found; see the "Solution" section for the full write-up and config files.

Initial Question

What I'm looking to do is to route WAN traffic from my personal WireGuard server through a Gluetun container, so that I can connect a client to my personal WireGuard server and have my traffic still go through the Gluetun VPN as follows:

client <--> wireguard container <--> gluetun container <--> WAN

I've managed to set both the WireGuard and Gluetun containers up in a docker-compose file and made sure they both work independently (I can connect a client to the WireGuard container, and the Gluetun container successfully connects to my paid VPN for WAN access). However, I cannot route traffic from the WireGuard container through the Gluetun container.

Since I've managed to set both up independently, I don't believe that there is an issue with the docker-compose file I used for setup. What I believe to be the issue is either the routing rules in my wireguard cont
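
(For anyone attempting the same chain before reading the full solution: the commonly suggested Docker pattern is to attach the WireGuard container to Gluetun's network namespace, so all of its WAN traffic exits through the VPN. A rough sketch with docker run; names, the provider variable, and ports are placeholders:)

```bash
# Gluetun owns the network namespace, so the WireGuard port is published HERE
docker run -d --name gluetun \
  --cap-add NET_ADMIN --device /dev/net/tun \
  -e VPN_SERVICE_PROVIDER=<provider> \
  -p 51820:51820/udp \
  qmcgaw/gluetun
  # plus your provider's auth env vars

# WireGuard shares Gluetun's network stack instead of getting its own
docker run -d --name wireguard \
  --network container:gluetun \
  --cap-add NET_ADMIN \
  lscr.io/linuxserver/wireguard
  # plus the usual linuxserver/wireguard server env vars
```

In compose, the equivalent is network_mode: "service:gluetun" on the WireGuard service; note that inbound handshakes then arrive via Gluetun, so its firewall has to allow them (see Gluetun's FIREWALL_VPN_INPUT_PORTS setting for the tunnel side).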

Selfhosted @lemmy.world

Selectively chaining a VPN to another while allowing split tunnelling on clients?

Currently, I have two VPN clients on most of my devices:

  • One for connecting to a LAN
  • One commercial VPN for privacy reasons

I usually stay connected to the commercial VPN on all my devices, unless I need to access something on that LAN.

This setup has a few drawbacks:

  • Most commercial VPN providers have a limit on the number of simultaneously connected clients
  • I can either obfuscate my IP or access resources on that LAN (including my Pi-hole for custom DNS-based blocking), but not both at once

One possible solution would be to route all internet traffic through a VPN client on the router in the LAN, and figure out how to still have at least a port open for the VPN docker container allowing access to the LAN. But then the ability to split tunnel around that would be pretty hard to achieve.

I want to be able to connect to a VPN host container on the LAN, which in turn routes all internet traffic through another VPN client container while allowing LAN traffic, but still

Selfhosted @lemmy.world

Immich keeps restarting the backup

Hi guys! I'm having my first attempt at Immich (...and Docker, since I'm at it). I have successfully set it up (I think) and connected the phone, and it started uploading. I have enabled foreground and background backup, and I have only chosen the camera album from my Pixel/GrapheneOS phone. Thing is, after a while (when the screen turns off for a while, even though the app is unrestricted in Android/GrapheneOS, or when changing apps... or whenever it feels like it), the backup seems to start again from scratch, uploading the first videos from the album again and again (the latest ones, from a couple of days ago) and working its way back until somewhere in December 2023... at which point it decides to go back and re-do May 2024. It's been doing this a bunch of times. I've seen it mentioned a bunch of times that I should set client_max_body_size in nginx to something large, like 5000MB. However, in my case it's set to 0, which should read as unrestricted. It doesn't skip large videos
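
(For reference, the directive in question; 0 does indeed mean "no limit" in nginx:)

```nginx
# in the server/location block that proxies Immich
client_max_body_size 0;
```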

Selfhosted @lemmy.world

I have issues with asymmetric routing

One is the route from my Proxmox server (vimes) to my NAS (colon), going via my router (pessimal), as it should be. The second one is my NAS going to Proxmox directly. However, I didn't set any static routes, and this is causing issues as the router firewalls those asymmetric connections. This has been happening since I upgraded Proxmox... I am not the best at network stuff, so if someone has some pointers I'd be most grateful.

I'm a moron and had a wrong subnet mask. (A too-wide mask made the NAS think Proxmox was on its own subnet, so it replied directly instead of via the router, hence the asymmetric path.)

Selfhosted @lemmy.world

Traefik + Vaultwarden 502 Error

Edit: Thanks for the help, the issue was solved! I had Traefik's loadbalancer set to route to port 8081 instead of the container's internal port of 80. Whoops.
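
(In label form the fix boils down to one line; a sketch, assuming the Traefik service is named vaultwarden:)

```
traefik.http.services.vaultwarden.loadbalancer.server.port=80
```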

Intro

Hi everyone. I've been busy configuring my homelab and have run into issues with Traefik and Vaultwarden running within Podman. I've already successfully set up Home Assistant and Homepage, but for the life of me cannot get this working. I'm hoping a fresh pair of eyes will be able to spot something I missed or provide some advice. I've tried to provide all the information and logs relevant to the situation.

Expected Behavior:

  1. Requests for *.fenndev.network are sent to my Traefik server.
  2. Incoming HTTPS requests to vault.fenndev.network are forwarded to Vaultwarden
    • HTTP requests are upgraded to HTTPS
  3. Vaultwarden is accessible via https://vault.fenndev.network and utilizes the wildcard certificates generated by Traefik.

Quick Facts

Overview

  • I'm running Traefik and Vaultwarden in Podman, using Quadlet
Selfhosted @lemmy.world

Using Nextcloud as a directory/folder/remote location

Hi guys

Is there any way to access Nextcloud files (self-hosted) in a file manager, just like a regular directory or remote location? The way iCloud or Dropbox let you access files and use them, for example to upload them in a browser. So far I've only managed to access the files in the Nextcloud WebUI or via the command line (but then a resync is necessary).

Any input is appreciated. Thanks!
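
(Nextcloud exposes everything over WebDAV, so most file managers can mount it as a remote location; hostname and username below are placeholders, and the install line assumes a Debian-ish box:)

```bash
# GNOME Files / Dolphin: open this as a network location
# davs://cloud.example.com/remote.php/dav/files/<username>/

# Or mount it on the command line with davfs2
sudo apt install davfs2
sudo mount -t davfs https://cloud.example.com/remote.php/dav/files/<username>/ /mnt/nextcloud
```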