For those kinds of issues I'd recommend snapshots instead of backups
Syncthing or unison might be what you want
Prometheus Alertmanager and Grafana (especially Grafana!) seem a bit too involved for monitoring my homelab (Prometheus itself is fine: it does collect a lot of statistics I don't care about, but it doesn't require configuration so it doesn't bother me).
Do you know of simpler alternatives?
My goals are relatively simple:
- get a notification when any systemd service fails
- get a notification if there is not much space left on a disk
- get a notification if one of the above can't be determined (eg. server down, config error, ...)
Seeing graphs with basic system metrics (eg. cpu/ram usage) would be nice, but it's not super-important.
I am a dev so writing a script that checks for whatever I need is way simpler than learning/writing/testing yaml configuration (in fact, I was about to write a script to send heartbeats to something like Uptime Kuma or Tianji before I thought of asking you for a nicer solution).
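For reference, the script I had in mind is roughly the following (just a sketch: the Uptime Kuma push URL and the 90% threshold are made up), run from a cron job or a systemd timer:

```bash
#!/bin/sh
# report to a hypothetical Uptime Kuma "push" monitor
PUSH_URL="https://uptime.example.lan/api/push/abcd1234"

problems=""

# any failed systemd units?
failed=$(systemctl list-units --state=failed --no-legend --plain | awk '{print $1}')
[ -n "$failed" ] && problems="failed units: $failed"

# any filesystem above 90% usage?
full=$(df --output=pcent,target -x tmpfs -x devtmpfs | awk 'NR > 1 && int($1) > 90 {print $2}')
[ -n "$full" ] && problems="$problems almost-full: $full"

# if the script (or the whole server) never phones home, the push monitor
# itself goes down, which covers the "can't be determined" case
if [ -z "$problems" ]; then
    curl -fsS -G "$PUSH_URL" --data-urlencode "status=up" --data-urlencode "msg=OK" > /dev/null
else
    curl -fsS -G "$PUSH_URL" --data-urlencode "status=down" --data-urlencode "msg=$problems" > /dev/null
fi
```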
Your system will appeal to the intersection between people who like gambling and people who like donating to charities.
Even among them, I don't see why anyone would prefer putting $100 in your web3 thingie instead of just donating $50, gambling with $45, and buying a beer with the $5 they would lose to you... well, there are a lot of ~~stupid~~ peculiar people (especially among crypto bros), so you might actually be ok.
About the implementation, the 50% to charities should be transferred automatically... what's the point of a smart contract if people must trust you to "check the total donations and create a donation on The Giving Block"?
PS:
IDK about the US, but where I live gambling is regulated very strictly: make sure to double check with a lawyer before getting into trouble.
2 more cents :)
I've been using syncthing for a while now, on different devices, and the only unreliability I've run into is with android killing syncthing to save battery life, which is kinda hilarious, considering all the vendor- and google-provided crap they happily waste battery on (I don't use it, but from what I've heard iOS is even worse in this regard).
Specifically, I have a samsung tablet where, no matter how much I tinkered with system settings, syncthing would only run if I manually launched the app or while the tablet was charging (BTW I still use that same tablet, but it now runs LineageOS and syncthing works flawlessly).
All this is to say, you should probably look into system settings and research ways to convince your OS to do what it's supposed to rather than tinkering with syncthing itself.
I don't see the ethical implications of sharing that? What would happen if you did disclose your discoveries/techniques?
I don't know much about LLMs, but doesn't removing these safeguards just make the model as a whole less useful?
I fear it was nothing that entertaining: it was just my "normal" dark panel at the top of the screen and a second "default" white one at the bottom (this last one partially covered the windows I had open). I didn't try triggering notifications or otherwise causing some kind of mayhem.
I'm just messing around with testing/configuring different desktop environments/window managers and I'm looking for a quick way to preview them (running the new session as my user would be fine too - I just thought it would be simpler as a different user)
Wow, that's so neat!
On my machine it opens a fullscreen plasma splash and then it shows the new session intermixed/overlaid with my current one instead of in a new window... basically, it's a mess :D
If I may abuse your patience:
- what distro/plasma version are you running? (here it's opensuse slowroll w/ plasma 6.1.4)
- what happens if you just run `startplasma-wayland` from a terminal as your user? (I see the plasma splash screen and then I'm back to my old session)
I'm not very hopeful, but... just in case :)
I would like to be able to start a second session in a window of my current one (I mean a second session where I log in as a different user, similar to what happens with the various ctrl+alt+Fx, but starting a graphical session rather than a console one).
Do you know of some software that lets me do it?
Can I somehow run a KVM using my host disk as the disk for the guest VM (and without breaking stuff)?
Read this, delete this post and try again.
Yes, XML is different from JSON and YAML, but it's not particularly easier or harder to manually read/edit than JSON or YAML are (IMO they are all a pain, each in its own way).
If you want to look at it from the programmer's side (which is not what OP was talking about)... marshalling/unmarshalling has been a solved issue for at least 20yrs now :) just have a library do it for you (do you map json/yaml properties to your objects manually?).
You don't need to worry about attributes/child elements: `<person name="jack" />` and `<person><name>jack</name></person>` will work the same (ok, this may depend on what language/library you pick - the lib I used back in the day worked either way).
If anything, the issue with XML is all the unnecessarily complicated stuff they added to its "core" (eg. CDATA, namespaces, non-standalone documents, ...) and all the unnecessarily complicated technologies/standards they developed around XML (from Xinclude to SOAP and many others)... but just ignore that BS (like the rest of the world does) and you'll mostly be fine :)
Yaml is fundamentally the same as the json and xml it has mostly replaced (and the toml that didn't manage to replace yaml)... it's a data serialization format and just doesn't have any facility for making abstractions, which are the main tool we humans use to deal with complexity.
Java has had very bad press lately (since the log4j fiasco, I guess? maybe since before).
IDK why people blame Java for any issues with any library/project written in it... it's as dumb as blaming C/C++ for all the windows fuckups, and nobody blames php for the various cpanel vulnerabilities or python for all the shit people write in it.
Best of luck to you!
> I’m trying to understand Git, but it’s a giant conceptual leap.
Git is not that different from svn (I mean, the biggest hurdle is going from a shared folder to any version control system)... I'd say the main difference is that branches live in a different namespace than files (ie. you don't have trunk/src/whatever but just src/whatever in the main branch). On top of that, there's the fact that commit and push are two different things (and the same goes for fetch and checkout), and that merges are way easier than in svn (where you had to merge stuff manually).
If you create a repo locally and clone it twice in two different directories, you can easily simulate what would happen when you and a coworker collaborate via a centralized repo (say, github) - do a few experiments and you'll see it's not as complicated as it seems (I'd recommend using the CLI instead of some GUI client: it's way easier to figure things out without the overhead of learning to differentiate between git concepts and how the GUI tries to help).
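Something along these lines is enough to play with (paths are arbitrary, and the branch name depends on your config):

```bash
# a bare repo stands in for the central one (think github)
git init --bare /tmp/central.git

# two clones play you and your coworker
git clone /tmp/central.git /tmp/you
git clone /tmp/central.git /tmp/coworker

# "you" commit and publish something
cd /tmp/you
echo hello > readme.txt
git add readme.txt
git commit -m "add readme"
git push -u origin HEAD   # -u is only needed the first time

# "your coworker" fetches your work
cd /tmp/coworker
git pull origin master    # or main, depending on your default branch name
```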
Personally, I would sell everything and get a used PC on ebay (a small "minipc" one, unless space for hard disks is needed).
Take a look at what you could buy on ebay just by selling off the nvidia card.
> why is your network like this?
Well, at the moment my network is actually flat :)
This is an experiment I'm doing because I wanted to have all the management stuff on a different subnet (eg. adguard dns is on the "regular" subnet everyone uses, but its web interface is on the special subnet only select devices can talk to).
Of course (like with most stuff in my homelab), it's not like I really have a super-compelling security reason to do that, it's mostly that I wondered "what if?" :D
Oh, the ping option you are referring to is `-I` (upper case) and takes either an interface name or an IP. I did try giving a .10/24 IP to the PC and the results were consistent with scenario 1 (pings where source and destination are on the same subnet work, pings across subnets don't), so I didn't mention that in the OP.
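In case it helps anyone following along, that test looked something like this (192.168.10.101 is just an example for the extra .10 address the PC got):

```bash
ping -I 192.168.10.101 -c 3 192.168.10.102 # same subnet: works
ping -I 192.168.11.101 -c 3 192.168.10.102 # across subnets: 100% packet loss
```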
I don't think I quite explained the situation well enough: my server only has 1 ethernet port (same as my PC), otherwise I wouldn't have bothered with vlans (well, I would still have bothered, since my house still only has one "backbone" cable running through it, but I would have configured it on the switches only).
Anyway... a few of the things you say/imply go against my understanding of networking, so one of us had better go back and RTFM as you suggest :) (just kidding - most probably I just don't understand what you mean)
Thanks! Forwarding is disabled. I don't want the server to steal the router's job :)
So the request goes through but the replies are discarded? That could actually be it!
I think there was an option to allow that... I'll search for it and give it a try. Thanks!
I tried dropping the default routes (one at a time) and it doesn't make a difference, which isn't (I think) surprising as all traffic is local as far as the server in scenario 1 is concerned. Also IIUC only the default gateway with the lowest metric actually counts.
I have two subnets and am experiencing some pretty weird (to me) behaviour - could you help me understand what's going on?
----
Scenario 1
PC: 192.168.11.101/24
Server: 192.168.10.102/24, 192.168.11.102/24
From my PC I can connect to .11.102, but not to .10.102:
```bash
ping -c 10 192.168.11.102 # works fine
ping -c 10 192.168.10.102 # 100% packet loss
```
----
Scenario 2
Now, if I disable .11.102 on the server (`ip link set <dev> down`) so that it only has an IP on the .10 subnet, the previously failing ping works fine.
PC: 192.168.11.101/24
Server: 192.168.10.102/24
From my PC:
```bash
ping -c 10 192.168.10.102 # now works fine
```
This is baffling to me... any idea why it might be?
----
Here's some additional information:
- The two subnets are on different vlans (.10/24 is untagged and .11/24 is tagged 11).
- The PC and the server are connected to the same managed switch, which however does nothing "strange" (it just leaves tags as they are on all ports).
- The router is connected to the aforementioned switch and is set to forward packets between the two subnets (I'm pretty sure I've configured it so, plus IIUC the scenario 2 ping wouldn't work without forwarding).
- The router also has the same vlan setup, and I can ping both .10.1 and .11.1 with no issue in both scenarios 1 and 2.
- In case it may matter, machine 1 has the following routes, set up by networkmanager from dhcp:
  ```
  default via 192.168.11.1 dev eth1 proto dhcp src 192.168.11.101 metric 410
  192.168.11.0/24 dev eth1 proto kernel scope link src 192.168.11.101 metric 410
  ```
- In case it may matter, machine 2 uses systemd-networkd and the routes generated from DHCP are slightly different (after dropping the .11.102 address for scenario 2, of course the relevant routes disappear):
  ```
  default via 192.168.10.1 dev eth0 proto dhcp src 192.168.10.102 metric 100
  192.168.10.0/24 dev eth0 proto kernel scope link src 192.168.10.102 metric 100
  192.168.10.1 dev eth0 proto dhcp scope link src 192.168.10.102 metric 100
  default via 192.168.11.1 dev eth1 proto dhcp src 192.168.11.102 metric 101
  192.168.11.0/24 dev eth1 proto kernel scope link src 192.168.11.102 metric 101
  192.168.11.1 dev eth1 proto dhcp scope link src 192.168.11.102 metric 101
  ```
----
Solution
(please do comment if something here is wrong or needs clarifications - hopefully someone will find this discussion in the future and find it useful)
In scenario 1, packets from the PC to the server are routed through .11.1.
Since the server also has an .11/24 address, packets from the server to the PC (including replies) are not routed and instead just sent directly over ethernet.
Since the PC does not expect replies from a different machine than the one it contacted, they are discarded on arrival.
The solution to this (if one still thinks the whole thing is a good idea) is to route traffic originating from the server and directed to .11/24 via the router.
This could be accomplished with `ip route del 192.168.11.0/24`, which would however break connectivity with .11/24 addresses (similar reason as above: incoming traffic would not be routed but replies would)...
The more general solution (which, IDK, may still have drawbacks?) is to set up a secondary routing table:
```bash
echo 50 mytable >> /etc/iproute2/rt_tables
# this defines the routing table (see "ip rule" and "ip route show table <table>")

ip rule add from 192.168.10/24 iif lo table mytable priority 1
# "iif lo" selects only packets originating from the machine itself

ip route add default via 192.168.10.1 dev eth0 table mytable
# "dev eth0" is the interface with the .10/24 address, and might be superfluous
```
Now, in my mind, that should break connectivity with .10/24 addresses just like `ip route del` above, but in practice it does not seem to (if I remember I'll come back and explain why after studying some more)
I want to have a local mirror/proxy for some repos I'm using.
The idea is having something I can point my reads to, so that I'm free to migrate my upstream repositories whenever I want and also so that my stuff doesn't stop working if one of the jankier third-party repos I use disappears.
I know the various forgejo/gitea/gitlab/... (well, at least some of them - I didn't check the specifics) have pull mirroring, but I'm looking for something simpler... ideally something with a single config file where I list what to mirror and how often to update and which then allows anonymous read access over the network.
Does anything come to mind?
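To give an idea of the amount of "simple" I'm after, this is the kind of DIY fallback I'd otherwise hack together (paths and file names are made up): a plain list of upstream URLs, a script run from cron/systemd timer, and git-daemon for anonymous reads.

```bash
#!/bin/sh
# mirrors.txt: one upstream clone URL per line
MIRROR_DIR=/srv/git-mirrors

while read -r url; do
    name="$(basename "$url" .git).git"
    if [ -d "$MIRROR_DIR/$name" ]; then
        git -C "$MIRROR_DIR/$name" remote update --prune   # refresh an existing mirror
    else
        git clone --mirror "$url" "$MIRROR_DIR/$name"      # first sync
    fi
done < /etc/git-mirrors/mirrors.txt

# anonymous, read-only access over git:// (run separately, e.g. as a systemd service):
#   git daemon --base-path="$MIRROR_DIR" --export-all --reuseaddr
```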
> If going the route of a backup solution, is it feasible to install OpenWRT on all of my devices, with the expectation that I can do some sort of automated backups of all settings and configurations, and restore in case of a router dying?
My two cents: use a "full" computer as your router (with either something like OPNsense or any "regular" linux distro if you don't need the GUI) and OpenWRT on your access points.
Unless you use the GUI and backup/restore the configuration (as you would with proprietary firmwares), OpenWRT is frankly a pain to configure and deploy. At the moment I'm building custom images for all my devices, but (next time™) I'm gonna ditch all that, get an x86 router and just manually manage OpenWRT on my wifi APs (I only have two and they both have the same relatively straightforward config).
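For anyone curious, "building custom images" typically means the OpenWRT Image Builder, roughly along these lines (the device profile and package list here are only examples):

```bash
# inside the unpacked Image Builder archive for your target
make info    # lists the available PROFILE names

# FILES= is a directory overlaid onto the image's filesystem (e.g. files/etc/config/wireless)
make image PROFILE="tplink_archer-c7-v2" \
           PACKAGES="luci htop -ppp -ppp-mod-pppoe" \
           FILES="files/"
```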
> It’s a pain that I know can be solved with buying dedicated access points (…right?)
Routers and access points are just computers with network interfaces (there may be layer-2-only APs, but honestly I've never heard of any)... most probably your issue is that the firmware of your "routers as access points" doesn't want to be configured as a dumb AP.