Probably not... because I'm comparing it to everything... but I'd like to share some details about how my app works so you can tell me what I'm missing. I'd like to have wording in my app that says something like "most secure chat app in the world"... I probably can't do that because it doesn't qualify... but I want to understand why.
I'm not an expert on cyber security, and I'm sure there are many gaps in my knowledge in this domain.
Using JavaScript, I created a chat app. It uses peerjs-server to create an encrypted WebRTC connection, which is then used to exchange additional encryption keys from the cryptography functions built into browsers, adding a redundant layer of encryption. The key exchange is done Diffie-Hellman style over WebRTC (which can be considered secure when performed over public channels).
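To make that concrete, here is a minimal sketch of what a Diffie-Hellman-style exchange with the browser's built-in Web Crypto API could look like over an already-open data channel. This is not the app's actual module, just an illustration; the `channel` variable and the message framing are assumptions.

```javascript
// Minimal sketch: ECDH key agreement over an existing RTCDataChannel.
// Assumes `channel` is an open data channel; framing is simplified for illustration.
async function deriveSharedKey(channel) {
  channel.binaryType = 'arraybuffer';

  // Generate an ephemeral ECDH key pair using the browser's Web Crypto API.
  const keyPair = await crypto.subtle.generateKey(
    { name: 'ECDH', namedCurve: 'P-256' },
    false,                      // private key stays non-extractable
    ['deriveKey']
  );

  // Send our public key and wait for the peer's.
  const rawPub = await crypto.subtle.exportKey('raw', keyPair.publicKey);
  channel.send(rawPub);
  const peerRaw = await new Promise((resolve) => {
    channel.onmessage = (e) => resolve(e.data);
  });
  const peerPub = await crypto.subtle.importKey(
    'raw', peerRaw, { name: 'ECDH', namedCurve: 'P-256' }, false, []
  );

  // Derive a symmetric AES-GCM key from the shared secret.
  return crypto.subtle.deriveKey(
    { name: 'ECDH', public: peerPub },
    keyPair.privateKey,
    { name: 'AES-GCM', length: 256 },
    false,
    ['encrypt', 'decrypt']
  );
}
```

Note that this sketch does nothing to authenticate the peer; without verifying the public keys out of band it is still open to man-in-the-middle attacks on the signaling path.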
I sometimes receive feedback like "JavaScript is inherently insecure". I disagree with this and have open-sourced my cryptography module. It's basically a thin wrapper around the vanilla crypto functions of a browser. There is a previous post on the matter.
Another concern for my kind of app (a PWA) is that the developer may introduce malicious code. This is an important point, which is why I open-sourced the project and give instructions for self-hosting. Self-hosting this app has some unique features: unlike many other self-hosted projects, this app can be hosted on GitHub Pages for free (instructions are provided in the readme). I'm also working on introducing a way for users to self-host federated modules. There is a previous post on the matter.
To mitigate things like browser extensions, the app uses strict CSP headers to prevent unauthorised code from running. Self-hosting users should take note of this when setting up their own instance.
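For reference, a strict policy for this kind of app might look something like the snippet below. This is a hypothetical example, not the app's actual policy; the peerjs-server hostname is a placeholder, and since static hosts like GitHub Pages can't set response headers, it is shown as a meta tag.

```html
<!-- Hypothetical strict CSP delivered via meta tag (GitHub Pages can't set headers).
     connect-src must allow the signaling (peerjs-server) endpoint. -->
<meta http-equiv="Content-Security-Policy"
      content="default-src 'self'; script-src 'self'; style-src 'self';
               connect-src 'self' wss://your-peerjs-server.example;
               object-src 'none'; base-uri 'none'">
```

Self-hosters who run their own peerjs-server would need to adjust connect-src accordingly.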
I received feedback that the Signal/SimpleX protocols are great, etc. I'd like to compare that opinion to how my todo-app demo works (the work is all experimental, work-in-progress, and far from finished). The demo shows simple functionality for a basic decentralized todo list, and this should already be reasonably secure. I could add a few extra endpoints for exchanging keys Diffie-Hellman style, which at this point is relatively trivial to implement. I think its simplicity could be a security feature.
The key detail that makes this approach unique is that, as a webapp, unlike other solutions, users have a choice of using any device/OS/browser.
I think if I stick to the principle of avoiding any kind of "required" service provider (myself included) and allow the frontend and the peerjs-server to be hosted independently, I'm on track for creating a chat system with the "fewest moving parts". I hope you will agree this is true P2P, and I hope I can use this as a step towards true privacy and security. Security might be further improved by using a trusted VPN.
I created a threat model for the app in hopes that I could get a pro-bono security assessment, but understandably the project is too complicated for pro-bono work.
While there are several similar apps out there like mine, I think mine is a distinctly different approach, so it's hard to find best practices for the functionality I want to achieve, in particular security practices to use with P2P technology.
(Note: this app is an unstable, experimental proof of concept and not ready to replace any other app or service. It's far from finished and provided for testing and demo purposes only. This post is to get feedback on the app to determine if I'm going in the right direction for a secure chat app.)
If this is web chat where the JS is coming from the server like on normal web pages, that's the death of security right there. The attacker takes over the server and replaces the JS with a backdoored version, which the users receive next time they reload the page, and that's all she wrote. That's what "JS is inherently insecure" refers to, not to the language itself, although that has plenty of its own problems. (Self-hosted servers aren't especially immune to takeover, if that's what you're thinking).
This post is to get feedback on the app to determine if i’m going in the right direction for a secure chat app
You are going in the wrong direction, sorry to say.
Before going further I'd encourage you to spend some time with the wonderful book "Security Engineering" by Ross Anderson. The 2nd edition is online as free PDFs on the author's home page, and parts of the 3rd edition are there too. Don't treat it as something super technical, but rather as a manual on how to be paranoid.
So, assuming a trusted version is loaded once (which HAS to be the case anyway, otherwise you can't start, same as with a native executable), then an arbitrary version can't be loaded next without being validated first.
PS: I'm not saying this is what OP does; I'm saying that executing code (JavaScript or not) that must be downloaded first is not in itself a security problem.
Ok, I looked at the Mozilla page. If I understand it right, it lets the server specify a hash that the client checks against a remote resource such as a script from a CDN. So that can help notice a compromised CDN, but not a compromised server. If the hash is permanently stored in the browser, that is better, but there are also browser updates to say nothing of exploits. This approach just seems doomed.
Added: hmm, maybe you could load the page from a local file or a bookmarklet containing the hash. But then the whole app might as well be local. It was once possible to sign JS with a code signing certificate, but I haven't heard about those in ages.
What you're looking for is called remote attestation but again, many attacks possible.
I think for my app to be regarded well in security it's important for people to use their own instances. The "live app", as I call it, is an experimental proof of concept. I'm wondering about the idea that the app is run on your own fork, which occasionally syncs from upstream. As it stands my app is too garbage for anyone to want a copy, but that should eliminate those concerns.
It's also an offline-first PWA. Right now it fetches the latest version, but I don't see why I can't create a toggle in the UI to not fetch if there is a cached copy... Again, the app is unstable and experimental. I'm working on fixes and improvements as I see them, to make a better app. It's a while away from being able to advocate self-hosting to users. But in theory it could address your concerns?
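A rough sketch of what that toggle could look like in a service worker, purely illustrative and not the app's actual code; the message type and cache name are made up, and a real implementation would persist the flag rather than keep it in worker memory.

```javascript
// Hypothetical service-worker sketch: once the user "pins" their current version,
// serve everything from cache and never hit the network for statics again.
const CACHE = 'app-v1';
let versionPinned = false;  // in a real app, persist this (worker memory is ephemeral)

self.addEventListener('message', (e) => {
  if (e.data && e.data.type === 'PIN_VERSION') versionPinned = !!e.data.value;
});

self.addEventListener('fetch', (event) => {
  event.respondWith((async () => {
    const cached = await caches.match(event.request);
    if (cached && versionPinned) return cached;           // pinned: use the copy you already trust
    try {
      const fresh = await fetch(event.request);
      if (event.request.method === 'GET') {
        const cache = await caches.open(CACHE);
        cache.put(event.request, fresh.clone());          // keep the cache warm for offline use
      }
      return fresh;
    } catch {
      return cached;                                      // offline fallback
    }
  })());
});
```

Of course this only freezes the copy you already have; it doesn't prove that copy was trustworthy in the first place.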
Many attack vectors do still exist. P2P web tech seems to allow for an interesting approach and could help reduce the attack surface. The app is available for iOS, Android and desktop. Let me know if you have more concerns.
Shrug, it sounds like you can do pretty well using browser capabilities, given browser limitations. That doesn't help so much when the browser itself is a huge attack surface. The standalone apps get rid of the browser (they don't use a webview or anything like that, I hope) and as such, I'd probably use them in preference to the browser version. In the end though, none of this stuff is anywhere near the level of what payment terminals or bitcoin wallets use. That's probably fine for most users.
There is a site pageintegrity.net that offers a browser extension that allows signing web pages. Again I'm dubious, but at least they are thinking about a valid problem.
I'd probably be satisfied using old fashioned unix talk (ytalk) over ssh tunnels but I think these days, you need mobile clients.
that can help notice a compromised CDN, but not a compromised server.
Not sure I understand the distinction; a CDN is a server, so if OP is hosting code to execute on their server, it would be checked by whatever has already been downloaded and run locally before, i.e. a PWA.
If the hash is permanently stored in the browser, that is better, but there are also browser updates
I'm rather sure that localStorage persists over browser updates so that can be "permanent enough"
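As a sketch of what pinning a hash locally could look like (purely illustrative, trust-on-first-use style; the URL and storage key are made up):

```javascript
// Sketch: trust-on-first-use pinning of a script's hash in localStorage.
// Illustrative only; the storage key and URL are placeholders.
async function verifyPinnedScript(url) {
  const res = await fetch(url);
  const bytes = await res.arrayBuffer();
  const digest = await crypto.subtle.digest('SHA-256', bytes);
  const hex = [...new Uint8Array(digest)]
    .map((b) => b.toString(16).padStart(2, '0')).join('');

  const key = `pin:${url}`;
  const pinned = localStorage.getItem(key);
  if (!pinned) {
    localStorage.setItem(key, hex);   // first run: trust and remember
    return true;
  }
  return pinned === hex;              // later runs: flag any change to the code
}
```

The obvious catch is that the code doing the checking is itself served by the same origin, so this only helps if that bootstrap piece can be trusted.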
to say nothing of exploits.
I mean... sure, but at that point the same applies to native. If you can't trust the running environment you are screwed anyway.
Not sure I understand the distinction, a CDN is a server, so if OP is hosting code to execute on their server, they would be checked by whatever has already been downloaded and run locally before, i.e a PWA
The idea is that the server (yoursite.com) loads some remote resources, like <script src="crappycdn.com/react.js" hash=12345abc> or whatever. The browser checks that the cdn sends what the server told it to expect.
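(The actual attribute is "integrity", under the name Subresource Integrity; a complete tag looks roughly like the following, with a placeholder hash value.)

```html
<!-- Subresource Integrity: the browser refuses to run the CDN's copy of the file
     if its hash doesn't match the one the page declared. Hash value is a placeholder. -->
<script src="https://crappycdn.com/react.js"
        integrity="sha384-BASE64_HASH_OF_THE_EXPECTED_FILE"
        crossorigin="anonymous"></script>
```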
I’m rather sure that localStorage persists over browser updates so that can be “permanent enough”
There is also the issue that OP apparently plans to push frequent updates to the server. Until that settles down, hash checking is useless since the code keeps changing. Also, some of us clear that local storage pretty often.
I mean… sure but at that point the same apply to native. If you can’t trust the running environment you are screwed anyway.
A huge, buggy, constantly changing program like a browser is more likely to have exploits than a simpler, single-purpose program. Also, yes, the running environment that sees plaintext is within the security boundary, so you do have to worry about it. If you saw the movie "Citizenfour", the journalists communicated with Edward Snowden using laptops that were air gapped, i.e. completely disconnected from the internet. They'd get an encrypted email (GPG) on a connected computer, transfer it to the gapped machine on a USB stick(?), and decrypt and read it on the gapped machine. Even that had vulnerabilities of course.
As for using browser cryptography, I never got around to trying to understand this in detail, but there was a known incident where some Facebook app somehow intercepted the TLS encrypted traffic of other apps. Presumably they can extend such schemes to the browser libraries.
Because of all these issues, high security and general purpose computers/phones just don't mix. It's better to avoid pretense and just aim to make something that's reasonably secure and that's easy to use. Remember PGP stood for "pretty good privacy". That's a more realistic claim.
Thanks for your thoughts. I'm sure others would have similar concerns.
The attacker takes over the server and replaces the JS with a backdoored version
This is a core concern and why the app is open source and self-hostable. Details are provided in the readme to create a self-hosted fork that runs on GitHub Pages. There are several ways around this concern described here.
You are going in the wrong direction
That's unfortunate if you still think so, but I'd like to hear any other concerns if you have any.
Why do you expect that the JS that the server sent you this time is the same as what you audited earlier? Self hosting doesn't help against unfriendly takeover. I will look at the other post about hashing. Maybe that helps, depending.
Your concern is well placed. This is why the project has to be open source and why I encourage self-hosting.
If this is your concern, I think the offering from other apps is much more shady. While many projects are open source, when provided from an app store you have much less ability to verify that the binaries involved match the available source code. It's at this point that backdoors can be introduced. (It may be noteworthy that my app serves code unminified for transparency.)
As it stands for my app, it's unstable, so I suggest always using the latest version because I will be adding fixes. It is an offline-first PWA. It's possible to make it so it doesn't fetch statics if it already has a cached copy. At this early stage it doesn't make sense to use this feature because the project is unstable and receiving various fixes and improvements throughout. (There is no audited version of the app.)
I wouldn't expect other apps of this sort to be highly secure either, especially the ones that run inside web browsers like this one does. I think users facing so-called advanced persistent threats (APTs) shouldn't use anything like that. Examples: military, law enforcement (sometimes), industrial espionage targets, or people like Julian Assange back in the day. For my own stuff I hope I'm less of a target, so I mostly want to avoid dragnet surveillance. This kind of app can be fine for that, but it's mistaken to consider them to have very high security compared with dedicated solutions. I didn't include banking (say payment endpoints) since that tends to want particular financial protocols rather than "chat". But those systems use specialized hardware (https://www.join.tech/blog/2024-0x10-the-backbone-of-cybersecurity-hardware-security-modules.php), not javascript in a browser for heaven's sake.
Cryptography per se is now mostly a solved problem, but the wider area of security is huge and full of hazards. Again I'd suggest Anderson's book that I mentioned, and maybe some spy novels, to get into the feel of the thing.
Alas, we circle back around to "JavaScript is inherently insecure".
especially the ones that run inside web browsers like this one does
I don't think this is a valid assertion; it seems wrapped in vagueness about the attack vector. As a webapp it has to be sandboxed in a browser, and any vulnerabilities will be related to that. I often hear about browsers having backdoors (which is possible), but people say that in a world where their operating system is more likely the attack surface. I'd like to argue that as a webapp (with a suitable security stack), this app is secure. Take for example any existing secure app (Signal/SimpleX/WhatsApp?). They can have all the required bells and whistles for secure/private functionality, but all that encryption can be undermined if a typical low-end phone and OS is more than capable of snooping your screen. The same would apply to my app running in a browser. My argument is: if you don't trust Google, you shouldn't use Chrome. With a lot of those native offerings, your choice is limited to something like Apple or Android.
This app also contains builds for iOS, Android and desktop. I don't promote them because I'm simply not convinced that these native builds are better than what web technology can provide. A recurring concern is the reliability of the statics served... it seems that's easy to eliminate if I make it open source and self-hostable. It puts me at a competitive disadvantage, but consequently it is unparalleled in the devices it can run on.
APTs are as valid a concern as any. If this is something you're worried about, I think using this app with a trusted VPN in combination with disposable profiles should be easy to achieve. The mitigation for APTs seems simple if that is a concern, but let me know if I'm overlooking something. WebRTC can leak IP addresses, and after investigating this, I think you can achieve a reasonable degree of "hiding your personal IP address" based on the information here.
compared with dedicated solutions
The purpose of this project is to create a secure chat app; I want this app to be one of those 'dedicated solutions'. As it stands, it's created by a baboon sat in front of ChatGPT, but the goal is indeed to create something with unparalleled security. People always seem to dismiss the idea of this app being secure on the grounds that it's JS, but I haven't come across any credible way to undermine its security without having compromised the stack above the app (browser/OS/peer/network). If any of those are a concern, the app is presented in various distributions, from website to native builds.
I hope I'm not coming across as stubborn here. I really think this app represents a different paradigm in security that nobody is exploring. I don't think I've noticed any lack of interest in decentralized or P2P technology, but nobody seems to be working on this kind of app as a webapp. I find that it's not only possible, but relatively trivial to get basic functionality together. I understand that the user experience isn't great at the moment and will limit the people who want to use it, but on security grounds alone, I think it could be a real contender for secure chat.
Unfortunately you are. Without intending disrespect, you're relatively new to this field, and don't seem familiar enough with the many successful and unsuccessful approaches that have been taken to this stuff. That makes it too easy to repeat mistakes of the past.
i really think this app represents a different paradigm in security that nobody is exploring.
It's not new; in fact the crypto primitives in the browser are intended to support precisely this type of thing. If you want to do something relatively unexplored, try to figure out what metadata you can avoid exposing. I will say that your native apps avoid some of the issues of browser cryptography. Still, I'd find it easier to accept the product if the claims were toned down from how this started. One thing to ask yourself: would you use a bitcoin wallet to manage megabucks worth of coins, if the wallet software was browser JS served from GitHub Pages? If not, rethink your approach.
Anyway, we're going around in circles. It's good that you're actually implementing stuff. I'd be interested to know what toolkits you're using for the native apps. That's an area that I've wanted to know more about.
My bad. I notice my ego sometimes inflates, which seems to stem from naive confidence.
I have observed pitfalls of other apps like mine, in particular one called Cryptocat. I'm sure I can't ever be exhaustive enough in learning from other examples.
Reducing metadata is indeed a goal, and I think I have it reduced to a level where I can exchange WebRTC connection data over QR codes or plain text. The IP is exposed at this point, but I think this can be further scrubbed with a VPN. Perhaps this is interesting for you: it's a minimal example of establishing a WebRTC connection with plain text. Not user friendly, but it works without a peer-broker service. In the app I'd like to frame this around exchanging data over QR codes.
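For a rough idea of that manual-signaling approach, here's an illustrative sketch of the offering side only; this isn't the app's actual code, and the answering peer would need the mirror-image steps.

```javascript
// Sketch of broker-less signaling: the offer/answer are exchanged as plain text
// (or a QR code) by hand, instead of going through a peerjs-server.
const pc = new RTCPeerConnection();            // add a STUN server here if peers are behind NAT
const channel = pc.createDataChannel('chat');
channel.onmessage = (e) => console.log('peer:', e.data);

async function makeOfferText() {
  await pc.setLocalDescription(await pc.createOffer());
  // Wait for ICE gathering so the SDP contains the candidates.
  await new Promise((resolve) => {
    if (pc.iceGatheringState === 'complete') return resolve();
    pc.onicegatheringstatechange = () => {
      if (pc.iceGatheringState === 'complete') resolve();
    };
  });
  return JSON.stringify(pc.localDescription);  // hand this text (or a QR code) to the other side
}

async function acceptAnswerText(text) {
  await pc.setRemoteDescription(JSON.parse(text)); // paste the peer's answer back in
}
```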
As for the bitcoin wallet thing, I would think so if it's well tested and ironed out. As long as I can facilitate downloading the data (for backup) and syncing the data between devices, it would be doing that without registering with any backend. There are countless examples of bitcoin exchanges collapsing and taking people's assets with them. The same could be said about the quality of security provided by chat app providers.
If it's peer to peer text over webrtc, you might write it so that each client sends a fixed number of bytes per second nonstop (some of the bytes are padding and discarded at the other end). That is supposed to stop eavesdroppers from observing when somebody is typing. It's just the exact same rate of encrypted traffic all the time, 24/7 if you have always-connected computers at home.
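A hedged sketch of that constant-rate idea over a data channel; the frame size, interval, and framing are arbitrary illustrative choices, and `channel` is assumed to be an open RTCDataChannel.

```javascript
// Sketch: send a fixed-size frame at a fixed interval, whether or not the user
// typed anything, so an eavesdropper sees the same traffic pattern 24/7.
// In a real system each frame would also be encrypted before sending.
const FRAME_BYTES = 256;
const INTERVAL_MS = 200;
const outbox = [];

function queueMessage(text) {
  outbox.push(new TextEncoder().encode(text));
}

setInterval(() => {
  const frame = new Uint8Array(FRAME_BYTES);        // all-zero padding by default
  const payload = outbox.shift();
  const len = payload ? Math.min(payload.length, FRAME_BYTES - 2) : 0;
  new DataView(frame.buffer).setUint16(0, len);     // 2-byte length prefix
  if (payload) frame.set(payload.subarray(0, len), 2);
  channel.send(frame);                              // receiver reads the prefix, drops the padding
}, INTERVAL_MS);
```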
I don't think VPN helps much, and if it's real time chat you have to make some concessions to keep latency under control. If you can stand a day or two of latency like old-school email remailers, you can do more reordering and so on.
Cryptographer saying: A good disguise does not reveal the person's height!
All nice ideas! I'll take note. I'd like to make time so that on each initial connection it generates new keys too; this should give what I think is forward secrecy. (Let me know if I'm wrong.)
I don't know the specifics of VPNs and their implications for WebRTC. I tried testing and sharing my observations here. I'm open to advice on this.
You asked about the native builds... Tbh I don't know much about it. I did a short search and these toolkits seem to be well regarded (currently?). As a PWA I have a lot of flexibility in the app's form factor. I was thinking about how easy it would be to make it into a browser extension. (It's not about it being useful, but just providing that extra option.)
Yes, you should probably use TLS 1.3 in an ECDH mode for your crypto layer, for forward secrecy. I do believe it is (or was) a known issue that WebRTC exposes your IP address to the other client, but I'm not well versed on this. If the other client is someone you trust, it might be OK to expose your IP address to them, as long as you don't also expose it to eavesdroppers.
Oh when you mentioned a native build I thought you meant a real one, not an embedded browser. I guess the embedded browser is still better than using a full blown browser that includes whatever buggy extensions that the user happens to have installed, and preferably doesn't take browser updates automatically.
Someone in this group (unfortunately I didn't save the link since I wanted to think about it more) said that for a really secure application, it should be impossible to update the software. Real world systems aren't that extreme, but there is something to the thought. Of course it means that the very first version has to be free of vulnerabilities, but that's part of why everyone knows security is hard.