One of the rare cases where a blockchain would actually be useful: a federated internet archive that uses the chain to validate that saved data hasn't been altered by a malicious actor trying to tamper with proofs.
That would be really cool, but horribly inefficient because of the sheer amount of storage required.
You mean a "github repo". Git by itself doesn't give a hoot about validating authors whatsoever (I could sign a commit as "Bill Gates bill@microsoft.com" and git would happily accept it), and it's not federated (multiple people manually downloading various states of the repo at various times doesn't count).
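For what it's worth, you can see why git doesn't care: object IDs are just a hash over the object's bytes, and the author line inside a commit is plain text like anything else. A minimal sketch of git's content addressing (SHA-1 over a type/length header plus the body):

```python
import hashlib

def git_object_hash(obj_type: str, body: bytes) -> str:
    """Compute a git object ID the way git does: SHA-1 over
    "<type> <size>\\0" + body. The author line inside a commit body
    is just more bytes -- nothing here validates who wrote it."""
    header = f"{obj_type} {len(body)}\0".encode()
    return hashlib.sha1(header + body).hexdigest()

# A blob's ID depends only on its content:
print(git_object_hash("blob", b"hello\n"))
# -> ce013625030ba8dba906f756967f9e9ca394464a (same as `git hash-object`)
```

So the hash chain detects tampering with *content* after the fact, but it never asserts anything about identity; that's what GPG commit signing is bolted on for.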
Github ensures owners are who they say they are, as linked to their profile (though email validation only goes as far as "well, they clicked the link in the email, so this must be their email account"). Github also isn't federated: if that one site goes down, it takes all the repos with it. Sure, someone might have a clone, but random people cloning at random times yields different states of the repo depending on when the clone/fetch occurred, so you'd end up with tens/hundreds/thousands of sources with various levels of truth.
It's not a minor nitpick. The comment was that "nobody calls a git repo a blockchain". It's because it's not a blockchain, or even remotely similar to one.
No worries. I just correct people on it because it's caused problems at work before. It's a pain when people think that git automatically means github, and they start complaining about cost, and Microsoft feeding their AI, and setting up user accounts, and etc etc etc.
I'm like... dude, I just want to sync the code from a central server, we can do it in house for free in 5 minutes...
I mean, you don't need a blockchain for that, the same way distro mirrors don't need one. It can be federated, with each upload verified through hashes proving it's the real upload. I'd argue a blockchain would actually take authority away from the archive: a bad actor could spin up enough servers to poison the chain and claim authority simply because they had the computing power.
So basically a blockchain, but for a bunch of files, not ordered. So instead of a native token, users can just trade bits of information as currency. 🙀
If it goes really well, we could even recruit one of the Bitcoin developers to help.
Yes, this is a great example of where ipfs would work (specifically for file hosting, not necessarily for the actual web interface), and also, no ipfs is not a blockchain, and it shouldn’t be. I thought we were past the whole “can this be a blockchain” thing, but here we are. Blockchain is cool tech. It’s also incredibly inefficient for anything beyond a transaction ledger, or in today’s case, money laundering and trying to avoid taxes and regulation.
The thing is, sometimes articles must be removed from the IA (copyright, which I disagree with, or when leaked information could threaten lives). With a blockchain this would be impossible.
I'd be interested in seeing real examples where lives are threatened. I find it unlikely that the Internet Archive would be the exclusive arbiter of so-called deadly information.
There was an actual case where a journalistic article about Afghanistan accidentally leaked the names of sources and people who had helped Westerners there, which did endanger those people's lives.
No. The archive of it isn't doing the dangerous part. The info was already out there and the bad actor who would do something malicious would get that info from the same place the archive did. I need you to show how the archival of information that was already released leads to a dangerous situation that didn't already exist.
I thought of something but I don’t know if it’s a good example.
Here’s the hypothetical:
A criminal backs up a CSAM archive. Maybe the criminal is caught, heck say they’re executed. Pedos can now share the archive forever over encrypted messengers without fear of it being deleted? Not ideal.
Yeah this is a hard one to navigate and it's the only thing I've ever found that challenges my philosophy on the freedom of information.
The archive itself isn't causing the abuse, but CSAM is a record of abuse and we restrict the distribution not because distribution or possession of it is inherently abusive, but because the creation of it was, and we don't want to support an incentive structure for the creation of more abuse.
i.e. we don't want more pedos abusing more kids with the intention of archival/distribution. So the archive itself isn't the abuse, but the incentive to archive could be.
There are also a lot of questions about the ethics of CSAM in general that I don't think we're ready to confront. It's a hard topic all around, and nobody wants to seriously address it beyond virtue signalling about how bad it is.
I could potentially see a scenario where the archival could be beneficial to society similar to the FBI hash libraries Apple uses to scan iCloud for CSAM. If we throw genAI at this stuff to learn about it, we may be able to identify locations, abusers and victims to track them down and save people. But it would necessitate the existence of the data to train on.
I could also see potential for using CSAM itself for psychotherapy. Imagine a sci-fi future where pedos are effectively cured by using AI trained on CSAM to expose them to increasingly mature imagery, allowing their attraction to mature with it. We won't really know if something like that is possible if we delete everything. It seems awfully short sighted to me to delete data no matter how perverse, because it could have legitimate positive applications that we haven't conceived of yet. So to that end, I do hope some 3 letter agencies maintain their restricted archives of data for future applications that could benefit humanity.
All said, I absolutely agree that the potential of creating incentives for abusers to abuse is a major issue with immutable archival, and it's definitely something that we need to figure out, before such an archive actually exists. So thank you for the thought experiment.
Having multiple servers that store file checksums would have much less overhead, would be easily replicable and appendable, and would require no unnecessary computational labor. Linux Mint currently uses a checksum process to verify that a downloaded ISO hasn't been altered in any way, and it can work for any file (preferably not humongous ones).
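The Linux Mint check is just a SHA-256 digest of the downloaded file compared against a published (and ideally GPG-signed) checksum file. A minimal sketch of the same verification:

```python
import hashlib

def sha256_matches(path: str, expected_hex: str) -> bool:
    """Stream the file in 1 MiB chunks (so large ISOs don't need to
    fit in memory) and compare its SHA-256 digest to the published
    checksum, e.g. a line from sha256sum.txt."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest() == expected_hex.lower()
```

This is equivalent to running `sha256sum -c sha256sum.txt` against the distro's published checksum list; the checksum servers in the proposal above would just be serving those `expected_hex` values from multiple places.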
That’s an excellent question. Unfortunately I do not have an answer. But I believe it’s worth discussing some means of redundancy for the IA; even if it’s as simple as rsync to other hosts.
The self-hostable search engine YaCy kind of has this feature and architecture, via DHT-based inter-peer search combined with local page caching, though caching is something a node operator needs to enable manually.