Researchers have written up the unusual case in a medical journal.
A dystopian story writer would get lambasted for including something so ridiculously over-the-top bleak in their story. Yet here we are in reality where...
Huh, is there an option for being immortal in Kenshi in the "you are immortal but not invincible" way, where characters never die but still need someone to come along, save them, and patch them up before they can move again? I used to simulate this in Rimworld with a bleeding-out mod that briefly kept pawns from dying after losing most vital organs, combined with a mod that regenerated lost parts at 10% efficiency until they fully regrew, leaving pawns alive but unable to do anything until recovery.
I mean more that things people in a simulation can't yet observe in great detail won't be simulated in great detail until they can see that kind of detail, and by then I'd assume technological advances in the host world would have improved the hardware enough to simulate that detail in a reasonable amount of time. There's also the host world's ability to edit the simulation retroactively: things that weren't simulated in great detail when people inside first observed them could be changed after the fact, so that from the inhabitants' point of view they had always been able to see that detail in their memories, history, and other records, while from the outside only minimal changes are made to keep everything consistent with the retroactive fix. Not in a dystopian way, mind you, just things like "this star was actually always a few light-years away from its current position in the sky": small technical details smoothed over in the internal history so they match up with the rest of the simulation.
There are a couple of tricks one could use: having the simulation skip steps in less important areas, or simulating different regions at different times in the host world and only syncing them back together when necessary, both of which would be invisible to those inside. The simulation also wouldn't have to run in real time; it might run slower or inconsistently in the host world while the people inside perceive it as stable and smooth.
Not that I'm claiming it's true; it's simply an interesting thing to think about, along with ways around the processing-speed issues. If humanity ever builds a simulation of even a small universe, I imagine some of these tricks would be used and smoothed over inside that universe, since things can look messy from the outside while looking normal on the inside.
I feel like one way to do this would be to break models and their training data into mini models trained on mini-batches instead of one big model, while also restricting training data to sources used with permission or in the public domain. Then, whenever a company is required to remove information whose usage permission has been revoked or has expired, it can identify the relevant records in the mini-batches, delete them, and retrain only the corresponding mini model, which is far quicker and cheaper than retraining the entire massive model.
A major problem with this, though, would be figuring out how to efficiently query multiple mini models and combine their outputs into a single response. I'm not sure how you could do that, at least very well...
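The shard-and-retrain idea above can be sketched in a few lines. This is a minimal toy, not anything from the comment: the shard count, the "mini model" (just a mean over its shard), and the averaging aggregation are all illustrative stand-ins for real learners, but the structure shows why removal is cheap: a deterministic record-to-shard mapping means revoking one record only retrains one shard's model.

```python
# Toy sketch of sharded training for cheap data removal (assumptions:
# NUM_SHARDS, the mean-based "mini model", and average aggregation are
# all illustrative, not a real training setup).
from statistics import mean

NUM_SHARDS = 4

def shard_of(record_id):
    # Deterministic assignment: a record always lands in the same shard,
    # so on removal we know exactly which mini model needs retraining.
    return record_id % NUM_SHARDS

def train_mini_model(shard_data):
    # Stand-in "model": the mean of the shard's values. Retraining cost
    # is proportional to one shard, not the whole corpus.
    return mean(v for _, v in shard_data) if shard_data else None

def train_all(dataset):
    shards = [[] for _ in range(NUM_SHARDS)]
    for rec in dataset:
        shards[shard_of(rec[0])].append(rec)
    return shards, [train_mini_model(s) for s in shards]

def forget(shards, models, record_id):
    # Delete one record, then retrain ONLY its shard's mini model.
    idx = shard_of(record_id)
    shards[idx] = [r for r in shards[idx] if r[0] != record_id]
    models[idx] = train_mini_model(shards[idx])

def predict(models):
    # Crude aggregation: average the surviving mini models' outputs.
    # This is exactly the hard open question from the comment above.
    live = [m for m in models if m is not None]
    return mean(live) if live else None

data = [(i, float(i)) for i in range(8)]   # (record_id, value) pairs
shards, models = train_all(data)
forget(shards, models, 5)                  # permission revoked for record 5
print(predict(models))
```

The aggregation step here is deliberately naive (a plain average); doing this well for large language-style models is the genuinely unsolved part.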