It depends on how websites choose to implement it, and on whether other browsers implement it at all.
If Firefox et al. choose not to implement browser environment integrity, then any website that requires strict integrity would completely cease to work in Firefox, since the browser couldn't respond to the trust check at all. It is simply dead. However, if they do implement it, which I imagine they would if this API actually becomes widespread, those sites should continue to work fine, even though the browsers would be stuck with the limitations on environment modification inherent to the DRM (aka RIP ad blockers).
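To make the "can't respond to a trust check" failure concrete, here's a minimal sketch of what a strict server-side gate could look like, assuming a WEI-style flow where the browser forwards a signed token from an attester. The header name and verification helper are hypothetical; the draft spec doesn't pin these details down:

```python
# Hypothetical sketch of a "strict integrity" gate on the server side.
# Assumes a WEI-style flow: the browser obtains a signed token from an
# attester and forwards it with each request. The header name and
# verify_with_attester() are illustrative, not from the actual spec.
from flask import Flask, request, abort

app = Flask(__name__)

def verify_with_attester(token: str) -> bool:
    """Placeholder: a real check would validate the token's signature
    against the attester's published keys and check expiry."""
    return token == "valid-for-demo"  # stand-in for real signature checks

@app.route("/article")
def article():
    token = request.headers.get("X-Env-Integrity")  # hypothetical header
    if token is None or not verify_with_attester(token):
        # Strict mode: a browser that never requests attestation
        # (e.g. one that doesn't implement the API) can't get past this.
        abort(403)
    return "Paywalled content here."

if __name__ == "__main__":
    app.run()
```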
Websites will vary though. Some may not implement it at all, others may implement a non-strict integrity check that may happily serve browsers that do not pass the check. Third parties can also run their own attestation servers that will report varying levels of environment data. Most likely you will see all Google sites and a majority of "big" websites that depend on ad revenue implement strict integrity through Google attestation servers so that their precious ads don't get blocked, and the internet will become an absolutely horrid place.
Frankly, I'll just stop using anything and everything that chooses to implement this, since we all know Google is going to go full steam ahead with implementation regardless of how many users complain. Protecting their ad revenue is priority 1 through 12,000, and fuck everybody else.
What everybody seems to be forgetting is that a ton of web-content fetching happening right now isn't done by browsers at all.
For example, all the webcrawlers doing indexing for search engines.
Consider what happens when a major website adopts this: either it becomes inaccessible to any webcrawler that doesn't implement the API (say, the ones indexing sites for search engines other than Google's), or it carves out exceptions for webcrawlers, and those exceptions are one big backdoor for browsers to come in through (in fact, a number of paywall-bypassing solutions rely on sending the right HTTP headers to exploit exactly those existing crawler exceptions).
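As a concrete illustration of that crawler backdoor: some paywalled sites key entirely off the User-Agent header, so a request that merely claims to be Googlebot gets served the full article. The URL below is a placeholder:

```python
# Illustration of the crawler-exception backdoor mentioned above: some
# paywalled sites serve full articles to anything claiming to be Googlebot,
# keyed purely off the User-Agent header. The URL is a placeholder.
import urllib.request

req = urllib.request.Request(
    "https://example.com/paywalled-article",
    headers={
        "User-Agent": "Mozilla/5.0 (compatible; Googlebot/2.1; "
                      "+http://www.google.com/bot.html)"
    },
)
with urllib.request.urlopen(req) as resp:
    html = resp.read().decode("utf-8", errors="replace")
print(html[:500])
```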
Even webcrawlers that do implement this would be relying on "integrity validation" servers run by a third party (I bet that's going to be Google), so think about how Google could interfere with third-party webcrawlers by merely throttling the integrity validation responses it sends them.
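Some rough, hypothetical numbers to show how blunt that lever is, assuming one attester round-trip per fetched page and a fixed pool of concurrent connections:

```python
# Back-of-envelope illustration of the throttling lever described above.
# Assumes (hypothetically) one attester round-trip per fetched page and
# 100 concurrent connections; all numbers are made up for illustration.
pages_per_day = 10_000_000  # what a mid-size crawler might need
fast_attest_s = 0.05        # 50 ms responses for a favoured crawler
slow_attest_s = 2.0         # deliberately throttled responses

for label, latency in [("favoured", fast_attest_s), ("throttled", slow_attest_s)]:
    # Attestation latency bounds throughput across the connection pool.
    ceiling = int(100 / latency * 86_400)
    status = "OK" if ceiling >= pages_per_day else "starved"
    print(f"{label}: ~{ceiling:,} pages/day ceiling ({status})")
```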
Oh, and open source webcrawler implementations can forget all about being validated.
(By the way, this would also impact any AI data-gathering webcrawlers that don't use APIs to get the data but rather go in via the web interface.)
This is quite possibly a far bigger play by Google than just for browser and Ad dominance.
Only if they proceed AND websites enforce it. The last reply I read from the Googler who was part of the draft spec said they were building in a guardrail (a proposed "holdback" that deliberately withholds attestation for a percentage of perfectly legitimate requests) so that sites can't outright block non-compliant clients without also blocking a not-insignificant portion of their desired userbase.
You know those movies where aliens attack Earth and we always win? I think those outcomes are mostly true, because I've said it before and I'll say it again: there's nothing humans can't ruin. Whether it's meeting your family at the arrival gate or alien societies, we'll destroy it. The internet is just the next thing.
It will affect some sites, not others. You'll no longer be able to bypass paywalls to read news, for example, because those sites will most likely adopt the DRM. Some streaming services may do the same, maybe even some social networks. But places like Lemmy will still be generally unaffected.