So I think there are a couple of "phenomena" swirling around right now that are stimulating interest in this kind of DRM.
The first, of course, is AI. If people start using AI as an intermediary, it becomes difficult for web sites to push advertising or to even understand what views they are getting. Putting a DRM requirement on connections to your own web site would help you filter "real users" from AI and search engine bots, and potentially open an avenue to charging AIs & search engines for sucking out your content into their own databases.
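Roughly, the server side of that filtering could look something like the sketch below. To be clear, this is just an illustration of the idea, not the actual proposal: the X-Attestation-Token header name, the verify_attestation() helper, and the "valid:" token format are all made up here, and real verification would involve checking a signature from whatever attester the browser vendor runs.

```python
# Hypothetical sketch only: the header name, token format, and
# verify_attestation() helper are invented for illustration and are
# NOT part of any real attestation spec.

from dataclasses import dataclass


@dataclass
class Request:
    headers: dict
    path: str


def verify_attestation(token: str) -> bool:
    """Stand-in for validating a signed attestation token.
    A real check would verify the attester's signature, expiry,
    and which environments it vouches for."""
    return token.startswith("valid:")  # placeholder logic


def handle(request: Request) -> tuple[int, str]:
    token = request.headers.get("X-Attestation-Token")
    if token is None or not verify_attestation(token):
        # No proof of an attested "real" browser: treat it as a bot
        # and serve a licensing/paywall notice instead of content.
        return 402, "Automated access requires a content license."
    return 200, f"Full article at {request.path}"


# An AI scraper with no token gets turned away; an attested browser gets the page.
print(handle(Request(headers={}, path="/article/123")))
print(handle(Request(headers={"X-Attestation-Token": "valid:abc"}, path="/article/123")))
```

The interesting (or worrying) part is that 402 branch: once a site can tell attested browsers apart from everything else, the "everything else" bucket becomes something it can block, or bill.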
Is this good? Bad? I mean, at some point, we have to figure out how to track the flow of information into AI so we can figure out how to charge for it, or every web site that depends on monetizing content will dry up. But yes, it means adding some draconian tracking & verification.
The second is the fediverse. Google makes money from advertising, and people are shifting to advertising-free platforms. The more time people spend in Mastodon, Lemmy, Calckey, Pixelfed, Peertube etc. the less time they are consuming advertising in Twitter (or whatever it is this week), Reddit, Facebook, Instagram, and Youtube.
A potential side effect of this DRM initiative might be to try to segregate the Internet into "safe" (that is, advertising-supported proper web sites that have gotten all registered with Google DRM and require it for full connectivity) and "unsafe" (those crazy nutballs running Mastodon instances), where Chrome is gonna throw up big red banners warning you that you're in a dark corner of the Internet whose safety cannot be assured!
I wonder if Google is looking out there at a BUNCH of the big players and asking, who is gonna be around in 20 years, and what technology can be put in place to help them lock down their investment?
AI will certainly be a challenge for web sites. I think we must accept that the end of the 2010s internet is upon us. Ad-supported web sites, the ones that offer the user as the product, will start to slowly disappear. That is not really a bad thing, though. The drawback for us users is that services will start to cost money.
I read a comment by Bill Gates recently where he suggested that most people will interact with AI through personal assistants. This actually feels like a good point from him. A PA that actually works will add a lot of value for a lot of people. Which goes to your point about how normal users will stop interacting with the web through an unfiltered browser and start using AI to access the web. Companies who sell actual goods and services online should be safe, as their income is not dependent on advertising.
I suspect normal users will end up paying for the search engines used by the AI, probably through some form of tiered approach, apart from a few oddball users like us here in the fediverse, who will find other ways to make things work.