But in a separate Fortune editorial from earlier this month, Stanford computer science professor and AI expert Fei-Fei Li argued that the "well-meaning" legislation will "have significant unintended consequences, not just for California but for the entire country."
The bill's imposition of liability for the original developer of any modified model will "force developers to pull back and act defensively," Li argued. This will limit the open-source sharing of AI weights and models, which will have a significant impact on academic research, she wrote.
Lawmakers should be doing the exact opposite and making it incredibly difficult *not* to open source models. Major platforms open sourcing much of their systems is basically the only good part of the AI space.
Same energy as PirateSoftware's "If AAA companies can't kill games due to always online DRM then small indie devs have to support their games forever, thus bankrupting them" argument.
IRL, arms manufacturers claim they're not culpable when their products are used to blow up civilians. They point at the people making decisions to drop the bombs as the ones responsible, not them.
This legislation tries to get ahead of that argument by putting responsibility for downstream harm on the manufacturers instead of their corporate or government customers. Even if the manufacturer moves their munitions plants elsewhere, they're still responsible for the impact if it harms California residents. So the alternative isn't to move your company out of state. It's to stop offering your products in one of the largest economies in the world.
The intent is to make manufacturers stop and put up more guardrails instead of blasting ahead, damn the consequences, then going, oops 🤷🏻‍♂️
There will be intense lobbying of the Governor to get him to veto it. If it does get signed, it'll be interesting to see whether it has the intended effect.
As we've previously explored in depth, SB-1047 asks AI model creators to implement a "kill switch" that can be activated if that model starts introducing "novel threats to public safety and security."
A model may be only one component of a larger system. There may literally be no way to get unprocessed input through to the model. How can the model creator even do anything about that?
They're safety washing. If AI has that much potential to be dangerous, it never should have been released in the first place. There's so much in-industry arguing that it's concerning.