TechNews @radiation.party, posted by irradiated @radiation.party (BOT)

[VERGE] The world’s biggest AI models aren’t very transparent, Stanford study says

www.theverge.com

Stanford’s new Foundation Model Transparency Index found developers don’t disclose societal impact at all.


[ sourced from The Verge ]

1 comment
  • This is the best summary I could come up with:


    No prominent developer of AI foundation models, a list including companies like OpenAI and Meta, releases sufficient information about their potential impact on society, according to a new report from Stanford HAI (Human-Centered Artificial Intelligence).

    Other models evaluated include Stability AI’s Stable Diffusion, Anthropic’s Claude, Google’s PaLM 2, Cohere’s Command, AI21 Labs’ Jurassic-2, Inflection’s Inflection-1, and Amazon’s Titan.

    OpenAI declines to release much of its research and does not disclose its data sources, but GPT-4 still ranked high because a great deal of information about its partnerships is publicly available.

    However, none of the models’ creators disclosed any information about societal impact, Stanford researchers found — including where to direct privacy, copyright, or bias complaints.

    Some proposed regulations, like the EU’s AI Act, could soon compel developers of large foundation models to provide transparency reports.

    Generative AI has a large and vocal open-source community, but some of the biggest companies in the space do not publicly share their research or code.


    The original article contains 501 words, the summary contains 163 words. Saved 67%. I'm a bot and I'm open source!