The Open Source Initiative has published its definition of “open source AI,” and it’s terrible. It allows for secret training data and mechanisms. It allows for development to be done in secret. Since a neural network’s training data is effectively its source code (it’s how the model gets programmed), the definition makes no sense.
And it’s confusing; most “open source” AI models, like Llama, are open source in name only. But the OSI seems to have been co-opted by industry players who want both corporate secrecy and the “open source” label. (Here’s one ...
The post AI Industry is Trying to Subvert the Definition of “Open Source AI” appeared first on Security Boulevard.
Bruce Schneier
Source: Security Boulevard
Source Link: https://securityboulevard.com/2024/11/ai-industry-is-trying-to-subvert-the-definition-of-open-source-ai/