As of today, OSI has let Meta decide what went into OSAID and what did not, granting the Llama team (and only the Llama team, which included 2 Meta employees) the power to override the votes of the other teams.
Because I know OSI won’t.
Actually, this is one of the unaddressed issues of OSAID RC1: if an AI system is “Open Source” if and only if OSI certifies it as Open Source, then such a formal requirement should be stated explicitly in the Open Source AI Definition itself, e.g. in a new final section like this:
OSI Certification
OSI will be responsible for certifying the compliance of each candidate AI system with the definition above.
- For example, when a new version of an AI system is released with different weights, a skilled person at OSI will recreate a substantially equivalent system using the same or similar data, to verify that the Data Information requirement still holds.
Quis custodiet ipsos custodes? (Who will watch the watchmen?)
Why should OSI release an ambiguous definition that leaves so much to the arbitrary judgment of OSI itself (or of a judge trying to enforce the AI Act)?