Very nice and constructive attempt!
At a quick check, your proposal fixes 7 of the 11 issues that still plague RC1.
The remaining issues here are:
- Implicit or unspecified formal requirements: if ambiguities in the OSAID are to be resolved for each candidate AI system through a formal certificate issued by the OSI, that formal requirement should be stated explicitly in the OSAID. (reported here and here )
- OSI as a single point of failure: since each new version of every candidate Open Source AI system worldwide would have to undergo the certification process again, this would turn the OSI into a vulnerable bottleneck in AI development and the target of unprecedented lobbying from industry. (reported here and here )
- Unknown “OSI-approved terms”: the definition requires the code to be distributed under “OSI-approved licenses”, but requires Data Information and Weights to be distributed under “OSI-approved terms”. Nobody knows what these terms are, and this poses “critical legal concerns for developers and businesses”. (reported here )
- Underspecified “substantial equivalence”: the definition requires that a skilled person be able to build a “substantially equivalent” system out of the components of an Open Source AI system, but it doesn’t explain (not even in the FAQs) what such equivalence means.
In Computer Science, two programs are equivalent if and only if, for any given input, they produce the same output in a comparable amount of time, but the OSI has not specified what such equivalence should mean for AI systems. (reported here, here, here)
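The classical notion can be sketched as a simple behavioral check (a minimal illustration of the textbook definition, not anything the OSAID specifies; the function names and the sampled input set are assumptions for the example):

```python
import random

def behaviorally_equivalent(f, g, inputs):
    """Classical equivalence: for every tested input, both
    programs produce the same output.
    (The 'comparable amount of time' condition is omitted
    for brevity.)"""
    return all(f(x) == g(x) for x in inputs)

# Two hypothetical implementations of the same specification.
def double_a(x):
    return x * 2

def double_b(x):
    return x + x

sample = [random.randint(-1000, 1000) for _ in range(100)]
print(behaviorally_equivalent(double_a, double_b, sample))  # True
```

Note that a model retrained from the same components will typically fail even this naive check, since outputs vary with random seeds and hardware, which is exactly why “substantially equivalent” needs its own specification.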
I suggest replacing “OSI-approved terms” and “OSI-approved licenses” with “licenses that comply with the Open Source Definition”.
This would relieve the OSI of the burden of certifying each version of every AI system.
As for “build a substantially equivalent system”, I suggest replacing it with “recreate a copy of the system”.
Since my previously censored proposal, I have realized that the adjective “exact” is not really required: let’s trust the courts’ wisdom.