In that regard, and whatever decision the OSI board takes, we should already be asking developers to check their projects against the OSAiD. That would give us a more detailed view of the size of the “niche” the definition creates, and would let us ask the Open Source community (*) whether it would accept the AI systems validated as compliant with the four principles of freedom.
Such validation was already performed at the beginning of the process [1] and by others [2] [3] [4]; today it can also be asserted, up to a point, with the Model Openness Tool (MoT) [5], complemented by the Foundation Model Transparency Index [6]. However, I do not know whether any analysis of those systems has been made by the Open Source community (*) at large.
References
[1] ‘Towards a definition of “Open Artificial Intelligence”: First meeting recap’, Open Source Initiative.
[2] Z. Liu et al., ‘LLM360: Towards Fully Transparent Open-Source LLMs’, Dec. 11, 2023, arXiv: arXiv:2312.06550. doi: 10.48550/arXiv.2312.06550.
[3] I. Solaiman, ‘The Gradient of Generative AI Release: Methods and Considerations’, Feb. 05, 2023, arXiv: arXiv:2302.04844. Accessed: Oct. 17, 2024. [Online]. Available: https://arxiv.org/abs/2302.04844
[4] M. White, I. Haddad, C. Osborne, X.-Y. L. Yanglet, A. Abdelmonsef, and S. Varghese, ‘The Model Openness Framework: Promoting Completeness and Openness for Reproducibility, Transparency, and Usability in Artificial Intelligence v5’, Oct. 02, 2024, arXiv: arXiv:2403.13784. doi: 10.48550/arXiv.2403.13784.
[5] https://isitopen.ai/
[6] Foundation Model Transparency Index
(*) I know the term “Open Source community” is too vague and open to wide interpretation, but I would not know whom to name, and I am certain the OSI could bring together experts in the FLOSS field from its many communities of thought and practice to give their opinion.