Draft v.0.0.8 of the Open Source AI Definition is available for comments

But almost none of the things one might wish for in a non-AI program is part of the standard OSD.
I wish programs had good documentation, written with proper grammar as well.
I wish the commit history were kept well.
One may also wish for access to all the intermediate assets that led to the creation of a program (diagrams, etc.), and even insight into the author's thinking process.
But the only thing that is required, the whole point of open source, is that the author doesn't restrict others' software freedom through legal or technical means, and that the program is shared in a form the author always has available, one about as easy to share as any other.
Releasing an open source program is generally no harder and no more cumbersome than releasing it as proprietary, and requires little to no additional work.

Thank you, and apologies for my oversight. I should have compared this draft more carefully to previous versions.
This is just something I’m slightly confused about: is this supposed to refer to all kinds of ML systems, or only “blackboxy” ones?
I know neural networks are (deservedly) currently the most prominent approach in ML, but I wonder whether these requirements are intended to apply to “clearboxes” too (such as decision trees), which may very well regain popularity (given the desire for more decipherable systems in certain areas).

Me neither, which is why I’d rather define any other AI system as “open source software” than “open source AI”.

Not really, in this instance, although that could be one example.
Under the current draft, releasing, say, an “open source AI” LLM would require much more work than releasing a proprietary LLM: writing documentation, sharing rather large files, preserving information, restricting how you train it, and so on.
For example, if you train a system interactively and you happen to bodge a bit (tweaking the process with some additional lines of code), you’d arguably have to keep track of that too, since that code contributed to the training process.
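To make the worry concrete, here is a minimal, entirely hypothetical sketch (the draft prescribes no such code): a toy pure-Python training loop in which a one-off tweak typed mid-run changes the trajectory of training, and therefore the final weights.

```python
# Hypothetical illustration: an interactive training session with a "bodge".
# Toy gradient descent fitting w in the model y = w * x to data where y = 2x.

data = [(x, 2.0 * x) for x in range(10)]
w, lr = 0.0, 0.01

for epoch in range(100):
    for x, y in data:
        grad = 2 * (w * x - y) * x  # d/dw of (w*x - y)^2
        w -= lr * grad
    if epoch == 50:
        # Ad-hoc tweak improvised mid-run: halve the learning rate.
        # This throwaway line influenced the training process, so a strict
        # reading of the draft would arguably require preserving it too.
        lr *= 0.5

print(round(w, 3))
```

The substance of the objection is that such improvised lines are normally never recorded anywhere, yet they are genuinely part of how the model came to be.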

It’s also vague to the point that, personally, I’d never feel comfortable describing a deep learning model as “open source AI” under this definition.
I would, of course, not refrain from describing training and testing/inference programs (which, regardless of the model, are just code) as “open source software”, with reference to the OSD.