As in the comment below, the fear of being "deemed ineffective" is real.
And, as I believe was reinforced again in the last town hall and has probably been written elsewhere, the OSI board is also concerned that an overly strict definition might lend itself to an "empty set" of OSAID-compliant AI systems.
I would argue that:
firstly, we should not abdicate our principles, and,
secondly, we should invite all those stakeholders to see how they could align their systems with our definitions and carve out a compromise which does not taint our ideas.
In truth, this seems more like an excuse than anything else, given how community feedback showing otherwise has been ignored.
To paraphrase Tara Tarakiyee, the OSI has now chosen to undermine the credibility of open source (and its own authority):
there are good things about the definition that I like, if only it weren't called "open source AI". If it were called anything else, it might still be useful, but the fact that it associates itself with open source is the issue.
It erodes the fundamental values of what makes open source what it is to users: the freedom to study, modify, run and distribute software as they see fit. AI might go silently into the night, but this harm to the definition of open source will stay forever.
I look forward to reading @juliaferraioli's follow-ups on what went on backstage during the design process.
From the perspective of representing Japan, if OSAID simply results in an "empty set," it would be better for the definition itself not to exist. In Japan there is no law that specifically mentions Open Source AI the way the EU AI Act does, and if the goal is to prevent open-washing, the argument that open source AI does not exist could be acceptable. That said, this would be a disappointing outcome for organizations that are gathering datasets under Open Data licenses.
I'm convinced that will not happen. As has already been proven with Open Source licenses, once the definition is set, those projects that are genuinely interested in being open will adjust themselves to match the requirements.
And even if some projects fall short of the definition (for instance, because their model was trained on unsharable data), the remaining artifacts could be enough to create a fork that complies fully with the OSAID.
We already see this with software, where projects are forked to work around closed binary blobs that were holding the project hostage.