The Open Source(ish) AI Definition (OSAID)

If it’s just about providing plausible information about the data, wouldn’t that mean parties “can falsely claim compliance” rather than actually being “compliant”? Yes, there are indeed several ambiguous areas in the current OSAID that allow for such misrepresentation, and I believe many people are aware of this.

That being said, while I think some of the terms in the current OSAID could be refined, I also believe the definition should retain a certain degree of ambiguity. The “Open Source Definition” we advocate for also leaves room for interpretation when viewed from a legal perspective. What fills these interpretive gaps is the license review process, where long-standing community culture and legal consistency are weighed to reach reasonable interpretations. I believe it is through the accumulation of these reviews that OSI has built its trust.

Some view the release of OSAID 1.0 as a potential threat to OSI’s credibility, but I believe the approval process is more important than the definition itself. Even with current license reviews for source code, we often see people confidently asking OSI to approve their “groundbreaking license as open source.” However, such licenses are rarely approved. Ultimately, the key issue for OSAID will be how to build a strict and sustainable approval process.

That said, it’s still unclear what exactly OSI will review based on OSAID, and how, which concerns me. I understand the focus will be on reviewing licenses and legal conditions rather than the AI systems themselves, but I’m still unsure whether the current review process will remain as is or whether a new process will be created. As a volunteer collaborator, I don’t have clarity on this yet.