Why and how to certify Open Source AI

Thanks for splitting the thread, it is indeed an important separate discussion.

I think the need for “certifying” an AI system as OSAID compliant or not will emerge primarily from the following situation. The definition is not, and will never be, completely unambiguous. In OSAID 0.0.8 we have expressions like “sufficiently” and “substantially equivalent”, mentioned in the parent post. Even with the proposed changes, we have “high quality equivalent synthetic dataset”. And no matter how hard we try, there will always be margin for different interpretations.

As soon as two parties disagree on the OSAID compliance of a system, people will want a judge of sorts. For the OSD, OSI has been such a judge, via the license-review process. (Which was quantitatively easier to manage, because there were far fewer licenses than software products released under those licenses. With OSAID we’re potentially looking at one judgment call per system…)

OSI will certainly be the first actor the community will turn to for such judgment calls.