Concerns and feedback on anchoring on the Model Openness Framework

This is a very good question… In my mind, the document we’re co-designing (the drafts of the Open Source AI Definition) has three main parts: the Preamble, the Definition and the Checklist. The first two parts should be well thought out and represent timeless principles, as much as possible. Together with the FAQ, they should continue driving the interpretation of the openness of AI systems in the future, when technology and the legal landscape may change. The principle that AI developers need the model parameters, code and data information to recreate a substantially equivalent system shouldn’t change over time.

The Checklist may change more frequently: I see it as a working document that reviewers of AI systems use to evaluate whether a system is Open Source AI or not. Right now I think it serves this purpose. Given how quickly things change, it may become obsolete, or other types of checklists may be required.

I’ll also propose some small improvements to the current checklist in v0.0.9, based on comments on the draft.

I’m expecting the Linux Foundation to show how they’re putting this into production… Given that this coming week there is AI_Dev, maybe we’ll see something demoed there.

I join @amcasari’s call to see examples used in practice.