Is the definition of "AI system" by the OECD too broad?

That’s one of the reasons why we added the concept of “AI system” to the debate. We were going around in circles discussing exactly this use case: the model weights are available under MIT/BSD-like terms, so it’s open source, right? Except that the weights alone are not very useful: you need more, as the analyses of Llama2, Pythia, OpenCV and BLOOM reveal. Anchoring the working groups around the definition of AI system helped answer the question “what exactly do you need to run, study, modify, share [Llama2 | Pythia | OpenCV | BLOOM]?” Now we have a good, shared idea.
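To make the point concrete, here’s a toy sketch (not tied to any of those models, just a made-up two-layer example) of why a bag of weights isn’t usable on its own: you can inspect the tensors, but you can’t run, study or modify the model without the architecture code that says how they fit together.

```python
import torch
import torch.nn as nn

# Hypothetical "release": only the weights are published.
original = nn.Linear(4, 2)
torch.save(original.state_dict(), "weights.bin")

# What a downstream user receives: a bag of named tensors.
state_dict = torch.load("weights.bin", map_location="cpu")
print(list(state_dict.keys()))  # ['weight', 'bias'] -- shapes, but no semantics

# Running the model is only possible because we also have the architecture
# definition (and, for a real system, the tokenizer, config, training code,
# data information, supporting libraries...).
model = nn.Linear(4, 2)
model.load_state_dict(state_dict)
print(model(torch.randn(1, 4)))
```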

One artifact I’m producing for draft 0.0.6 is a diagram like the one below to summarize the findings of the working groups. Draft 0.0.6 will start listing which components are necessary to qualify as an Open Source AI: it will be one that shares the training code and supporting libraries under an OSD-compatible license, plus the model parameters, with the architecture still under a question mark. Resolving that question mark will be the next exercise.
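As a rough sketch of that checklist (the component names and statuses here are my paraphrase, not the draft’s wording; the diagram will be the real summary):

```python
# Paraphrased view of where draft 0.0.6 stands on required components.
open_source_ai_components = {
    "training code":        "required, OSD-compatible license",
    "supporting libraries": "required, OSD-compatible license",
    "model parameters":     "required",
    "architecture":         "? -- the open question for the next exercise",
}

for component, status in open_source_ai_components.items():
    print(f"{component}: {status}")
```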

Am I making any sense? :slight_smile: