There has been a lot of media coverage of the release of version 0.0.9, which is encouraging. One thing in the final part of the article linked above caught my attention, though. I quote it below:
She adds that OSI is planning some sort of enforcement mechanism, which will flag models that are described as open source but do not meet its definition. It also plans to release a list of AI models that do meet the new definition. Though none are confirmed, the handful of models that Bdeir told MIT Technology Review are expected to land on the list are relatively small names, including Pythia by Eleuther, OLMo by Ai2, and models by the open-source collective LLM360.
The passage says OSI is planning an enforcement mechanism to flag AI that is falsely described as open source, but I am not enthusiastic about this. OSI's statement on Llama was effective, yet I would not want to see such labels applied routinely; that would be a departure from OSI's traditional stance, so I hope this is a misunderstanding on the journalist's part.
Another point: I understand that Pythia, OLMo, and LLM360 comply with OSAID through version 0.0.8, but if models are going to be publicly announced as conforming, it would be wise to first run a process confirming that they meet the latest definition as of the time of the announcement.