I just came to think of Mark Zuckerberg’s blog post around the release of Llama 3.1, where he declares that “Open Source AI Is the Path Forward”. I feel that it is easy to agree with what he says there; the trouble arises around what he doesn’t say. I also feel that this says a lot about the role of Open Source in society, and that we should be able to derive much insight from it. Hence this thread.
The most obvious problem is that Llama 3.1 isn’t Open Source, and will not be under any likely Open Source AI Definition. How do we respond to that? By pushing Meta away, telling them to stop and go somewhere else? What would result from that? Or is there a path by which a future Llama could actually be OSAID-compliant? Is there a “Thank you, but this is where we need to be going” path forward? Would key decision makers elsewhere realize that Llama isn’t actually Open Source? What does that mean for the status of EU legislation? Could it bring Open Source generally into disrepute?
Further, it highlights that Open Source seems to some largely ungoverned, or at least governed by the same billionaire warlords who have brought society into such disarray. We know that this is generally not the case; there exist many well-governed ecosystems around Open Source projects. But AI brings many new problems.
For example, whether the model should have been published at all, and whether its security is actually sound, as Meta claims to have been very careful about it.
It also calls into question whether it is realistic to carry out the third-party assessments that they claim.
Suppose the license were fixed: would the model then be Open Source under the OSAID? And what would that say about the OSAID itself?