In the conversations around Open Source AI there has always been someone invoking a shortcut to resolve “the data conundrum”: just admit that openness is a spectrum. This is a flawed argument that the OSI unequivocally rejects and that the whole Open Source community should forcefully push back against.
Freedom is a binary concept: you’re either free or you’re not. You can’t tell prisoners that they’re free because they’re not chained to a wall, can move around in their cell, or can walk in the jail yard one hour per day. That’s not freedom. Those are degrees of deprivation. Prisoners are free only once they get out of prison. Only then can they enjoy freedom in any way, shape, or form they like.
For Open Source it’s the same: there are software projects released under licenses and terms of use that impose a range of freedom-depriving conditions, and those are NOT considered Open Source. Ethical licenses, licenses with commercial restrictions, time-shifted licenses, etc., offer degrees of deprivation; they’re not Open Source because they don’t grant the basic freedoms.
Now think of the variety of Open Source projects released with an OSI-Approved License®. There are projects like SQLite, published without a public roadmap, with few committers, and with a tiny community or none at all. Those are Open Source projects.
There are also projects like OpenStack, Eclipse IDE, or Kubernetes, organized around strict community rules, a hierarchical chain of command, and promises to their users: these are Open Source, too.
There are projects designed to further the progress of the United Nations Sustainable Development Goals: they’re also Open Source.
All these projects are “Open Source”; none of them is “more” Open Source than the others. Some of them may add extra promises (like OpenStack’s Four Opens) or requirements (“be designed to do good for humanity,” like the DPGs). As a user, you’re free to give more value to a project with an open community than to a single-vendor project, or to one designed not to harm people. That’s part of the freedoms you’re granted.
For AI systems it’s exactly the same concept: you either have the freedoms to use, study, modify, and share, or you don’t. There is a gate that an AI system, like software, must pass through to grant those freedoms; before that gate there is only the gradient of deprivation.
I’ve been using this image to illustrate the concept. Think of being Open Source as the prisoner passing through that gate: what they do with their project’s freedom afterwards is their choice.
I’ve been brainstorming with @webmink (the prisoner metaphor is his) and the rest of the team on how to explain this. I’m sharing our current approach here (after using it in recent presentations) to gather more comments before posting it more widely.
What do you think of this argument? How would you change the image to make it clearer?
[Posting this in the AI category because this topic pops up often in AI discussions, but it’s really a general topic, not only an AI-related one.]