Is LGPL really a precedent for an Open Washing AI definition?

The LGPL is an important precedent for two-tier branding, where the second tier is introduced for the limited or “lesser” case without compromising the four freedoms. Applied here, that means requiring that data (analogous to the proprietary code the LGPL accommodates) be accessible when building (training) the Open Source licensed redistributable software (the model), in order to protect the four freedoms.

Binary blobs are a more relevant precedent: the FOSS community made a deliberate and measured decision to be pragmatic about proprietary code, doing whatever it took in the early days to enable and encourage adoption, again without compromising the four freedoms for our own code. History shows this was a pragmatic decision without which Linux might have failed to gain traction, and Open Source versions of the offending drivers are now readily available.

The proposal is that we acknowledge that taking a purist approach to data (e.g., demanding open data licenses) would drastically limit the number of candidates for certification (thanks @quaid), violating the OSI board’s approval criterion that the definition “provides real-life examples” (slide 9). That’s what @stefano was referring to in the town hall, and I can relate: we all want more Open Source software that protects our freedom:

we have in the free software and open source movement a long history of making exceptions and finding ways to solve the problems in order to have more freedom and more open source software.

That said, the current draft is unacceptable and must be rejected by the board because it fails to protect some or all of the freedoms (or, to the extent it does protect them, those advocating for the riskier approach have not demonstrated its safety; the onus is on them under the precautionary principle).

It also fails at its one job: to be a useful and usable standard for certifying compliant candidates. Even if it were functional, it’s clearly not deployable or enforceable.