Is LGPL really a precedent for an Open Washing AI definition?

TL;DR

No.
The LGPL is just one license; it won’t impact the lives and freedoms of millions of Europeans by enabling thousands of AI systems to avoid the legal and scientific scrutiny that the AI Act imposes on them.

A surprising argument…

During the last Town Hall, @stefano quoted Stallman about the strategic value of the linking exception granted by LGPL (slide 18).

As far as I understood, the argument was: “open source has a long history of compromises, so allowing unshareable datasets in Open Source AIs is fine if it leads to more AI systems certified by OSI as Open Source”.

The LGPL was suggested as an example of such compromises.

But is it a reasonable precedent?

LGPL doesn’t compromise on the four freedoms

The Lesser/Library GPL is a copyleft license that grants a linking exception to developers, so that one can link a covered library into a program without releasing that program under a copyleft license.

However, none of the four freedoms is compromised by adopting the LGPL: developers who modify an LGPL-covered work still have to grant users access to the modified sources under the same license.
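For readers less familiar with how the linking exception works in practice, here is a minimal sketch. It assumes an ordinary GNU/Linux toolchain, where every C program is dynamically linked against glibc (the GNU C library, an LGPL-covered work) by default:

```c
/* main.c — a program, under whatever license its author chooses,
 * that dynamically links against glibc, an LGPL-covered library.
 * The linking exception is what lets this program keep its own
 * license; only modifications to glibc itself must be released
 * under the LGPL. Build with: cc main.c -o hello */
#include <stdio.h>

int main(void) {
    /* printf() is resolved at run time from glibc via dynamic linking. */
    printf("Linked against an LGPL library, licensed however I like.\n");
    return 0;
}
```

Crucially, users of `hello` keep all four freedoms with respect to glibc itself: they can study it, modify it, and relink the program against their modified copy. The LGPL compromises on scope, not on the freedoms.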

LGPL uses a different name than GPL

Since the very beginning, the license was not presented as “the GPL”, but as a surrogate for the GPL to be used strategically to maximize users’ freedoms.

To stay with the comparison, the current draft should be named the “Almost Open Source AI definition”, the “Somewhat Forkable AI”, the “Freely Fine-tunable AI definition”, or even the “OSI-Certifiable AI definition”, but not Open Source, since without access to training data users lose two of the four freedoms: the freedom to study the system and the freedom to modify it.

LGPL aims to maximize freedoms, not FSF income

The LGPL was designed to pragmatically maximize users’ freedoms, not to maximize the market for worthless certifications.

An Open Source AI definition that does not require training data, by contrast, would severely inhibit users’ freedoms while raising the number of compliant AI systems.

So we would have more systems bragging that they are “Open Source AI” and, at the same time, less freedom for users.

Maybe a win-win for a hypothetical OSI Corporation (which could build a lucrative business selling certificates of paper compliance) and for all the companies trying to escape the legal and technical scrutiny imposed by the AI Act.

But a net loss for everybody else.

LGPL is just one license

To be honest, I don’t know how many compromises OSI made with the Open Source Definition, but after reading the Fair License I can trust @stefano’s words about them.

However, the LGPL was just one license.
Just like the Fair License or the CAL.

The Open Source AI definition will impact the safety of all Europeans!

Thus I don’t think that the LGPL can be cited as a precedent to justify an Open Washing AI definition like the latest draft.

Am I missing something obvious?

The LGPL is an important precedent for two-tier branding, where the second tier is introduced for the limited or “lesser” case without compromising on the four freedoms. In this case, the parallel is our requirement that data (analogous to the proprietary code) be accessible when building (training) the Open Source licensed, redistributable software (the model), in order to protect the four freedoms.

Binary blobs are a more relevant precedent for the FOSS community making a deliberate and measured decision to be practical about proprietary code, doing whatever it took in the early days to enable and encourage adoption, again without compromising on the four freedoms for our own code. History shows this was a pragmatic decision without which Linux might have failed to gain traction, and Open Source versions of the offending drivers are now readily available.

The proposal is that we acknowledge that taking a purist approach to data (e.g., demanding open data licenses) will drastically limit the number of candidates for certification (thanks @quaid), violating the OSI board’s approval criterion that the definition “provides real-life examples” (slide 9). That’s what @stefano was referring to in the town hall, and I can relate, since we all want more Open Source software that protects our freedom:

we have in the free software and open source movement a long history of making exceptions and finding ways to solve the problems in order to have more freedom and more open source software.

That said, the current draft is unacceptable and must be rejected by the board because it fails to protect some or all of the freedoms (or to the extent it does, those advocating for the risky approach have not proven its safety, the onus being on them per the precautionary principle).

It also fails at its one job: to be a useful and usable standard for certifying compliant candidates. Even if it were functional, it’s clearly not deployable or enforceable.