In the future, AI in cyberspace will be used as both a protector and a weapon. How do we control it?
Hi @anatta8538, welcome to the conversation. You ask a loaded question. The short answer is that controlling the use of software, including AI systems, is out of scope for the Open Source Definition.
A longer answer would require a treaty… Of course there need to be laws, rules, norms and regulations that prevent abuses of AI and of software generally: they’re simply not something that the Open Source Initiative can take a stance on through its Definitions. That’s the realm of policy, an area where OSI is already active and will become increasingly so.
This talk explains why usage restrictions don’t belong in the Open Source Definition: State of the Source presentations
Making the original training dataset and code publicly available is the foundation of security in general, and of AI supply-chain security in particular.
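To make the supply-chain point concrete: when a dataset and code are published, maintainers can also publish checksums, so anyone downstream can verify that the bytes they trained or audited against are the ones the maintainers released. Here is a minimal sketch in Python; the manifest, file name, and digest are made up for illustration:

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Compute the SHA-256 digest of a file, reading in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical manifest published alongside the dataset:
# file name -> expected SHA-256 digest (placeholder value below).
PUBLISHED_MANIFEST = {
    "train.jsonl": "0000000000000000000000000000000000000000000000000000000000000000",
}

for name, expected in PUBLISHED_MANIFEST.items():
    actual = sha256_of(Path(name))
    status = "OK" if actual == expected else "MISMATCH"
    print(f"{name}: {status}")
```

This kind of verification is only possible when the original artifacts are public; a model released without its training data or code can’t be checked this way at all.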
Moved to a new thread because security is critical and this one is mixing issues.