Hi Simon, wow, it's great to see you here, as you have context with open-source hardware (pt here). If I recall correctly, you had issues with the open-source hardware logo that was based on my design, which was used for OSI, and we resolved it together (thank you for that): https://www.oshwa.org/wp-content/uploads/2012/08/233124698_1.pdf
Here is what we are proposing as an AI addition to the OSI definition, as well as the open-hardware definition:
Inspection of Prompts and Data Access Transparency:
In addition to the existing requirements, the preferred form for making modifications to a machine-learning system shall include access to the prompts and commands used during the training phase and/or during code and hardware creation. This will enable users to understand the context in which the model was developed, including:
- Prompt Transparency: Access to a detailed log of all prompts, commands, and instructions used during the training phase and/or code and hardware creation, ensuring that users can see the exact inputs that shaped the model’s behavior.
- Justification and Documentation: Each prompt should be accompanied by documentation explaining its purpose, how it was constructed, and its expected impact on the model’s development.
- Replicability and Testing: The framework should provide means for users to replicate prompt scenarios to test modifications and understand their effects on the model’s outputs.
- Prompt and Model Linking: Direct links to the specific model versions used along with the corresponding prompts, enabling a traceable lineage from input to model behavior.
- Timestamp and Metadata Documentation: Each entry of the prompt log should be timestamped and include metadata such as the version of the model used at that time.
- Public Access to Logs: Where possible, logs of the prompts should be made publicly available, with links provided in the documentation to ensure that users can review the historical context and development trajectory of the model.
This addition aims to enhance transparency and foster an environment where users can more effectively audit, replicate, and modify AI behavior.
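To make the log requirements above concrete, here is a minimal sketch of what one prompt-log entry could look like. Everything here is illustrative: the field names, the `PromptLogEntry` class, the example values, and the JSON Lines file format are assumptions, not part of the proposal itself; the point is only that each entry carries the prompt, its justification, the model version it was run against, a link to that version, and a timestamp.

```python
import json
from dataclasses import dataclass, asdict, field
from datetime import datetime, timezone

@dataclass
class PromptLogEntry:
    """One entry in a public prompt log (hypothetical schema)."""
    prompt: str            # exact input used during training / code / hardware creation
    justification: str     # why the prompt was used and how it was constructed
    expected_impact: str   # anticipated effect on the model's development
    model_version: str     # version of the model the prompt was run against
    model_link: str        # link to that specific model version
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def append_to_log(entry: PromptLogEntry, path: str) -> None:
    """Append one entry as a JSON line, keeping the log auditable and diffable."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(entry)) + "\n")

# Example entry (all values are made up for illustration)
entry = PromptLogEntry(
    prompt="Summarize the safety guidelines in plain language.",
    justification="Used to steer the model toward accessible explanations.",
    expected_impact="More readable safety-related outputs.",
    model_version="v1.2.0",
    model_link="https://example.org/models/v1.2.0",
)
append_to_log(entry, "prompt_log.jsonl")
```

An append-only JSON Lines file is just one possible shape; the draft text deliberately leaves the storage format open, and any format that preserves the prompt, justification, version linkage, and timestamp per entry would satisfy the same transparency goals.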