I think it is a great starting point. To assess transparency, I believe it is important to also include the detailed technical aspects of model training, such as hardware specifications, training time, and carbon footprint (if available). Sharing this information would benefit the community by promoting reproducibility, improving accessibility, and enabling more efficient collaboration. Transparency around resource requirements also helps practitioners estimate the computational cost of replicating or adapting a model, encouraging more responsible and optimized use of hardware.
In this sense, knowing the hardware setup would allow researchers to recreate the training environment and thus better understand the computational power required, and it would also help developers optimize models for more efficient training on the resources they have available.
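As a concrete illustration, this kind of disclosure could be as simple as a small machine-readable record published alongside the model. The sketch below is purely hypothetical: the field names and values are not a standard schema, just one possible shape for such a report.

```python
import json

# Hypothetical training-details record for a model card.
# All field names and values are illustrative, not a standard schema.
training_details = {
    "hardware": {
        "gpu_model": "NVIDIA A100 80GB",  # example value, not a real run
        "gpu_count": 8,
        "interconnect": "NVLink",
    },
    "training_time_hours": 120,  # wall-clock time, illustrative
    "energy_kwh": 950,           # estimated energy use, illustrative
    "co2_kg": 400,               # estimated emissions, illustrative
}

# Serializing to JSON makes the record easy to publish and to parse,
# so others can estimate replication cost programmatically.
report = json.dumps(training_details, indent=2)
print(report)
```

A structured record like this, even with rough estimates, lets practitioners compare the resource requirements of models at a glance rather than digging through appendices.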