Can a derivative of non-open-source AI be considered Open Source AI?

I’m not sure about this.
Multiple strategies for fine-tuning exist, and anyone is free to make up their own.

You do not need to add any new layers; you can update the weights in any number of existing layers.
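A toy sketch (plain Python, no real framework; the layer values and learning rate are made up) of what "update weights in some layers" means: frozen layers simply skip the gradient step.

```python
# Toy "model": a list of per-layer weights. Fine-tuning may update
# any subset of layers; frozen layers are returned unchanged.

def sgd_step(weights, grads, trainable, lr=0.1):
    """Apply one SGD update, but only to layers marked trainable."""
    return [
        [w - lr * g for w, g in zip(layer_w, layer_g)] if train else layer_w
        for layer_w, layer_g, train in zip(weights, grads, trainable)
    ]

# A pre-trained "model" with three layers of weights.
pretrained = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
grads      = [[0.5, 0.5], [0.5, 0.5], [0.5, 0.5]]

# Freeze the first two layers; fine-tune only the last one.
updated = sgd_step(pretrained, grads, trainable=[False, False, True])
print(updated)  # first two layers unchanged, last layer shifted
```

In real frameworks the same idea is expressed by disabling gradients on the frozen parameters rather than skipping them by hand.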

You don’t have to update the whole network, but if you choose to, that doesn’t defeat the purpose of starting from a pre-trained model.
Training a new model from scratch can require a lot of data and time (depending on the size of the model, the training algorithm and other factors). Fine-tuning might be cheaper, even if you train all layers, because you are leveraging the information the model already contains.

In essence, you can use fine-tuning to slightly bias the model so that it performs better on a certain class of samples, or to slightly change its general behavior.
If you train all layers, it’s similar to having seen that new data at the end of the overall training process, so more information about it is preserved.
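That last point can be shown with a deliberately tiny example (a single scalar "model" and made-up targets): fine-tuning all parameters is just continued gradient descent, so the result looks as if the new data had arrived at the end of training.

```python
# Toy model: one scalar w, trained by SGD on a squared-error loss.

def train(w, targets, steps, lr=0.1):
    for _ in range(steps):
        for t in targets:
            w -= lr * 2 * (w - t)   # gradient of (w - t)^2
    return w

w = train(0.0, [1.0, 1.0, 1.0], steps=50)   # "pre-training": converges near 1.0
w = train(w, [2.0], steps=3)                # brief fine-tune on new data
print(round(w, 3))  # pulled toward 2.0, but the pre-training still shows
```

A few fine-tuning steps bias the model toward the new target while most of what pre-training learned is retained; more steps would pull it further.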