I will try to recreate from memory what people smarter than me said in the room. I hope those people will join in and comment here directly, to avoid any misinterpretation on my part.
From what I understand, if we look at the OECD definition of AI and chop it down a bit for brevity:
An AI system is a machine-based system that […] infers, from the input it receives, how to generate outputs […]
The word “infer” is very important here, because this inference is pretty much the only thing setting AI systems apart from, well, any random piece of software.
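As a toy illustration of that distinction (entirely my own, not something from the OECD text): a conventional program applies a rule its programmer wrote down, while an AI system in this sense derives its rule from the data it is given.

```python
def spam_filter_conventional(message: str) -> bool:
    # Conventional software: the rule is written directly by a human.
    return "free money" in message.lower()


def train_spam_filter(messages: list[str], labels: list[bool]):
    # "AI" in the OECD sense: the rule (here just crude word scores)
    # is inferred from example inputs rather than written by hand.
    scores: dict[str, int] = {}
    for message, is_spam in zip(messages, labels):
        for word in message.lower().split():
            scores[word] = scores.get(word, 0) + (1 if is_spam else -1)

    def classify(message: str) -> bool:
        return sum(scores.get(word, 0) for word in message.lower().split()) > 0

    return classify
```

The second filter is still trivially simple, but nobody ever wrote its rule down – it was inferred from the inputs, which is the bit the definition hinges on.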
Quoting the OECD.ai website again, I think this part is relevant and important here:
[Figure: Illustrative, simplified overview of an AI system]
Note: This figure presents only one possible relationship between the development and deployment phases. In many cases, the design and training of the system may continue in downstream uses. For example, deployers of AI systems may fine-tune or continuously train models during operation, which can significantly impact the system’s performance and behaviour.
and further down:
“Inferring how to” generate outputs
The concept of “inference” generally refers to the step in which a system generates an output from its inputs, typically after deployment. When performed during the build phase, inference, in this sense, is often used to evaluate a version of a model, particularly in the machine learning context. In the context of this explanatory memorandum, “infer how to generate outputs” should be understood as also referring to the build phase of the AI system, in which a model is derived from inputs/data.
So if I recall our discussion at Cyberpipe correctly, an example of inference during the build (or pre-deployment) phase would be a pair of AI systems learning from each other – e.g. one trying to create images of cats, and the other trying to identify images of cats (or to point out whether they are fake). To make one (or both) better at its task, you have them train against each other – which is where the pre-deployment inference kicks in.
And here it may be – and this is the part I’m least sure I remember correctly – that because each ML system would be inferring from the output of the other, the inference here would not have any code written for it (at least not directly), and would need to be covered some other way, e.g. by stretching the coverage to the partner ML system. (dare I say AI copyleft? egad!)
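To make that pre-deployment inference a bit more concrete, here is a minimal sketch of the cat-image example as I understood it – roughly the setup of a generative adversarial network (GAN). The PyTorch code, layer sizes and names below are purely my own illustration, not anything that was shown in the room; the point is only that during the training loop (the build phase), each model is already running inference on the other’s output.

```python
import torch
import torch.nn as nn

latent_dim, image_dim, batch = 16, 64, 32

# The "cat painter": turns random noise into (very fake) cat images.
generator = nn.Sequential(
    nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, image_dim), nn.Tanh()
)
# The "cat spotter": scores how real an image looks.
discriminator = nn.Sequential(
    nn.Linear(image_dim, 128), nn.ReLU(), nn.Linear(128, 1)
)

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

real_images = torch.rand(batch, image_dim)  # stand-in for a batch of real cat photos

for step in range(100):  # this loop is the build phase, long before deployment
    # Discriminator step: it runs inference on the generator's outputs.
    fake_images = generator(torch.randn(batch, latent_dim)).detach()
    d_loss = (loss_fn(discriminator(real_images), torch.ones(batch, 1))
              + loss_fn(discriminator(fake_images), torch.zeros(batch, 1)))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator step: it learns from the discriminator's inference on its output.
    g_loss = loss_fn(discriminator(generator(torch.randn(batch, latent_dim))),
                     torch.ones(batch, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```

Note that nothing in the loop spells out what a cat looks like – each model only improves by inferring from what the other one produces, which seems to be exactly the build-phase inference the OECD memorandum is talking about.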