What are Cyberpipe comments in the draft

I apologise for a slightly odd thread, but inside the Cyberpipe / Kiberpipa (meta)hackerspace, starting with the 0.0.6 draft, we’re meeting together and collaboratively discussing the definition, while one of us (currently me) makes sure the discussions get translated into comments.

So if you see a comment (of mine) in HackMD that’s marked as (Cyberpipe comment), it is a joint effort of the following people (in alphabetical order):

  • Leon Anžel (testman)
  • Jure Repinc (JLP)
  • Blaž Rojc (hackguy)
  • Mihael Simonič
  • Jurij Sitar
  • Adrian Šiška
  • Matija Šuklje (hook) – yours truly
  • Samo Zorc – typically leading the discussions

I will try to keep the list up to date as more people join the debate. Some were missing today due to school or other obligations.

I just wanted to make sure that it does not get lost who helped out. Of course, I am prepared to take any editorial (etc.) blame myself.


That’s fantastic to hear. Thank you very much @hook and everyone from Cyberpipe / Kiberpipa (meta)hackerspace!


@hook: Can you please clarify this comment?

Within OECD definition of AI systems, “inference” does not rely only on execution in the deployment phase, but also includes the learning phase.

What’s the implication? How would you rephrase the sentence?

I will try to recreate from memory what people smarter than me mentioned in the room. I hope said people will join and comment here directly, to avoid any misinterpretation on my side 🙂

From what I understand, if we look at the OECD definition of AI and chop it up to be shorter:

An AI system is a machine-based system that […] infers, from the input it receives, how to generate outputs […]

The word “infer” is very important here, because this inference is pretty much the only thing setting AI systems apart from, well, any random piece of software.
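To make that contrast concrete, here is a minimal sketch of my own (nothing in it comes from the OECD text; the task and numbers are made up): the same temperature-conversion rule, once written down explicitly by a programmer, and once inferred by the system from input/output examples.

```python
import numpy as np

# Classical software: the rule is written down explicitly by a programmer.
def f_to_c_explicit(f):
    return (f - 32) * 5 / 9

# An "AI system" in the OECD sense: the rule is inferred from examples.
f_samples = np.array([32.0, 50.0, 68.0, 86.0, 212.0])   # inputs
c_samples = (f_samples - 32) * 5 / 9                    # desired outputs

# Least-squares fit of c = w*f + b: the parameters w and b are derived
# (inferred) from the data instead of being hard-coded.
A = np.stack([f_samples, np.ones_like(f_samples)], axis=1)
(w, b), *_ = np.linalg.lstsq(A, c_samples, rcond=None)

def f_to_c_inferred(f):
    return w * f + b

print(f_to_c_explicit(100.0))   # 37.77..., from the hand-written rule
print(f_to_c_inferred(100.0))   # ~37.77..., learned from the five examples
```

Both functions produce (essentially) the same output, but only the second one got its behaviour from data rather than from a programmer spelling out the rule.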

Again, quoting the OECD.ai website, I think this is relevant and important here:

[Figure: Illustrative, simplified overview of an AI system]

Note: This figure presents only one possible relationship between the development and deployment phases. In many cases, the design and training of the system may continue in downstream uses. For example, deployers of AI systems may fine-tune or continuously train models during operation, which can significantly impact the system’s performance and behaviour.

and further down:

“Inferring how to” generate outputs

The concept of “inference” generally refers to the step in which a system generates an output from its inputs, typically after deployment. When performed during the build phase, inference, in this sense, is often used to evaluate a version of a model, particularly in the machine learning context. In the context of this explanatory memorandum, “infer how to generate outputs” should be understood as also referring to the build phase of the AI system, in which a model is derived from inputs/data.
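To illustrate what (as I read it) the memorandum means, here is a minimal sketch of my own (all names and numbers are made up): the same forward pass that “generates an output from inputs” runs repeatedly during the build phase, where it is used to derive and evaluate the model, and then again after deployment.

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up "build phase" data: inputs X and desired outputs y.
X = rng.normal(size=(100, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.1, size=100)

w = np.zeros(3)                    # model parameters being derived
for epoch in range(50):            # build phase: deriving a model from data
    pred = X @ w                   # inference: generating outputs from inputs...
    if epoch % 10 == 0:            # ...used here to evaluate a model version
        print("epoch", epoch, "error", np.mean((pred - y) ** 2))
    w -= 0.1 * (2 * X.T @ (pred - y) / len(y))   # gradient step

# Deployment phase: the very same inference step, now producing
# outputs for actual use.
x_new = rng.normal(size=3)
print("deployed output:", x_new @ w)
```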

So if I recall our discussion at Cyberpipe correctly, an example of inference during the build (or pre-deploy) phase would be when you have a pair of AI systems learning from each other – e.g. one trying to create images of cats, and the other identifying images of cats (or pointing out that they are fake). To make one (or both) better at their task, you have them learn from each other – which is where the pre-deployment inference kicks in.
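For illustration, here is a heavily scaled-down sketch of such a pair (my own toy reconstruction, with 1-D numbers standing in for cat images; the setup is essentially a generative adversarial network, and every name and number in it is made up):

```python
import numpy as np

rng = np.random.default_rng(42)
sigmoid = lambda u: 1 / (1 + np.exp(-u))

# "Real" data the generator should learn to imitate: numbers around 4.0.
def real_batch(n):
    return rng.normal(4.0, 0.5, n)

a, b = 1.0, 0.0    # generator:     g(z) = a*z + b
w, c = 0.1, 0.0    # discriminator: D(x) = sigmoid(w*x + c), P(x is real)
lr, n = 0.05, 64

for step in range(2000):
    z = rng.normal(size=n)
    x_real, x_fake = real_batch(n), a * z + b

    # Discriminator step: learn to tell real from fake,
    # i.e. learn from the generator's output.
    d_real, d_fake = sigmoid(w * x_real + c), sigmoid(w * x_fake + c)
    grad_w = np.mean(-(1 - d_real) * x_real + d_fake * x_fake)
    grad_c = np.mean(-(1 - d_real) + d_fake)
    w, c = w - lr * grad_w, c - lr * grad_c

    # Generator step: learn to fool the discriminator,
    # i.e. learn from the discriminator's output.
    d_fake = sigmoid(w * (a * z + b) + c)
    grad_a = np.mean(-(1 - d_fake) * w * z)
    grad_b = np.mean(-(1 - d_fake) * w)
    a, b = a - lr * grad_a, b - lr * grad_b

# The generator's samples should now cluster near the real data's mean (~4).
print("generated mean:", np.mean(a * rng.normal(size=1000) + b))
```

Neither system is deployed at any point in this loop, yet both are constantly inferring from the other’s outputs – which, if I understood the discussion right, is exactly the build-phase inference the memorandum is talking about.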

And here it may be – and this is the part I’m least sure I remember correctly – that because one ML system would be inferring from the output of the other (and vice versa), the inference here would not have any code written for it (at least not directly), but would need to be covered otherwise, e.g. by stretching to the partner ML system. (Dare I say AI copyleft? Egad!)