On how to manage AI assistance in creative processes.
After reflecting on the subject with the tools at hand and my current understanding of it, I propose to start the AI copyright-infringement detector by composing it from pieces similar to what was worked on in this session, rather than creating one from scratch, considering the capacities of these systems.
Writing is a creative process that can express a similar statement in many different ways. It is like having a function F(x) → Y such that multiple inputs share the same output; take the square function, for example: you cannot tell whether a number entered the function as its positive or negative value, since both give the same output. In a similar way, when a person asks an AI text engine to write, the generated output could plausibly have been produced by any person. It is comparable to asking someone else to write on one's behalf: if a person asks different people to write, the style will vary from person to person. Since it is hard for a human writer to produce content in a different style every time, I am confident that over the long term style would be a key concept for analyzing AI copyright fraud, as well as a tool for public recognition of AI-generated text. As AI systems grow in capacity, each model will differ from the others, exhibiting a style of its own. Extrapolating, the situation could go as far as the subject treated in the movie Ghost in the Shell.
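The square-function analogy above can be shown in a few lines, illustrating a many-to-one mapping where the output alone cannot reveal which input produced it:

```python
def f(x: float) -> float:
    """Square function: a many-to-one mapping."""
    return x * x

# Both the positive and the negative input yield the same output,
# so from the output 9 alone we cannot recover which input was used.
print(f(3))           # 9
print(f(-3))          # 9
print(f(3) == f(-3))  # True
```

In the same way, a text produced on request could, from the output alone, be attributed to many different authors.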
The ability to compare many writings matters here: if a machine learning system could be trained to identify style and classify writings by user, then each new writing would give the system more data to determine whether a given user is retaining their style or switching. Businesses concerned with this problem could also benefit from a deep understanding of the styles of natural-language-generating AI models. Indeed, AI models already have traits that distinguish them from one another, and users can fine-tune some parameters (such fine-tuning could be turned into a back door to trick style recognition).
If a style-interpreting AI were trained with information selected by the creators of the distinct natural-language-generating models, could it be biased enough to still recognize users' fine-tuned adjustments? It would be something like talking to the same intelligible entity with variations. This subject requires further analysis of its implications, but it is similar to talking to a friend: they can change, yet we find something deep in their behavior that lets us recognize them.
For businesses concerned with this issue, each piece of user-generated content would add to that user's style profile. This might let some first attempts to cheat the system slip through, but in the long term it may offer a stable solution to the challenge; and if AI models already carry style patterns of their own, even first attempts could be caught.
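The accumulate-and-compare loop could be sketched as follows. This is a toy illustration under strong assumptions: a word-frequency profile stands in for style, and the 0.2 similarity threshold is a hypothetical placeholder, not a calibrated value:

```python
from collections import Counter
from math import sqrt

def profile(text: str) -> Counter:
    """Word-frequency profile as a crude style fingerprint."""
    return Counter(text.lower().split())

def similarity(a: Counter, b: Counter) -> float:
    """Cosine similarity between two profiles (0 = disjoint, 1 = identical)."""
    dot = sum(a[w] * b[w] for w in a if w in b)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

class StyleTracker:
    """Accumulates each user's writings into a growing profile and flags
    submissions that diverge sharply from it."""

    def __init__(self, threshold: float = 0.2):
        self.profiles = {}
        self.threshold = threshold

    def submit(self, user: str, text: str) -> bool:
        """Record a new writing; return True if it matches the user's accumulated style."""
        new = profile(text)
        old = self.profiles.get(user)
        # Every new writing adds data to the accumulated profile.
        self.profiles[user] = (old + new) if old else new
        if old is None:
            return True  # first sample: nothing to compare against yet
        return similarity(old, new) >= self.threshold
```

As the text argues, a first attempt at cheating passes by default (there is no prior profile to compare against), but each subsequent writing is checked against an ever-larger sample of the user's style.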
If you see this route as a viable alternative, the next step would be to analyze in detail the terms of use and capabilities of current AI models in order to adjust this proposal.
Please leave feedback on this idea.