A counterargument to pose would be that a skilled artist with AI is now a faster-producing artist than without it - presumably (at least with current tech) this combo is the best of both worlds. The artist can still create art, retains creative freedom, and has the talent to guide AI prompts in the specific directions a project may call for, where a non-artist with an AI would struggle.
I think if capitalism weren’t involved in how AI evolves, we would be in a much better place. The fact that the first question about any tech is “how can we make money with it?” already sends things down a different path.
Seems like, since we can’t solve that fundamental issue, the next best bet is to learn to welcome AI into our lives in ways that enrich them. Use it to augment your work - or alternatively, maybe start learning and specializing in things that are (for the time being) out of reach for AI: human services that require another human, or handmade, high-quality items where the purchaser is specifically interested in the handmade aspect.
Can’t say I have a perfect solution, other than to stay curious and adaptable to change.
Yeah, managed to not see that in the title 🫠
Interesting - kind of weird how the uncanny valley exists in the visual realm, but I suppose that would be explained by how significant and instinctual a role vision has played in human evolution for detecting faces/off faces, etc.
Right, and I suppose if you still tried to charge for use of references to source data, it would become a weird slippery slope of weighting which source data the AI was trained on first. How would you, say, bill for references to a circuit board if the model was trained on things like dictionaries that include “circuit board” as well as, of course, more direct references to circuit boards in tech?
Guess it could be some weird percentage, but I don’t think I would welcome that reality
Such a detailed response, thank you for that. It walked the line well between keeping things fairly simple and still being detailed enough to understand.
Because of the complexity and “mystery box” nature of AI for me, it’s hard not to just consider it another form of intelligence in my mind. But people dismiss it as not intelligent because they have a far better understanding of how the AI has been trained and how it came to the results it did. Contrast that with humans, where you’re like “oh, I know he went to college in the medical field,” but you don’t have as intimate an understanding of how thoughts, ideas, and responses are formed, because that source information is obscured. And of course it’s far more complex, with other impacting factors like chemicals in the body, sleep, mood, experiences, and perceptions.
I guess this is a bit of a run-on, but it still makes me wonder if it’s just a case of creating enough obscured understanding for something to be accepted as consciousness. Not saying that ChatGPT is a genuine consciousness, but more that it’s the underpinnings or beginnings of something like that. But this is said as someone with absolutely no training in either the medical field or the artificial intelligence field, so yeah.
Thanks again for your response.
Likely an unpleasant or possibly infeasible thing to implement, but designing the AI to always be able to “show the receipts” for how it formulates any given response could potentially be helpful. Suppose that could result in a micro-royalties sort of industry cropping up around the sourced data being used, akin to movies or TV using music and paying royalties (rough sketch of the idea below).
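Just to make that concrete, here’s a purely hypothetical sketch of the “receipts plus micro-royalties” part - it assumes the model could actually report which sources it drew on (which current models generally can’t), and the function and source names are all made up for illustration:

```python
# Purely illustrative sketch: assumes a retrieval-style setup where each
# response tracks which source documents it drew from, so "receipts" and
# micro-royalty shares could be derived from those citations.
from collections import Counter

def royalty_shares(cited_sources, payout_pool):
    """Split a payout pool across cited sources in proportion to how often
    each one was referenced in a response (hypothetical scheme)."""
    counts = Counter(cited_sources)
    total = sum(counts.values())
    return {src: payout_pool * n / total for src, n in counts.items()}

# Example: a response whose "receipts" lean on two made-up sources.
receipts = ["ee_textbook_ch3", "dictionary_entry_circuit_board", "ee_textbook_ch3"]
print(royalty_shares(receipts, payout_pool=0.01))
# ee_textbook_ch3 gets ~2/3 of the pool, the dictionary entry ~1/3
```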