How many pages has a human author read and written before they can produce something worth publishing? I’m pretty sure that’s not even a million pages. Why does an AI require a gazillion pages to learn, but the quality is still unimpressive? I think there’s something fundamentally wrong with the way we teach these models.
The more important question is: Why can a human absorb a ton of material in their learning without anyone crying about them “stealing”? Why shouldn’t the same go for AI? What’s the difference? I really don’t understand the common mindset here. Is it because a trained AI is used for profit?
It’s because a human artist is usually inspired and uses that knowledge to create new art, while AI is just a mediocre mimic. A human artist doesn’t accidentally put six fingers on people on a regular basis; if they draw fewer fingers, it’s intentional.
If your argument is that it depends on the quality of the output, then I definitely shouldn’t be allowed to look at art or read books.
That’s where I disagree. I don’t subscribe to the view that LLMs are merely “stochastic parrots”.