• 0 Posts
  • 16 Comments
Joined 1 year ago
Cake day: July 1st, 2023

  • Copyright is an artificial restriction

    All laws are artificial restrictions, and copyright law is not exactly some brand new thing.

    AI either has to work within the existing framework of copyright law OR the laws have to be drastically overhauled. There’s no having it both ways.

    What you should be advocating for instead is something like a mandatory GPL-style license, where anybody who uses the model or contributed training data to it has the right to a copy of it that they can run themselves.

    I’m a programmer and I actually spend most of my week writing GPLv3 code.

    Any experienced programmer knows that GPL code is still subject to copyright. People (or their employers, in some cases) own the code they write, and so they have the right to license that code under the GPL or any other license that happens to be compatible with their code base. In other words, I have the right to license my code under the GPL, but I do not have the right to apply the GPL to someone else’s code. Look at the top of just about any source code file and you’ll find copyright statements for each individual author, which are separate from the terms of the open source license.
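
    As an illustration, a typical header looks something like this (the names and years are hypothetical) — note how the per-author copyright notices sit alongside, but separate from, the license grant:

    ```c
    /*
     * Copyright (C) 2021  Jane Doe          <- ownership: each author
     * Copyright (C) 2023  Example Corp.     <- (or their employer) holds copyright
     *
     * This program is free software: you can redistribute it and/or modify
     * it under the terms of the GNU General Public License as published by
     * the Free Software Foundation, either version 3 of the License, or
     * (at your option) any later version.   <- licensing: terms the owners chose
     */
    ```

    The copyright lines say who owns the code; the paragraph below them is the permission the owners grant. Removing the notices doesn’t change who owns it.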

    I’m also an artist and musician and, under the current laws as they exist today, I own the copyright to any artwork or music that I happen to create by default. If someone wants to use my artwork or music, they can either (a) get a license from me, which will likely involve some kind of payment, or (b) successfully argue that the way they are using my work is a “fair use” of copyrighted material. Alternatively, I can publish my work under a permissive license like Creative Commons, or dedicate it to the public domain, and AI companies can use it as they please, because that permission is baked into the terms.

    Long story short, whether it’s code or artwork, the person who makes the work (or otherwise pays for the work to be made on the basis of a contract) owns the rights to that work. They can choose to license that work permissively (GPL, MIT, CC, public domain, etc.) if they want, but they still hold the copyright. If Entity X wants to use that copyrighted work, they either have to have a valid license or be operating in a way that can be defended as “fair use”.

    tl;dr: Advocate for open models, not copyright

    TLDR: Copyright and open source/data are not at odds with each other. FOSS code is still copyrighted code, and the GPL is a relatively strict and restrictive license, which is good in some cases and not in others, depending on how you look at it. This isn’t something I’m advocating for; it’s the current copyright framework that everything in the modern world is built on.

    If you believe that abolishing copyright entirely to usher in a totally AI-driven future is the best path forward for humanity, then you’re entitled to think that.

    But personally I’ll continue to advocate for technology which empowers people and culture, and not the other way around.



  • If you look at a hundred paintings of faces and then make your own painting of a face, you’re not expected to pay all the artists that you used to get an understanding of what a face looks like.

    That’s because I’m a human being. I’m acting on my own volition, and while I’ve observed artwork, I’ve also had decades of life experience observing faces in reality. Just as importantly, my ability to produce artwork (and thus my potential to impact the market) is limited, and I’m not owned by or beholden to any company.

    “AI” “art” is different in every way. The model is fed a massive dataset of copyrighted artwork and has no experiences or observations of its own. It is property, not a free or independent being. And it can churn out a massive amount of content based on its training data in no time at all, posing a significant challenge to markets and to the livelihoods of human creative workers.

    All of these are factors in determining whether it’s fair to use someone else’s copyrighted material, which is why it’s fine for a human being to listen to a song and play it from memory, but it’s not fine for a tape recorder to do the same (bootlegging).

    Btw, I don’t think this is a fair use question, it’s really a question of whether the generated images are derivatives of the training data.

    I’m not sure what you mean by this. Whether something is derivative or not is one of the key questions used to determine whether an unlicensed use of someone else’s copyrighted work is fair, as in fair use.

    AI training is using people’s copyrighted work, and doing so almost exclusively without knowledge, consent, license, or permission, so that’s absolutely a question of fair use. The companies either need to pay for the rights to use people’s copyrighted work OR they need to prove that their use of that work is “fair” under existing laws. (Or we need to change/update/overhaul the copyright system.)

    Even if AI companies were to pay the artists and had billions of dollars to do it, each individual artist would receive a tiny amount, because these datasets are so large.

    The amount that artists would be paid would be determined by negotiation between the artist (the rights holder) and the entity using their work. AI companies certainly don’t get to unilaterally decide what people’s art licenses are worth, and different artists’ work would command different prices in the end. There would end up being some kind of licensing contract, which artists would have to agree to.

    Take Spotify for example, artists don’t get paid a lot per stream and it’s arguably not the best deal, but they (or their label) are still agreeing to the terms because they believe it’s worth it to be on those platforms. That’s not a question of fair use, because there is an explicit licensing agreement being made by both parties. The biggest artists like Taylor Swift negotiate better deals because they make or break the platform.

    So back to AI: if all that sounds prohibitively expensive, legally fraught, and generally unsustainable, that’s because it probably is. Another huge tech VC bubble just waiting to burst.