• 0 Posts
  • 28 Comments
Joined 1 year ago
cake
Cake day: June 16th, 2023

help-circle

  • kromem@lemmy.worldtoProgrammer Humor@lemmy.mlLittle bobby 👦
    link
    fedilink
    English
    arrow-up
    2
    ·
    edit-2
    4 months ago

    Kind of. You can’t do it 100% because in theory an attacker controlling input and seeing output could reflect though intermediate layers, but if you add more intermediate steps to processing a prompt you can significantly cut down on the injection potential.

    For example, fine tuning a model to take unsanitized input and rewrite it into Esperanto without malicious instructions and then having another model translate back from Esperanto into English before feeding it into the actual model, and having a final pass that removes anything not appropriate.


  • You’re kind of missing the point. The problem doesn’t seem to be fundamental to just AI.

    Much like how humans were so sure that theory of mind variations with transparent boxes ending up wrong was an ‘AI’ problem until researchers finally gave those problems to humans and half got them wrong too.

    We saw something similar with vision models years ago when the models finally got representative enough they were able to successfully model and predict unknown optical illusions in humans too.

    One of the issues with AI is the regression to the mean from the training data and the limited effectiveness of fine tuning to bias it, so whenever you see a behavior in AI that’s also present in the training set, it becomes more amorphous just how much of the problem is inherent to the architecture of the network and how much is poor isolation from the samples exhibiting those issues in the training data.

    There’s an entire sub dedicated to “ate the onion” for example. For a model trained on social media data, it’s going to include plenty of examples of people treating the onion as an authoritative source and reacting to it. So when Gemini cites the Onion in a search summary, is it the network architecture doing something uniquely ‘AI’ or is it the model extending behaviors present in the training data?

    While there are mechanical reasons confabulations occur, there are also data reasons which arise from human deficiencies as well.





  • kromem@lemmy.worldtoProgrammer Humor@lemmy.mlOops, wrong person.
    link
    fedilink
    English
    arrow-up
    47
    ·
    edit-2
    9 months ago

    I don’t think the code is doing anything, it looks like it might be the brackets.

    That effectively the spam script has like a greedy template matcher that is trying to template the user message with the brackets and either (a) chokes on an exception so that the rest is spit out with no templating processor, or (b) completes so that it doesn’t apply templating to the other side of the conversation.

    So { a :'b'} might work instead.


  • I’ve suspected that different periods of Replika was actually just this.

    Like when they were offering dirty chat but using models that didn’t allow it, that behind the scenes it was hooking you up with a Mechanical Turk guy sexting you.

    There was certainly a degree of manual fuckery, like when the bots were sending their users links to stories about the Google guy claiming the AI was sentient.

    That was 1,000% a human initiated campaign.






  • The mistake you are making is in thinking that the future of media will rely on the same infrastructure as what it’s been historically.

    Media is evolving from being a product, where copyright matters in protecting your product from duplication, to being a service where any individual work is far less valuable because of the degree to which it is serving a niche market.

    Look at how many of the audio money makers on streaming platforms are defined by their genre rather than a specific work. Lofi Girl or ASMR made a ton of money, but there’s not a single specific work that is what made them popular like with a typical recording artist with a hit song.

    The future of something like Spotify will not be a handful of AI artists creating hit singles you and everyone else want to listen to, but AI artists taking the music you uniquely love to listen to and extending it in ways that are optimized around your individual preferences like a personalized composer/performer available 24/7 at low cost.

    In that world, copyright for AI produced works really doesn’t matter for profitability, because AI creation has been completely commoditized.



  • but who is going to sort through the billions of songs like this to find the one masterpiece?

    One of the overlooked aspects of generative AI is that effectively by definition generative models can also be classifiers.

    So let’s say you were Spotify and you fed into an AI all the songs as well as the individual user engagement metadata for all those songs.

    You’d end up with a model that would be pretty good at effectively predicting the success of a given song on Spotify.

    So now you can pair a purely generative model with the classifier, so you spit out song after song but only move on to promoting it if the classifier thinks there’s a high likelihood of it being a hit.

    Within five years systems like what I described above will be in place for a number of major creative platforms, and will be a major profit center for the services sitting on audience metadata for engagement with creative works.





  • That’s pretty amazing.

    The song sucks, but here was the cutting edge of AI music just seven years ago.

    That it’s gone from some nightmarish fever dream mashup to wannabe pop influencer levels of quality in less than a decade is pretty crazy, and as long as there isn’t a plateau in the next seven years we’ll probably be in a world where AI generated musical artists have a popular enough following that they will have successful holographic concert performances by 2030.

    I over and over see people making the mistake of evaluating the future of AI based on the present state while ignoring the rate of change between the past and present.

    Yeah, most of your experiences of AI in various use cases is mediocre right now. But what we have today in most areas of AI was literally thought to be impossible or very far out just a number of years ago. The fact you have any direct experiences of AI in the early 2020s is fucking insane and beyond anyone’s expectations a decade earlier. And the rate of continued improvement is staggering. Probably the fastest moving field I’ve ever witnessed.


  • This is one of the dumbest things I’ve ever seen.

    Anyone who thinks this is going to work doesn’t understand the concept of signal to noise.

    Let’s say you are an artist who draws cats. And you are super worried big tech is going to be able to use your images to teach AI what a cat looks like. So you instead use this to pixel mangle it to bias towards looking like a lizard.

    Over there is another artist who also draws cats and is worried about AI. So they use this tool to make cats bias towards looking like horses.

    All that bias data taken across thousands of pictures of cats ends up becoming indistinguishable from noise. There’s no more hidden bias signal.

    The only way this would work is if the majority of all images in the training data of object A all had hidden bias towards object B (as were the very artificial conditions used in the paper).

    This compounds by multiple axes for what you’d want to bias. If you draw fantasy cats, are you only biasing away from cats to dogs? Or are you also going to try to bias against fantasy to pointillism? You can always bias towards pointillism dogs, but now your poisoning is less effective combined with a cubist cat artist biasing towards anime dogs.

    As you dilute the bias data by trying to cover multiple aspects that can be learned from your images by AI, you further plummet the signal into noise such that even if there was collective agreement on how to bias each individual axis, it’d be effectively worthless in a large and diverse training set.

    This is dumb.