Audi R8 Forums banner
1 - 16 of 16 Posts

· Registered
Joined
·
1,815 Posts
Discussion Starter · #1 ·
Is AI actively scanning and replying to posts in this website?

Look at this back and forth over the last few posts and it sure seems to me like either state sponsored or AI automated.


 

· Registered
Joined
·
617 Posts
Is AI actively scanning and replying to posts in this website?

Look at this back and forth over the last few posts and it sure seems to me like either state sponsored or AI automated.


Strange that you ask this question :unsure:
 

· Registered
Joined
·
984 Posts
Is AI actively scanning and replying to posts in this website?

Look at this back and forth over the last few posts and it sure seems to me like either state sponsored or AI automated.


I think it is automated (AI). English is too proper.
 

· Registered
Joined
·
1,815 Posts
Discussion Starter · #10 ·
This is getting weirder by the minute. Here is another thread that appears to have angry AI responding.

 

· Premium Member
Joined
·
6,111 Posts
what does @ezmaass think about this?
Spam-like, for sure, whether human or AI. :)

GPT-3 (or 3.5, I suppose to be exact) is certainly opening the public's eyes to what large language models and "generative AI" is capable of these days. It's fairly impressive, but most likely didn't realize that a vast percentage of syndicated news content was already AI generated.

In my line of work, there's now some very interesting discussion around security implications, though - data security (like governance of origin data [used to train models], and access rights to conglomerated AI works of "art"), intellectual property implications, and quite a few more. Various researchers have already shown how these models (including GPT-3) can be manipulated for nefarious reasons - such as poisoning the output with embedded instruction in the request - think of this as similar to old school SQL injection attacks against websites where the attacker would embed SQL (database query commands) into website form fields, hoping the back-end application would process them and dump the results. But in this case, the model would be forced to give manipulated output - funny, perhaps, if just playing with the interface, not so funny if an attacker is able to do this within an app that's using GPT-3 for something like a service chat bot.

Obviously, beyond manipulation of the model, itself, there's the "deep fake" concerns - as you're essentially raising here. But, add it to the list of domains where we have never-ending (and escalating) combat with the bad guys. This is where concepts like "adversarial AI" come in to play. Think of two AI models - a white hat and a black hat - that can essentially be used for keeping up with the Jones. So, as good as the next deep fake gets, there will be models that can detect and expose them. The "detection" model is essentially trained on the output of the generative AI model - nuances in phrases, construct of sentences / grammar, length (GPT-3 is known to be "verbose"), etc. would all play a role in essentially detecting the fakes. For many years, very basic machine learning has been used in fraud detection and spam detection (think e-mail filters)... but, yeah, the game will need to be upped considerably here... mainly in context to ensuring that the technology for such fraud/fake/spam/etc. detection is not only exceptional at its job but also open (for trust purposes) and widely available.

As a technologist, I get the urge to always push the ball forward. But, there are a handful of domains, and this is one of them, where we'll live to regret many of the decisions we're making. As innocent as it may appear, and for all the benefits it can offer, we've mentally explored and pontificated around the many issues that arise if we're ever able to achieve sentient artificial life. The thought experiments go as far back as the 1940's and Issac Asimov's three laws of robotics. Even then, people understood the danger of creating artificial life that may, in turn, understand its own existence and decide to become genocidal. Now, as impressive and convincing as ChatGPT may be, it's not sentient. As I stated in another thread (off topic) recently, we don't really understand what separates logic (like pattern recognition in a neural network) from sentience... and simply having convincing, human-sounding text-conversations doesn't meet the bar... thankfully. But, I think we'll see it in our lifetime... and that cat won't go back in the bag, or the many systemic, philosophical, and far-reaching issues it'll bring with it. And, that's just scratching the surface - not getting to ugliness of considering what kind of ambitions, temperament, and "personality" sentient artificial life would have... or what governs it. It's not like biological life with genetic markers that we can explore.

Like I said, I get the urge to move technology forward... but this is one domain where I'm not convinced we have the capacity to deal with the implications of our creation... put it in the camp of nuclear weapons. As the old timers would say, "it's all fun and games until..." :)
 
  • Like
Reactions: JuddS

· Premium Member
Joined
·
207 Posts
Is AI actively scanning and replying to posts in this website?

Look at this back and forth over the last few posts and it sure seems to me like either state sponsored or AI automated.


It certainly has the characteristics and style of AI or ChatGPT language.
 

· Premium Member
Joined
·
207 Posts
Spam-like, for sure, whether human or AI. :)

GPT-3 (or 3.5, I suppose to be exact) is certainly opening the public's eyes to what large language models and "generative AI" is capable of these days. It's fairly impressive, but most likely didn't realize that a vast percentage of syndicated news content was already AI generated.

In my line of work, there's now some very interesting discussion around security implications, though - data security (like governance of origin data [used to train models], and access rights to conglomerated AI works of "art"), intellectual property implications, and quite a few more. Various researchers have already shown how these models (including GPT-3) can be manipulated for nefarious reasons - such as poisoning the output with embedded instruction in the request - think of this as similar to old school SQL injection attacks against websites where the attacker would embed SQL (database query commands) into website form fields, hoping the back-end application would process them and dump the results. But in this case, the model would be forced to give manipulated output - funny, perhaps, if just playing with the interface, not so funny if an attacker is able to do this within an app that's using GPT-3 for something like a service chat bot.

Obviously, beyond manipulation of the model, itself, there's the "deep fake" concerns - as you're essentially raising here. But, add it to the list of domains where we have never-ending (and escalating) combat with the bad guys. This is where concepts like "adversarial AI" come in to play. Think of two AI models - a white hat and a black hat - that can essentially be used for keeping up with the Jones. So, as good as the next deep fake gets, there will be models that can detect and expose them. The "detection" model is essentially trained on the output of the generative AI model - nuances in phrases, construct of sentences / grammar, length (GPT-3 is known to be "verbose"), etc. would all play a role in essentially detecting the fakes. For many years, very basic machine learning has been used in fraud detection and spam detection (think e-mail filters)... but, yeah, the game will need to be upped considerably here... mainly in context to ensuring that the technology for such fraud/fake/spam/etc. detection is not only exceptional at its job but also open (for trust purposes) and widely available.

As a technologist, I get the urge to always push the ball forward. But, there are a handful of domains, and this is one of them, where we'll live to regret many of the decisions we're making. As innocent as it may appear, and for all the benefits it can offer, we've mentally explored and pontificated around the many issues that arise if we're ever able to achieve sentient artificial life. The thought experiments go as far back as the 1940's and Issac Asimov's three laws of robotics. Even then, people understood the danger of creating artificial life that may, in turn, understand its own existence and decide to become genocidal. Now, as impressive and convincing as ChatGPT may be, it's not sentient. As I stated in another thread (off topic) recently, we don't really understand what separates logic (like pattern recognition in a neural network) from sentience... and simply having convincing, human-sounding text-conversations doesn't meet the bar... thankfully. But, I think we'll see it in our lifetime... and that cat won't go back in the bag, or the many systemic, philosophical, and far-reaching issues it'll bring with it. And, that's just scratching the surface - not getting to ugliness of considering what kind of ambitions, temperament, and "personality" sentient artificial life would have... or what governs it. It's not like biological life with genetic markers that we can explore.

Like I said, I get the urge to move technology forward... but this is one domain where I'm not convinced we have the capacity to deal with the implications of our creation... put it in the camp of nuclear weapons. As the old timers would say, "it's all fun and games until..." :)
Was that really you, ezmaass, or ChatGPT? 😳
 
  • Haha
Reactions: Nagengast

· Registered
Joined
·
1,815 Posts
Discussion Starter · #16 ·
I think we should rate how stupid people are. I've always enjoyed thinking through how dumb people are. I find it fascinating how much of a difference there is in intelligence and how that plays out in group dynamics. And its really fun to see how people can be so smart in one thing and so dumb in another.
 
1 - 16 of 16 Posts
Top