Spam-like, for sure, whether human or AI.
GPT-3 (or 3.5, I suppose to be exact) is certainly opening the public's eyes to what large language models and "generative AI" is capable of these days. It's fairly impressive, but most likely didn't realize that a vast percentage of syndicated news content was already AI generated.
In my line of work, there's now some very interesting discussion around security implications, though - data security (like governance of origin data [used to train models], and access rights to conglomerated AI works of "art"), intellectual property implications, and quite a few more. Various researchers have already shown how these models (including GPT-3) can be manipulated for nefarious reasons - such as poisoning the output with embedded instruction in the request - think of this as similar to old school SQL injection attacks against websites where the attacker would embed SQL (database query commands) into website form fields, hoping the back-end application would process them and dump the results. But in this case, the model would be forced to give manipulated output - funny, perhaps, if just playing with the interface, not so funny if an attacker is able to do this within an app that's using GPT-3 for something like a service chat bot.
Obviously, beyond manipulation of the model, itself, there's the "deep fake" concerns - as you're essentially raising here. But, add it to the list of domains where we have never-ending (and escalating) combat with the bad guys. This is where concepts like "adversarial AI" come in to play. Think of two AI models - a white hat and a black hat - that can essentially be used for keeping up with the Jones. So, as good as the next deep fake gets, there will be models that can detect and expose them. The "detection" model is essentially trained on the output of the generative AI model - nuances in phrases, construct of sentences / grammar, length (GPT-3 is known to be "verbose"), etc. would all play a role in essentially detecting the fakes. For many years, very basic machine learning has been used in fraud detection and spam detection (think e-mail filters)... but, yeah, the game will need to be upped considerably here... mainly in context to ensuring that the technology for such fraud/fake/spam/etc. detection is not only exceptional at its job but also open (for trust purposes) and widely available.
As a technologist, I get the urge to always push the ball forward. But, there are a handful of domains, and this is one of them, where we'll live to regret many of the decisions we're making. As innocent as it may appear, and for all the benefits it can offer, we've mentally explored and pontificated around the many issues that arise if we're ever able to achieve sentient artificial life. The thought experiments go as far back as the 1940's and Issac Asimov's three laws of robotics. Even then, people understood the danger of creating artificial life that may, in turn, understand its own existence and decide to become genocidal. Now, as impressive and convincing as ChatGPT may be, it's not sentient. As I stated in another thread (off topic) recently, we don't really understand what separates logic (like pattern recognition in a neural network) from sentience... and simply having convincing, human-sounding text-conversations doesn't meet the bar... thankfully. But, I think we'll see it in our lifetime... and that cat won't go back in the bag, or the many systemic, philosophical, and far-reaching issues it'll bring with it. And, that's just scratching the surface - not getting to ugliness of considering what kind of ambitions, temperament, and "personality" sentient artificial life would have... or what governs it. It's not like biological life with genetic markers that we can explore.
Like I said, I get the urge to move technology forward... but this is one domain where I'm not convinced we have the capacity to deal with the implications of our creation... put it in the camp of nuclear weapons. As the old timers would say, "it's all fun and games until..."