They somehow manage to filter smut (presumably after generation but before transmission), so some processing of results is possible, the same way that human input can be filtered for harmful content.
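In pipeline terms, that's just a moderation pass sitting between the model and the client. A minimal sketch of the idea (the `generate` and `is_harmful` functions here are hypothetical stand-ins, not any vendor's actual API; real systems typically use a dedicated moderation classifier rather than a keyword list):

```python
# Sketch of post-generation, pre-transmission filtering.
# `generate` and `is_harmful` are placeholders for illustration only.

BLOCK_MESSAGE = "Sorry, I can't help with that."

def generate(prompt: str) -> str:
    """Stand-in for the LLM call."""
    return "some model output for: " + prompt

def is_harmful(text: str) -> bool:
    """Stand-in for a moderation check (keyword list, classifier model, etc.)."""
    banned = {"banned_term_a", "banned_term_b"}
    return any(word in text.lower() for word in banned)

def respond(prompt: str) -> str:
    # 1. Let the model generate freely.
    draft = generate(prompt)
    # 2. Check the result before it ever reaches the user.
    if is_harmful(draft):
        return BLOCK_MESSAGE
    # 3. Only transmit outputs that pass the check.
    return draft

if __name__ == "__main__":
    print(respond("tell me a story"))
```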
And that's exactly why people on algorithmically moderated social media say things like "corn", "adult fun time", and "un-alive". You can still prompt-engineer LLMs into giving out info they aren't supposed to or producing smut. You can stop some of it, but not all, and it gets harder the longer a session goes on.
Yeah, that's fair. My general attitude is to ban AI.