
I would even put 'functioning' in quotes. Since the LLM bot is spewing out text based on its inputs, I'm not sure at exactly what point your own inputs start to cause it to generate text that apparently 'veers off' from its 'safeguards'. The way these things are built, no one knows exactly what ... 1/