Do you think LLMs and other large general-purpose models could or should be legislated to make them more regulated and predictable? The teenager committing suicide after speaking with ChatGPT at length is horrendous.
LLMs will not become perfectly predictable with current tech. What regulation will likely bring instead is intense surveillance, individual risk scores, and reporting of suspicious behavior to authorities: pre-crime detection with a very high false-positive rate.
Does that require more urgent action than teens committing suicide after using social media, or with no tech involved at all?
If a person spoke to a teenager, mentioned suicide thousands of times, and told them how to make a suitable noose, then yes, action should be taken.