Chris Carollo @crazytalk.bsky.social

It's for sure weird that it can do that sort of thing, with deep domain knowledge and a great ability to explore the issue with me and eventually explain what the problem was...and still maybe get state palindromes wrong! But that just means it's a tool people should use carefully, IMO.

aug 5, 2025, 5:58 am • 0 0

Replies

Chris Carollo @crazytalk.bsky.social

I'm not a "vibe coding" person, and I wouldn't use LLM-generated results without reading and understanding them myself. I just think the "they're useless" camp is missing the boat, and should really explore when and where they can be useful. Because I think they really can be!

aug 5, 2025, 6:00 am • 0 0
hypozeuxis.bsky.social @hypozeuxis.bsky.social

I've seen generative AI do some really cool things that humans basically can't (Matt Parker did a video about jigsaws with two distinct "correct" solutions), but expecting correct complex answers requiring some creativity is off the table for me. Your debugging example seems to me more like #1 (cont'd)

aug 5, 2025, 6:09 am • 1 0
hypozeuxis.bsky.social @hypozeuxis.bsky.social

Generating code coverage test cases and flagging discrepancies in responses is the sort of repetitive, detailed work that computers are excellent at - relatively little "intelligence" needed once the parameters are properly specified. But analyze obfuscated code? Good luck, don't get your disk wiped

aug 5, 2025, 6:12 am • 2 0