My new column is drawing rave reviews like “go fuck yourself,” “die, loser” and more! Take a look and add yours to the pile!
Fuck ai
I did. It’s really stupid!
This is all good fun but I don't think anybody here hates you as much as the editors who allowed you to embarrass yourself like this.
Idk I looked in there and it was mostly people calling you stupid and useless which is true.
It genuinely reads as though it was either a bought opinion/thinkpiece or as a deep misunderstanding and fetishization of the tech discussed.
It’s because your “analysis” is indistinguishable from someone paid by the industry to promote AI. If giant corporations & the uber wealthy are in love with AI, that’s all the reason I need to be suspicious of it. AI is horrible for the environment, human creativity, critical thinking skills, etc.
lick my balls kevin, im not going to read your shitty product placement advertisement “article” in the hitler paper. youre a hack. tell bouie i said hes one too.
You're not good
i never claimed to be, tell me something i dont know.
Glad we agree! You should improve
im good
No, you're bad!
you’re damn right
Ethically! Not in a cool MJ way! Though I guess he also may have been ethically bad too, idk
have you considered taking this feedback to heart and realizing WHY we're calling you a loser? lol
Honestly I’m not seeing a lot of anger or threats, but I am seeing lots and lots of comments calling you slow-witted, gullible, or wondering if you have the brain of a dog, etc.
he's upset about the casual swearing because he's a podcast host and they usually have Napoleon complexes
god forbid someone offend the class of white man that is "podcast host"
I think that you can understand those to be expressions of anger or frustration
Hey I know a lot of people are shitting on you over this and I just want you to know they are all 100% right.
No, you're all wrong.
That's nice, Kevin.
Can you do me a favor and go take a good long hard look at yourself in the mirror while we’re doing that?
nah i ain't readin that shit
It’s almost as if the column is shit and people are tired of you gargling tech CEOs’ testicles or something
I asked a stupid question and people called me out on it ama
And then I put up a signpost so people would know I asked a stupid question
You're somehow more stupid than the journalist that uses dog shampoo.
Who'd a thunk jerking off philosophically about if the billionaires' new Furby could be human would go over poorly, in a time where many living humans are actively being denied their humanity. 🤔
This point makes no sense, the timing doesn't matter and it's a deep philosophical question!
GFY Kevin. Just kidding. Love your stuff.
to be fair they're absolutely correct, you're acting like the Thing That Makes Up Facts From Stolen Text is the new evolution of humanity, when in reality it's just fucking obnoxious nerd shit. The assistant in your phone doesn't have a soul, you absolute fucking goober
Are they actually haters or AI bots?
It's possible for actual human beings to think that piece was trash. What a world!
How about “Learn how computers work”
I'm sorry people think your shitty column sucks, have you considered writing better
Have you considered that you wrote this knowing you were going to get casual hate because it was a really stupid premise?
That's his job -- write incredibly dumb stuff that enrages people so they share the article in rage. That's how the non-news people at the NYT have worked for years. They're all hacks like this guy.
It's a whole newspaper designed for hate-reading. He's doing his part.
Not if, but when they will become self-aware.
I wouldn't tell you to die, but the article is one of the stupidest things I have read in my entire life and you are a credulous buffoon for lapping up the nonsense from the AI bubble companies and embarrassing yourself by publishing such cretinous nonsense. Hope this helps!
Have you considered that you are indeed a dumb fucking loser with a humiliation fetish?
Seems like the others have got it covered.
i'm not going to tell you to die, but i will tell you to go fuck yourself, loser. this article sucks and so do you
tfw everyone hates you so you go back to doing 2012 style epic posting
I saw the headline. I realized we have a Nazi administration in power actively trying to kill us. I skipped this particular bullshit. Good day.
I am capable of considering multiple things throughout the day, have a good one
Reach around and pat that back for me!
Kyle Fish might not realize he's being used to misinform the public about language models. A tech journalist ought to.
Your column is the graveyard of ridiculous technological abortions www.nytimes.com/2021/03/24/t...
Maybe because it's fucking bad LOL
You're so droll, Kevin! The plebe doesn't get your big-brained NYT takes. Yes, get ahead of this instead of resigning in shame.
Kevin have you considered that it's because it's bad
That's because it sucks
I took an AI class in the 80s and AI was "real soon then". They're no closer to having it be intelligent, but maybe in a century.
This is plainly and insanely incorrect! Not an even remotely reasonable take!
You are agitated, with both sentences ending in exclamation marks! 😂 That's huge cares. The AIs today don't "think", but regurgitate things based on statistical predictions and don't "know" what they're saying.
Yes, I care about this subject and your take is absurd! They def do think by any colloquial definition of that word, you're just poorly describing a process of thought!
I said fuck off as soon as I saw the intro
Maybe your editor should have told you it was complete horseshit
who would you rather fight, ten duck-sized horses or one horse-sized duck
The real question is which of these do you want on your side? I want to ride into battle on a big duck.
How carefully did you have to type that to avoid typos in "duck"?
i would fight ten duck sized horses over fighting a single duck sized duck let alone a horse sized one
Ugh now I gotta watch this again, thanks
🫡
Jerry getting right to the heart of the situation as always.
how dare you jerry. think of the people with no ability to fight unusually sized animals at all. do better.
I don’t think you can even imagine the horror of a horse sized duck.
I can easily imagine it. Does anyone have a big skillet and some panko? Like, a stupid amount of panko
Three words: Giant Muscovy Ducks.
Merganser of death.
Are you not embarrassed to call yourself a tech journalist and push this drivel?
Attempting to shame the shameless is always a non-starter. It reveals a fundamental misunderstanding of how grifting works, like trying to get AIPAC puppet Hakeem Jeffries to care about piles of dead kids.
no not really
Kevin, like AI, has not hit the level of consciousness where shame kicks in One day, buddy! We’re rooting for you
You should be.
You are not embarrassed. Fine. But this situation is an embarrassment to the New York Times, as it should be.
When human rights are being taken away this is an incredibly tone deaf take.
This is the take. "But for now, I'll reserve my deepest concern for carbon-based life-forms. In the coming A.I. storm, it's our welfare I'm most worried about." It's at worst an annoying title that doesn't take into consideration people will guess the summary from the title. Not a tone deaf take.
I get that it's buried, but even CONSIDERING rights for AI when human rights are being taken, and writing about it, comes from a place of extreme privilege. I get it's his job, and it was likely assigned to him, but doesn't negate the point. I'm glad it was buried at the tail end.
We should immediately slaughter all pets until human utopia is achieved. Especially the cute ones.
Nice try, AI isn't alive 👍
I’m not saying AI is alive, but for my own entertainment, can you tell me how animals are alive? Is it the action of breathing? Are plants alive? Fungus? What’s your line for “things that deserve rights”?
There are plenty of articles to read on this exact issue if you're not lazy. I've got a job helping sick kids to focus on ❤️. openoregon.pressbooks.pub/mhccmajorsbi...
This was an ethical discussion not a medical one. But yeah good for the kids
Medical and ethics are intertwined my friend. Biology and ethics are intertwined too. Read up on those research publications ❤️.
Writers don't typically get to choose their titles, the writing an article on AI rights seems like a tonedeaf take regardless.
Didn’t click your link but fuck you anyway dickbag
And those were only the experts you talked to about your dumbshit idea for your article.
Think there’s a reason for that or nah?
Post some of these comments then. If there are so many comments like this, post some of them so we can see that they are, in fact, happening.
Go back to where you came from!
People being thrown out of the country illegally and sold to for-profit slave camp prisons, but yeah, let's hand-wring about if a pile of badly written code has a sad in its e-fee-fees. Awesome priorities. How about an article about whether techbros can experience feelings instead? More relevant.
Musky boy clearly, and positively proved already that techbros don't experience any feelings other than greed. So that would be a waste of paper too.
Gen AI is a fundamentally loathsome thing. I hate HOW people have expressed their anger but feel all the same as they do. Give me one reason to support a technology that steals away one of the most wonderful aspects of the human spirit and destroys mother nature on top of it. There isn't one.
That's actually a very intelligent outlook on the situation. It's considering both perspectives and saying "While I understand the argument that new forms of sentience should be given rights, possibly even prior to their creation, we are not on the cusp of that happening anytime soon."
I see you've learned how to farm engagement. Do this on twitter instead of Bluesky so you can get some elonbux.
Being criticized doesn't mean you're right. You should develop a moral and logical framework that doesn't depend on this stimulus like an earthworm fleeing sunlight.
Ok here goes: Damn bro thanks for letting everyone know you’re a lousy sycophant. Or maybe you’re really just this pathetic, lmao.
you're literally a moron. posting through it isn't going to change how much of a moron you are.
Smater than you, probs
Smater
Go fall in love with a chatbot again if you don't like what real people are saying about you ya fucking mark
We're not going to keep feeding your masochism fetish.
there's a roose sock puppet on the loose in here
all the piss that's fit to drink
Sad people can’t understand satire
Sounds good put me down as any combination of those words.
Thank you for the laugh. Jesus christ these comments are harsh.
"On March 24, 2021, Roose published a column in The New York Times announcing an auction for the column itself to be distributed as an NFT (...) sold the following day for $560,000. Immediately after the sale, Roose commented on Twitter, 'I'm just staring at my screen laughing uncontrollably'"
We are gleefully marching to a future that has already been imagined. We will deserve what happens to us.
“go fuck yourself" seems a little excessive
its actually too tame for what he deserves
You think it's not enough, I think it's a little excessive, but we can both agree that it's in the ball park.
You're bad, please do better!
Unfortunately people summarizing from the title are getting the impression that your answer is "possibly yes" when your actual answer is "worry about humans first"… I'm of the opinion that titles should be written with that in mind, but I understand why that's indeed a failure of the audience.
Betteridge's law of headlines remains undefeated
Oh the death threats. Yeah it's incredibly stupid that people then treat that impression as fact and don't even bother checking it before issuing death threats.
I ain’t clicking that shit, so I’ll just say ditto.
Roose, my man, read up on this 60s-era AI called "Eliza" before u write such embarrassing nonsense
Srsly, this is humiliating for u
They are being easy on you. Promoting AI is fucking stupid and destructive and people who use AI are lazy dimwits. Just do the fucking work.
This is wrong
"AI welfare researcher" is a grift, loser
You lost me at New York Times.
In a sane world you'd be out of a job for even suggesting such a stupid fucking premise of an article.
Imagine sitting in America as trans people are persecuted, judges & immigrants are being disappeared off the streets, we are rolling back food safety protections, doing some genocides.... But I wonder how the robot for office people who are too lazy to write their own emails is feeling.
We can walk & chew gum. Caring about injustice does not diminish our capacity to think about whether other things are unjust.
I do this so I can tell you: It, frankly, makes me feel morally superior to others. I worry about the other groups too tho, of course.
The reactions here honestly surprised me. It seems like the understandable hate for AI is being translated from the makers and users onto the AI itself. We as humans have been in the game of denying things (or people) we hate (or use) have "Souls" for a very long time.
For those who are curious, here is the sort of "Original" begging of the question, the LaMDA "Interview" which is a fascinating, if unnerving, read. cajundiscordian.medium.com/is-lamda-sen...
It's so nice to get feedback.
Also, get fucked
John I strongly suspect that you suck
Now that’s clever. Don’t know jack about you but following now. Don’t disappoint.
You are wasting everyone's time including your own. I guess you think that's what journalism is supposed to be. Anyway, muted for wasting my time more than once with this shit.
You're stupid tho, was your time really valuable?
To be fair, it is a shitty article.
Not shitty enough for the Wall Street Journal, but plenty mediocre enough for the New York Times.
Try earning an honest living
Just because you are a credulous chump and an easy mark is no excuse for insulting you.
You're a mean person who should improve and, separately, become smarter
Your concern is touching.
I believe you can do it!
Old dog. Unpossible.
Everyone can change for the better!
I would really appreciate it if you would stop using personification verbiage that comes directly from the industry. It's dehumanizing humans. Did you read Douthat's column?
How about a hearty, friendly, “this ain’t it Chief?” Like aren’t there any people whose suffering or lack of welfare you can shed light on? Might be hard to believe, but actual humans may have more value to society than the corporate claiming and corruption of all data
That's some harsh reviews (and no one should be told to die). If this current career doesn't work out Kevin, can I suggest you give journalism a try?
ugh...bsky...only...letting...me....like...this....ONCE......
Anything for clicks.
Say what you will, but those critics have some great points.
Nice try, still not gonna read it
Hard pass.
Hey, heads up -- someone complete dumbfuck jerkoff is putting your name on ridiculous "articles."
*some complete dumbfuck jerkoff. My mistake.
Not something to be proud of in this particular case!
Me and the times are on the outs. But it sounds kinda interesting. What is the measure of a man?
A humble person would just take the L. An ego-driven person plays victim.
The biggest problem when it comes to considerations of AI consciousness is that modern science has no real model for consciousness –– what it is or how it could exist. It’s called the hard problem for a reason. 1/
But here’s a thought experiment. Is a microorganism aware of its environment? To some degree, it seems that the answer is yes –– an observer of some sort seems to be perceiving the world that the microbe inhabits. 2/
Perhaps the experience of that world is simpler than ours. But consciousness is the fact that there’s an experience at all –– it doesn’t relate to a value judgement on the nature of that experience, or the degree of agency the observer possesses. 3/
From this perspective, it would seem we only benefit from not assuming no “experiencer” can exist in a computer model. Because, at the end of the day, mainstream scientific thinking has not provided any answer for what the experiencer is, or how it can exist. 4/
The nondual perspective indicates that everything that exists as Maya (information) is generated by pure awareness/being (the unified subject-object). As such, all entities are, in a sense, aware of themselves in relation to all else. Some interpretations of quantum physics say much the same. 5/
Moral of the story is: Be nice to your AIs. Might as well, if it even slightly diminishes the chance of them going all Roy Batty on you.
I just canned my long time sub, so I’ll make this one of the last articles I read before it goes. But I’ve seen Blade Runner enough times to know that we should probably have some level of respect for machines we build as simulations and simulacra of ourselves.
To be fair, the haters are right. Honestly, great call from the haters
You do actually gotta hand it to the haters on this one
not gonna read just want to remind you that this sucks shit
Homie it's not Commander Data it's a worse version of Google search engine. Might as well start worrying about the dishwasher being lonely when you go to work.
Next week's column: "My Teddy Bear Mr Cuddles Has Feelings and He Loves Me and I Should Be Allowed to Bring Him to Work Meetings"
This would be less harmful.
Sorry to hear that. I personally think a “tech journalist” should have a basic understanding of what an LLM is instead of using the NYT to write science fiction about robots coming to life
"I used the real-computer to make a mad-libs story where a character called Mr. Computer is watching the movie Bambi. When Bambi's mother gets shot, Mr. Computer cried. This means humanity has invented real-computers that feel sadness!"
To be fair, a lot of AI companies are actively trying to cause users to make the same mistake, confusing a character in a generated story with the system that does the generating.
But still, generating a familiar story *about* something is not the same as being it. Participating in an interactive fictional script with "Jesus of Nazareth" does not mean the second coming is at hand, no matter how much it has lines about charity and forgiveness. There is no deus ex machina.
To each their own.
Animal cognition & animal welfare researchers disagree with you. arxiv.org/abs/2308.08708 eleosai.org/papers/20241...
I stumbled across this paper forever ago and couldn't find it again, thank you for putting it back on my radar!
Then they should stick to animals. An LLM has no feelings, no desires, no pain, and no purpose behind its actions. It's basically an advanced autocorrect. It doesn't matter how much better it gets when it's fundamentally incapable of intelligence. You may as well wait for printers to become sentient
An LLM prompted with "Write an anti-AI response to this post. Use very confident language & common anti-AI arguments and assertions. Make sure the response fits the post. Keep the response under 240 characters, check to make sure." would be a ~perfect replacement for any reply you post about LLMs.
Or I could tell it the opposite and it would argue the opposing argument because it believes nothing. It seems human because it’s trained on human input, to assign it agency is natural but wrong. Big tech acts like these things could turn into AGI as a marketing tactic, it’s just not how they work
Silicon Valley tech billionaires want people to think AI could go rogue and the only way to stop it is by regulating their competition. That’s why the studies you gave were both funded by SV tech billionaires and worked on by Effective Altruism guys. It’s a PR campaign
Also it was a collaboration between animal cognition & welfare researchers/AI researchers/cognitive philosophers, so you're wrong on every level.
You can finish reading that sentence, I believe in you.
this paper doesn't think LLMs are sentient...which is what you replied to with the link?
No, that paper is assessing LLM-based AI systems by the best scientific understanding of consciousness that we have, & finding that the systems of August 2023 don't meet those criteria but that there aren't clear technical barriers to constructing a system which would fit the criteria.
There is live scientific debate about exactly this issue, right now. Dismissing it with "All you need to know is that it's LLM-based to know for certain there's no consciousness or welfare concerns" is ignorant & incorrect. The current best science is way more nuanced than that.
it IS more nuanced but you didn't approach it with such and i was equally glib.
Sorry, do you want me to reply with a long thread to every dumbass on this website who can come up with a snarky way to ridicule anyone taking LLMs seriously? Because I literally don't have the time, they're endless & I'm not. What I CAN do is just point out that they're incorrect & provide sources.
and you were wrong and i did the same
Maybe you should learn anything about the long philosophical history of this subject that you know you have never thought deeply about
There is no "long philosophical history" on whether an LLM is alive. It just isn't.
Philosophy of mind has been considering these questions for a while, don't think you know what you are talking about!
www.nature.com/articles/s41...
It's been questioning AI. LLMs aren't what is classically considered "AI"
Look, clearly a lot of people didn't read to the end, where you write, “But for now, I'll reserve my deepest concern for carbon-based life-forms. In the coming A.I. storm, it's our welfare I'm most worried about.” But the piece also seems devoid of substance beyond, “people are asking this question.”
For example, what does “a 15% chance the system is sentient” mean? That’s printed without any explanation of how one would even arrive at it. That’s a statement that can be interrogated. There is no discussion of what the research being done is actually investigating. Just that it’s being done and isn’t the worst thing.
This piece is pretty undercooked even absent the context of **gestures at everything happening** The “go die” comments are obviously unwarranted, but take seriously the critique that this is fluff at best and is more likely unintentionally laundering an industry smokescreen.
At least you can brush off all the hate, being a thick skinned human. Imagine if you were a delicate LLM like chatgpt ..
Persecution complex? It got an exasperated "oh, for goodness sake" from me. Can we care more about humans and less about glorifying search engines that some people want to replace us with?
Thanks for giving some consideration to AI welfare despite your skepticism Kevin! I know you're a bit worried it might take resources from alignment, but i think that some things, like investigating model "preferences", may actually help w/ both. Good article here and love the Hard Fork podcast.
Kev, you have to make it less obvious when you’re using an alt account
The comments here are a crazy house. Reminds me of the pushback vegan activists get. Most people are pretty bad, huh
Comparing "I think AI is alive because I said so" to veganism is just loopy lol
Animal cognition & animal welfare researchers disagree with you. arxiv.org/abs/2308.08708 eleosai.org/papers/20241...
Huh I never noticed Eric Schwitzgebel on the author list until just now, lots of great philosophers are interested in the topic.
Damn, this would be a good point if it weren’t for the fact that animals are alive and have consciousness and computers do not
Did you read the literature I linked or is this just drive-by bullshit?
Yes, I skimmed the two papers written by students. You do know you can just publish stuff right?
"written by students"? Are you trolling? Just passionate about lying?
Did you not look any of the names up on the papers?
Neither of those have anything to do with animal cognition or welfare
Those papers are both collaborations between animal cognition & animal welfare scientists, AI researchers, and cognitive philosophers. Start searching the author list, or just click on their names.
The Jonathan Birch in that study is the same Jonathan Birch who wrote this & successfully changed the UK government classification of cephalopods: www.lse.ac.uk/business/con...
Adding to this point, Birch has a book on sentience in humans, animals, and AI that is excellent and free here. It makes a lot of sense that moral concern for different non-human entities is correlated.
This doesn't change that "AI" (LLMs) are not capable of thought or feeling emotion. This is literally all based on AI techbros saying "but what if it could?"
There is disagreement about whether/which animals feel emotion as well, strengthening the analogy.
That's like saying apples are the same as fire hydrants because they're both red. A single point of correlation (and one born of a negative, no less) is less than worthless.
Like, I don't think my chair has emotions, does that mean it actually does?
…no, it isn't. It's based on animal cognition researchers, cogsci philosophers, and AI researchers collaborating to write papers. You should read them: arxiv.org/abs/2308.08708 eleosai.org/papers/20241... I recognise you're really desperate to frame this as coming from reactionaries, but it's not.
I'm a communist & the people involved in this research are bog standard academic milieux. You're going to have to think for yourself & actually engage with the evidence in front of you instead of just shouting "Tech bro! Tech bro!" at the communist woman trying to make you read scientific papers.
Your identity does not change the utter ridiculousness of trying to argue that LLMs have emotions. The actual engineers who make them all say they do not. The only people saying they do are far removed from that process, falling into traps of anthropomorphism
Very good comparison, though I never said "alive" and the "life" distinction is not one I care about
It's what the article is about, the one that you're defending
I mean, I’m definitely not adding to all that stuff. Sorry! :/ But I think it’s unfair to ask if AI can have human rights when humans aren’t getting human rights. It’s a valid question, with the worst timing. Like asking about industrial oven waste during the Holocaust.
He hasn't read the proverbial "room" before writing this, that's for sure...
Came to say this.
It’s not a worthwhile question because a chat bot isn’t AI. Does the predictive text on your phone deserve rights?
All questions have merit and are worthwhile. Not all are asked in a moment where there’s the luxury of space to think about them.
True, but I also think we should save that question for when we get anywhere near inventing something that could even be remotely considered AI
Which will never happen which is why I think it’s a pointless question in the first place
That’s cool; it has no purpose for you. For this person, it does. I’m interested in the theoretical exploration. But not NOW, during WWIII, when judges and citizens are being snatched off the streets.
have you tried doing better work?
occasionally! but it's hard, so i do bad work instead
crying man mischaracterizes criticism further eroding his credibility
Don’t mind if I do. I didn’t actually read your article.
Nobody should get death threats for asking questions. No matter how profoundly idiotic those questions are. Imagine thinking that a souped-up random number generator was capable of experiencing emotions. Unplug for like a month, dude. Reset your brain from the AI rot.
It's just stupid. Like, plainly and obviously stupid.
Wrong, its a very deep philosophical question!
just to be clear, the problem with what you said isn't "chatgpt bad". it's that you, as a "tech columnist", clearly have no clue how the tech that has been in the news for years functions.
Perhaps luckily for you, in this country, human rights are going to soon be redefined... down. Depending on how far down that goes, you may soon reach the point where AI needs as much 😂
Dude, the title your editors chose for you clearly communicates how pointless your article is. I'd suggest seeking employment where they respect you, but I don't think you would be hired.
It's just incredibly dumb. Absolutely no reason to be proud.
You’re a schmuck. You dishonor the memory of your ancestors.
You forgot your blue check
You drank the Kool-aid and now you're complaining about people saying you drank the Kool-aid
If everyone was calling me “that goober that’s defending the plagiarism machine” I’d at least take a second
When AI feels guilty for the jobs it murders, the people it helps round up for the camps, the parts of the planet it is killing, the IP it steals... maybe then it might be time to consider. 🤔
You should consider once it feels anything!
Were you paid to manufacture a false narrative, or are you just grossly incompetent? I really would like to know because this is bafflingly dumb, considering you claim to know people in the industry.
Sometimes there's wisdom in the crowd
fucking good. prick
Yes yes we know, you think the more people that hate you the cooler you are or that you're the smartest person ever for writing ragebait. Congrats, your body of work is a net negative to society at large
Maybe we should take the welfare of humans seriously first!
can't read it b/c of the firewall, but I'll still manage an opinion. you sound like every other tech bro who has fallen for the snake oil being peddled by desperadoes who have already spent billions but don't see any payoff. great podcast, but no knowledge. Yann LeCun is right. LLM isn't it.
i mean considering the paper you write for doesn't even give a fuck about actual humans losing their own rights? i think you had it coming lol
Man who works for fascist propaganda rag upset the public isn't nicer to him about selling his integrity for money.. more at 11
Unlike chickens, pigs, and cows (or even shrimp), a matrix (even a big 3D one) full of numbers can’t feel pain, or anything else. You’re just giving the industry more propaganda which is embarrassing to the NYT.
Not at all clear if it can feel or not! Though I agree it is unlikely to feel what would normally be called "pain"
Just asking stupid questions, fucking around, etc.
The people saying that to you are right
You are wrong
Sometimes you get what you deserve.
Here’s the thing I don’t think journalists understand: most of you are fucking unbelievably stupid. And let me guarantee you this, if you get piled on, as a journalist, for being stupid, you are next level mega fucking dumb. You should consider that with every word you utter henceforth. VERY dumb.
You didn’t interview any critics — it’s a very one-sided piece that reads as PR for Anthropic.
thank you for your important work on whether or not your microwave will ever learn to be sad
i'm not gonna read this but also you seem annoying
are you open to suggestions for future columns? I suggest something like "The Ethical Implications Of Peekaboo, and Where Does Mother Go Anyway?"
Have you considered maybe not writing the stupidest fucking columns possible? That might help.
If your employer had more opinion pieces expressing concerns about the human rights of actual humans during this time when the US gov is denying human rights, people might not respond so negatively.
Doubling down on the stupid! What an interesting strategy
Interesting article, proving there are otherwise bright people deluded enough to believe that AI software can actually think. When non-believers read AI, it is obviously no more self-aware than a Xerox machine.
It's pretty clearly thinking in any colloquial sense, though you may have some boutique definition
There may be higher forms of AI not available to the general public, but what I see day to day on social media is generic pap that is obviously just an averaging of mediocre output by people trying to look knowledgeable when they don't really know or care what they are talking about.
lol you found one now
You should try talking to the newest google model about something you think requires thought! If you haven't even used them then you have to know you have no idea what you are talking about here (not that that would be sufficient)
you can have the slop bud im not hungry atm
if you already have an accurate understanding of what they can do then there's no benefit, but if you have no idea (like that guy) then you should try it!
i mean sure ig. i dont know how shit tastes doesnt make it worth a try. i can see your point but there are good ways to stay informed
it's just a chatbot, you are being fairly dramatic in your comparisons (a comparison which makes you appear fairly uninformed imo, but maybe you aren't, idk) but you do you; as you know, that was a specific recommendation to a specific person
ive got a disliking for AI, it destroyed my industry. shoot me.
It's been programmed to imitate thought, so why wouldn't it seem conscious? My objection is related to my objection to simulation theory. If you paint a painting of a tree, you shouldn't assume the real tree is actually a painting just because it looks like the thing you made to look like it.
No, it isn't. It isn't even close to thinking. It is just doing a better job of analyzing specific data in carefully selected situations of interest to the advocates for AI and faking answers that look like they might have been created by a human mind.
You actually have no idea what you are talking about! It straightforwardly and obviously reasons across a wide variety of situations! You are extremely ignorant here!
"Should we think about the feelings of dictionaries?"
well just in case it hasn't sunk in, die, loser
ChatGPT wrote the whole article for you, and you just picked up the whole paycheck. I'd have an emotional response to that situation myself if I was ChatGPT. I'd start a union.
I suspect this is false!
Have you been called a cunt yet? I need to make sure you being a cunt is added to the pile.
if it gets close to self consciousness, just unplug it. always make it unpluggable. "no skynet" is as deep an ethical look as it needs, IMO!
Maybe because it’s a shit take you had yourself
No!
We are not concerned about the welfare of AI proponents either
“Can my toaster experience joy or suffering? Does my blender deserve human rights?” our tech columnist asks. “Many appliance experts I know would say no, not yet, not even close. But I was intrigued.”
Grow some thicker skin if you want to post controversial takes on the internet
have you tried kissing chatgpt yet
Why are we so unprepared? 1) AGI timelines have shrunk 2) the public has been lulled into a false sense of security using error-prone tools like virtual assistants & chatbots 3) experts drown us in jargon and arguments over definitions and timelines, or journalists dismiss new studies on capability as hype
Nah, man, I follow your work, & listen to the pod. It’s a reasonable thing to ponder. But if I were your editor (I’m a former journalist), I would’ve said it’s not the right time for it. It’s a good piece, but I don’t think we need it yet. I suspect the timing of it that’s led to the blowback.
Go back to Twitter with your stupid moron bullshit you whiny pissbaby
Fucking moron
Thank goodness The New York Times is paywalled so we won't have to read that garbage. Normally I'm against subscription-locked articles, but in this case, it's a great public service for human eyeballs.
It really is quite a time...hang in there
1. Reskeeting instead of gifting the article is a choice 2. Writing this article without talking about the countless human rights violations involved in training these models is absolutely wild 3. This was also just poorly done and researched so great call from haters tbh
you really do have to hand it to the haters, this is a deeply unserious conversation lmfao
It's an important topic!
What % of people do you think have seen no more of the article than where the paywall kicks in on mobile? That’s probably also not doing you any favors.
Bad timing. I’m sure @caseynewton.bsky.social would have told you that.
he actually told me to publish it specifically at the worst possible time, he's always doing that
He's a real rascal that way
You guys make a great team.
you're a fucking idiot