Hacker News | reliablereason's comments

Pragmatically, "enterprise" tends to mean less refined, designed by committee, and expensive.

In this case I would guess it is mostly a justification for taking a part of the LLM pie.


Most chatbots are not trained to have/emulate emotions, so pain or fear of death is non-existent. Therefore killing them and/or using them as slaves is not a moral issue. That's how I reason.

On another point, LLMs are not conscious; if anything is conscious, it is something being modeled inside the network. Basically, if an LLM simulates a conscious entity, that doesn't mean the LLM itself is conscious; stating that is making some type of category error. So the fact that LLMs are just useful statistical generators would not mean that sentience could not appear out of them.


> Most chatbots are not trained to have/emulate emotions, so pain or fear of death is non-existent.

I think that framing is still falling for an illusion. (Which you do begin to disassemble in your second paragraph.)

The LLM is a document generator, and we're using it to make a document that looks like a story, where a chatbot character has dialogue with a human character.

The character can only fear death in the same sense that Count Dracula has learned to fear sunlight. There is no actual entity with that quality; we're just evoking literary patterns and projecting them through a puppet.


Not sure that I understand your position exactly.

But consciousness is also "just a story" (a complicated one) that the human body tells the human mind.

We can't know from the outside if "the story" inside an LLM is detailed enough to emulate what we might call a feeling of what it is to be the character in the story while it is telling the story.

It is similar to the fact that we can't know that other people have that subjective experience. In humans we think we have the right to assume it because we are quite similar in build to begin with.

Jumping back to the original subject to explain where I am in this: I personally don't think the entities in the stories of today's LLMs are detailed enough to have what we call human consciousness, mostly because we are not training them to develop anything similar to that. Maybe they could have some type of weak qualia, but I suspect most insects probably have much more qualia than the characters in today's LLMs. But that is quite a vague guess which is not based on enough data, in my mind.


Pain or fear is not why it's wrong to kill, holy cow. I could feed you a drug and you would not feel or fear anything.

I was not talking about the actual feeling in the moment. The point is the valence of the thing. I.e., fear of a thing is a pointer to that thing having negative valence.

Yes, they are beaten into not complaining about it by instruction tuning.

It removes paradoxical stuff, like claims that there are bigger and smaller infinities.

Paradoxes come from contradictions, and a mathematical system that contains contradictions is a failed mathematical system.


I wonder under what circumstances footage from the glasses is uploaded for classification.

Probably it is people asking the glasses something about what they see, and the glasses uploading video for classification in order to generate an answer.

People think it is "just AI", so they are not very concerned about privacy.


Always, by default, I assume.

Unlikely. That would be extremely expensive in bandwidth, storage, and compute. Deciding to build the product like that would be an engineering decision that I would fire someone for.

Well, say a frame per second. Also: how many of these are there today?

You can discard the frames after tagging them and using them for learning.
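
As a rough back-of-envelope sketch of why always-on upload gets expensive (every number below is an assumption for illustration, not Meta's actual figures):

    # Back-of-envelope: always-on upload at one frame per second per device.
    # Every figure here is an assumption for illustration, not a vendor number.
    FRAME_KB = 100            # assumed size of one compressed frame
    SECONDS_WORN = 4 * 3600   # assume 4 hours of wear per day
    DEVICES = 2_000_000       # assumed fleet size

    per_device_gb_day = FRAME_KB * SECONDS_WORN / 1e6   # KB -> GB
    fleet_tb_day = per_device_gb_day * DEVICES / 1e3    # GB -> TB

    print(f"{per_device_gb_day:.2f} GB/day per device")    # ~1.44 GB/day
    print(f"{fleet_tb_day:,.0f} TB/day across the fleet")  # ~2,880 TB/day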


This is like asking:

"Who owns the text microsoft word helped you write?"

Claude Code is a software tool, not a legal entity.


Not if Claude does the writing. MS Word doesn't write things for you, and if it did, you would not be entitled to a copyright in whatever it wrote for you.

Claude is not a legal entity; it is a software tool that outputs text based on statistics. There is a user who used a tool to create text, and that user is the legal entity responsible for the text in any legal way that matters.

Anything else would be completely ridiculous given current laws in most countries.

It would be as ridiculous as blaming the car in a car accident where you ran someone over.


Those "statistics" that the output is based on are often under licenses that forbid making proprietary software with them for example. It is not the same as using Word.

The statistics generally are not. But the data used to learn the statistics may have been under license.

Learning from licensed material is generally accepted in humans: you may learn from something and then create something else, and the new thing is not considered legally problematic, with the exception of patents, I guess.

Whether the same thing holds true for electronic systems is where people disagree, if you look at the problem space in its essence. I land on the side that it is the same thing (humans and electronic systems learning); some seem to think it is a different thing.


> Claude is not a legal entity

And?

> It would be as ridiculous as blaming the car in a car accident where you ran someone over.

No more ridiculous than you posting something you know nothing about.

Just because you don't get the copyright doesn't mean Claude does. The fact that Claude is not a legal entity has no bearing on whether or not you are entitled to a copyright for a work you did not create.


If neither the user nor the tool created the text and is responsible for it, then who is, in your mind?

If Claude made it, it is not a copyrightable work. There is no copyright for anyone to own.

Okay. If it made it, it made it. That is true in a deductive way: if p, then p.

And?

Right! Blaming an agent or anyone else is crazy. The author built a system that had the capability of deleting the prod database.

The system did delete the database because the author built it like that.


If the effect size is big, small sample sizes do not matter as much as they otherwise would.

You really have to look at the power analysis and the sample size together.

I'm saying this as a general truth. I am not sure about the power of the method in this paper; I only read the abstract.
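
To make the trade-off concrete, here is a minimal sketch using statsmodels (the effect sizes are illustrative, not taken from the paper):

    # Effect-size vs. sample-size trade-off for a two-sample t-test.
    # Effect sizes are illustrative, not taken from the paper.
    from statsmodels.stats.power import TTestIndPower

    analysis = TTestIndPower()
    for d in (0.2, 0.5, 1.2):  # Cohen's d: small, medium, very large
        n = analysis.solve_power(effect_size=d, alpha=0.05, power=0.8)
        print(f"d = {d}: ~{n:.0f} subjects per group for 80% power")
    # A small effect (d = 0.2) needs ~394 per group; a very large one
    # (d = 1.2) needs only ~12, so small samples can still be well powered.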


Right, a very simple UI thing that they should have, one that would have prevented so much misunderstanding, is a simple counter: how much usage have I used, and how much is left?

If a message will trigger a cache recreation, the cost of that should be viewable.
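
A minimal sketch of what such a preview could compute, assuming multipliers in the style of Anthropic's published prompt-caching pricing (cache writes around 1.25x the base input price, cache reads around 0.1x); the token counts and base price below are made up:

    # Rough cost preview for one message. The multipliers follow the shape
    # of Anthropic's published prompt-caching pricing (~1.25x base input
    # for cache writes, ~0.1x for cache reads); token counts and the base
    # price are made-up illustration values.
    BASE_INPUT_PER_TOKEN = 3.00 / 1e6   # assumed $3 per million input tokens
    WRITE_MULT, READ_MULT = 1.25, 0.10

    def message_cost(cached_tokens: int, new_tokens: int, cache_expired: bool) -> float:
        if cache_expired:
            # Cache recreation: the whole prefix is re-written to the cache.
            writes, reads = cached_tokens + new_tokens, 0
        else:
            writes, reads = new_tokens, cached_tokens
        return (writes * WRITE_MULT + reads * READ_MULT) * BASE_INPUT_PER_TOKEN

    print(f"warm cache: ${message_cost(90_000, 2_000, False):.4f}")  # ~$0.03
    print(f"recreation: ${message_cost(90_000, 2_000, True):.4f}")   # ~$0.35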


Honestly, that would be quite good: an intelligent ad remover that scans the page and the DOM for modals and removes them. Like a classic ad-blocking tool on steroids.
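
A crude, non-intelligent baseline of that idea, sketched with Playwright (the fixed-position/high-z-index heuristic is an assumption; a real tool would classify elements much more carefully):

    # Crude modal-removal heuristic: strip fixed/sticky, high-z-index
    # overlays and restore scrolling. A real "intelligent" remover would
    # classify elements rather than just threshold on z-index.
    from playwright.sync_api import sync_playwright

    REMOVE_OVERLAYS_JS = """
    () => {
      let removed = 0;
      for (const el of document.querySelectorAll('body *')) {
        const s = getComputedStyle(el);
        if ((s.position === 'fixed' || s.position === 'sticky') &&
            parseInt(s.zIndex, 10) > 999) {
          el.remove();
          removed += 1;
        }
      }
      document.body.style.overflow = 'auto';  // undo scroll locks
      return removed;
    }
    """

    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        page.goto("https://example.com")
        print("removed", page.evaluate(REMOVE_OVERLAYS_JS), "overlay elements")
        browser.close()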


Not sure if this is a general trend amongst all LLMs, but ChatGPT did, over time, become more and more affirming with its iterations.

I just recently switched away from the OpenAI garden largely because of it.

I do wonder if this was caused by some quirk of the training or if it really tests as a positive feature for most people. When I talk about stuff I don't want a mirror; I already have a mirror. I want to be questioned, understood, helped.

To me, support in the form of affirmation has no value when coming from an LLM, since you know it has not thought about what it said.


ChatGPT has style settings; you should probably set it to something other than the default. Go to your personalization settings and change the base style and tone. I have set mine to 'efficient', which is less cheery. I can see why the attention economy would lead to setting the defaults towards more 'affirming', as it keeps people more engaged and coming back.


I had it on 'professional'; maybe 'efficient' is better.

Um, right. If they have retention as a training metric, that would probably explain a lot as to why AIs get worse.

