Hacker News

Why is this voted down? It's true. How can this be in actuality prevented?

Changing times. Not even imagery or art can be trusted. A whole range of creativity-based human occupations is approaching the border of redundancy.

You can argue that they may never arrive at that border, but there is no arguing against the fact that this is the first time ever in the history of humankind that artists are approaching that border of redundancy.

And what does this mean for every other human occupation? The AI may be primitive now, but this is still just the beginning, and it is certainly possible that we are approaching a future where it produces things BETTER than what a human produces.




> How can this be in actuality prevented?

By abolishing copyright and making crediting the author a habit. There is no way copyright can survive AI. There is no way human civilization will let 200-year-old legal practices hold back technological advancement.


> There is no way human civilization will let 200-year-old legal practices hold back technological advancement.

Human civilization has kings and queens, and laws based on the sayings of ancient prophets.

Instead of trying to figure out what "human civilization" will accept, figuring out what current wealthy capital-owners will accept will be more predictive.


Even if wealthy capital-owners get AI banned where they can, the countries where they don't hold the power will get ahead by not banning it.


I think there's a strong case for arguing that we may actually completely ban AI and any sufficiently strong ML algorithm. AI hasn't even realised a millionth of its potential yet, and it's already running rings around humans (cf. algorithmic feeds driving political conflicts online). I think potentially it will cease to be tolerated and be treated a bit like WMDs.


> we may actually completely ban AI and any sufficiently strong ML algorithm.

Who is 'we'? The US, or maybe the Anglo-American countries, may do it. Many countries won't, and those who don't will get ahead of everyone else.


At what cost? Perhaps the WMD comparison continues to work here.


How are you going to un-invent it? Will this involve confiscating GPUs or criminalizing owning more than one of them? The thing is much like the problem of gun control in a warzone: weapons are just not that hard to make, especially if you have a surplus of parts.


First time I've heard that line of thinking. How and when do you think that'll happen?


Okay, so there's a sense in which AI essentially destroys knowledge culture by performing a reductio ad absurdum on it.

Examples:

1) Social content. We start with friend feeds (FB), they become algorithmic, and eventually are replaced entirely with algorithmic recommendations (Tiktok), which escalate in an AI-fuelled arms race creating increasingly compulsive generated content (or an AI manipulates people into generating that content for it). Regardless, it becomes apparent that the eventual infinitely engaging result is bad for humans.

2) Social posting. It becomes increasingly impossible to distinguish a bot from a human on Twitter et al. People realise that they're spending their time having passionate debates with machines whose job it is to outrage them. We realise that the chance of someone we meet online being human is 1/1000 and the other 999 are propaganda-advertising machines so sophisticated that we can't actually resist their techniques. [Arguably the world is going so totally nuts right now because this is already happening - the tail is wagging the dog. AI is creating a culture which optimises for AIs; an AI-Corporate Complex?]

3) Art and Music. These become unavoidably engaging. See 1 and 2 above.

This can be applied to any field of the knowledge economy. AI conducts an end-run around human nature - in fact huge networks of interacting AIs and corporations do it, and there are three possible outcomes:

1) We become inured to it, and switch off from the internet.

2) We realise in time how bad it is, but can't trust ourselves, so we ban it.

3) We become puppets driven by intelligences orders of magnitude more sophisticated than us to mine resources in order to keep them running.

History says that it would really be some combination of the above, but AI is self-reinforcing, so I'm not sure that can be relied upon. We may put strong limits on the behaviour and generality of AIs, and how they communicate and interact.

There will definitely be jobs in AI reeducation and inquisition; those are probably already a thing.


> We become puppets driven by intelligences orders of magnitude more sophisticated than us to mine resources in order to keep them running

What do you think corporations are?


Well, typically not cleverer than most people, until you combine them with AI.


I am beginning to feel like the Butlerian Jihad may have had the right idea.


Because it's a legal thing, not a practical thing. If they're preparing a lawsuit, they want to show the court that they forbid people from uploading AI-generated images. It's a rule without real enforcement.


>Why is this voted down? It's true. How can this be in actuality prevented?

It's no different than submitting a plagiarized essay to a teacher. Yeah, it's often hard to detect/prevent such submissions and you could even get away with it. But if you get caught, you'll still get in trouble and it will be removed.


This is different from plagiarism, because in plagiarism there is an original that can be compared against. There is a specific work/person against whom an infraction was committed.

In AI-produced artwork, the artwork is genuinely original and possibly better than what other humans can produce. No one was actually harmed. Thus, in actuality, it offers true value, if not better value than what a human can produce.

It displaces humanity and that is horrifying, but technically no crime was committed, and nothing was plagiarized.


You asked why someone was downvoted for laughing about how AI-generated content is hard to detect/can still be submitted, and then asked, "How can this be in actuality prevented?". The comparison I was making was the comparison to the process of submitting plagiarized content, not whether or not AI-generated content and plagiarized works are the same thing.


>No one was actually harmed. //

In a couple of years, unchecked, many artists will be out of work and companies like Getty will be making a lot less revenue. That's "harm" in legal terms.

On a previous SD story someone noted they create recipe pictures using an AI instead of using a stock service.

These sorts of developments are great, IMO, but we have to democratise the benefits.


That's like me creating a car that's faster, better, and more energy efficient than other cars.

Would it be legal to ban Tesla because it harms the car industry through disruption? AI-generated art is disrupting the art industry by creating original art that's cheaper and, in the future, possibly better. Why should we ban that?

By harm I mean direct harm: theft of ideas and direct copying, not harm through creating a better product. Morally, I think there is nothing wrong with this.

Practically we may have to do it, but this is like banning marijuana.


Yeah, the law isn't morality.

>Why should we ban that? //

Well, we have copyright law, so I was starting there, though I'd personally be pretty interested in a state that made copyright very minimal. The question is what kind of encouragement we want in law for the creative arts. I doubt any individual would create the copyright law that lobbying has left us with, but equally I think most people would want to protect human artists somewhat when AIs come to eat their lunch and want to make a plate out of the human artists' previous work [it's like a whole sector is being used to train its replacement, but they're not even being paid whilst they do it].


Copyright doesn't apply. Nothing was copied.

The AI was trained in the same way you train your brain when you look at something. Nothing is copied, nothing immoral was done, and no law was broken.


When I look at something I create an image of it on my retina. When a computer "looks" it creates an image somewhere. The former is allowed; the latter comes under copyright scrutiny. Things like caching images to show you a webpage have been addressed by copyright law--through precedent--and this will be similarly addressed.


The image is not saved in a neural network. No identical image can be extracted from the network.
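A back-of-the-envelope calculation supports this, assuming the commonly cited figures of roughly 860M parameters for Stable Diffusion v1's UNet and roughly 2B image-text pairs in LAION-2B (both approximate):

```python
# Rough estimate: how many bytes of model weights exist per training image?
params = 860_000_000             # ~860M parameters (Stable Diffusion v1 UNet, approx.)
bytes_per_param = 4              # fp32 weights
model_bytes = params * bytes_per_param

training_images = 2_000_000_000  # ~2B image-text pairs (LAION-2B scale, approx.)

bytes_per_image = model_bytes / training_images
print(f"{bytes_per_image:.2f} bytes of weights per training image")  # ~1.72
```

At under two bytes of capacity per image, the weights cannot literally contain the training set, though researchers have shown that heavily duplicated training images can sometimes be partially regenerated, so "no identical image can be extracted" is close to true but not absolute.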


Yes, it's a derivative that relies on use of the copyright works. You can't create a NN without using a copy of the work, so copyright applies -- there might be a Fair Use exception in the USA, but the outputs compete with the original creators of the works, and so IMO courts are likely to rule it non-Fair Use.


>Yes, it's a derivative that relies on use of the copyright works.

Almost every idea on the face of the earth is a derivative of something else. This includes ideas from a human brain, so applying such laws is inconsistent.

>but the outputs compete with the original creators of the works

All art competes with other art. And all art is derivative of other art and other things.




