Hacker News | new | past | comments | ask | show | jobs | submit | vermilingua's comments

Finding it hard to compete with the rigorous ICE recruitment standards?

What would this have to do with ICE? Isn’t that DHS?

I think it's a (sarcastic) joke that there are no requirements.

Same recruitment pool (like same "hiring pool" / market segment). Therefore, competing.

I think they mean (sarcastically) that ICE is sucking up all the qualified candidates.

>Run: 1.5-mile run in 14 minutes 25 seconds or less

Hilarious.


I'm not sure if you're saying this as somebody who never runs, or somebody who runs so often that you've forgotten baseline levels. That's a very significant hurdle for most people. I'm in very good shape strength wise, but I'd almost certainly need to do some significant training to meet that since I just never run.

I run for weight loss and general health and at one point had a BMI in the mid-thirties. Back then I was able to make that pace for sessions of 30-40 minutes on most days. I'm not suggesting it's easy, but I would expect someone in ICE to easily outrun an obese person.

Genetics play into this far more than we used to realize. It depends on things like how good your body is at taking in and utilizing oxygen, the specific composition of fast-twitch vs. slow-twitch muscle fibers you have, your body's ability to control and regulate lactic acid, etc.

I used to work around some very "high level" athletes. There is definitely a body type, pretty BMI-heavy, that can seemingly run for days. The downside is that they often struggle with sprints or H.I.I.T.-type workouts.


Jogging for 14 minutes at a 10 min/mile pace is so far removed from any high-level athletics that genetics is no excuse here. Being unable to do this is being completely out of shape.

At run clubs, I see practically every novice join and do 3 miles / 30 mins at 10min/mile pace on their very first day or within 2-3 attempts.


> Being unable to do this is being completely out of shape.

Really not advocating for the time.

It's quite slow compared to most US military standards.

I'm simply talking about how much I love seeing visibly heavy "fatbodies" who sometimes have 5x the running cardio of, say, a 5'8" 150 lb male who looks like he could run circles around the heavier individual.


That's about an 8:30 mile, scaling for the fact that it's harder when you have to cover more distance... seems pretty reasonable to me as a fitness baseline for the army. I would struggle to make that now, but with one month to prep I could clear it.

Am I crazy? Isn't it 9:40 pace?

I don’t think someone completely untrained can do 10’ / mile.

Depends on what you mean by untrained. I think most men who exercise regularly and don't carry a ton of extra bodyweight, even with zero running, could bang out 1.5 miles at 9:40 pace. Couch potatoes, no.

"most men who exercise regularly and don't carry a ton of extra bodyweight, even with zero running" - what's that supposed to mean? How do they exercise then when they are not running?

I don't think many people who don't exercise running can do 1.5 miles in 9:40.


> what's that supposed to mean? How do they exercise then when they are not running?

Any other sport? Cycling, swimming, rowing, ball sports, team sports, weightlifting, ...

> I don't think many people who don't exercise running can do 1.5 miles in 9:40.

It's 1.5 miles at a 9:40/mile pace; 14:25 total. Much easier than the 6:30 pace you're imagining.
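The pace arithmetic being debated in this thread can be sanity-checked in a few lines (a throwaway sketch; the 1.5-mile distance and 14:25 cutoff are the figures quoted upthread, and the function name is just for illustration):

```python
def pace_per_mile(total_seconds: float, miles: float) -> str:
    """Format the per-mile pace as M:SS for a given total time and distance."""
    sec = total_seconds / miles
    return f"{int(sec // 60)}:{int(sec % 60):02d}"

# 1.5 miles in 14:25 -> 865 seconds total
total = 14 * 60 + 25
print(pace_per_mile(total, 1.5))  # -> 9:36, i.e. roughly a 9:40/mile pace, not 8:30
```

The same function shows a 29-minute 5K (3.1 miles) works out to a similar ~9:21/mile pace, which is why "you're only running half of that" is the relevant point.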


It's 1.5 miles in 14:25, I think most people can handle that. There are plenty of ways to exercise that aren't plain running. Biking, skating, swimming, Tai chi...

A 29 minute 5K is not trivial

You're only running half of that.

Good riddance to bad trash. To me, this idea represents the absolute worst of the AI wave (out of a lot to choose from): a corporate-controlled endless stream of the feelies to keep people plugged in and scrolling for nobody’s benefit except those in control of the output. If “entertainment” can be produced algorithmically at a volume and level of quality that the masses find attractive, it’s only a matter of time before bad (worse?) actors take control of it to start highly targeted campaigns of influence, far worse than what we’ve already seen.

I'm having trouble understanding this. There were some very funny videos, created by people with a great sense of humor, and I happen to enjoy laughing, and I don't feel bad about that. I always saw it as the Vine of AI.

For a litmus test of your perspective, try using Sora. Try to make a video that makes someone genuinely laugh. Sora doesn't prompt itself; human creativity and humor are still required.

Sure, it was moderated to heck, like all models attempting to avoid PR disasters (see Grok), but, just as with Youtube and broadcast TV, there's still a corporate friendly surface area that excludes porn, gore, etc, that people can enjoy. And yes, people like different things.


I feel like taking in GenAI content, even if it makes me laugh, probably does something bad to my brain. It looks like real life, but the physics is just wrong in ways that range from obvious to very subtle. I don’t want to feed my brain videos of things that look photorealistic but do not depict reality, that just seems foolish somehow.

Like, imagine if you watched a bunch of GenAI videos of cars sliding on ice from the driver’s perspective. The physics is wrong, and surely it’s going to make you a worse driver because you are feeding your internal prediction engine incorrect training data. It’s less likely that you’ll make the right prediction in real life when it counts.


Do you feel the same about special effects in professionally produced media?

I was thinking about this while typing. I don’t really care about classically animated content; it’s generally not trying to be indistinguishable from real life and I don’t feel like my brain trains on it.

But I think I do have similar feelings about special effects. A difference is that special effects tend to depict scenarios very outside of the envelope of normal experience, so probably not very damaging if my model of “what does a plane crash look like” is screwed up.

Though some effects probably are damaging - how many people subconsciously assume cars explode when they are in an accident? A poor mental model of the odds of a car exploding could cause you to make poor real-life decisions (like moving someone out of a wrecked car in a panic instead of waiting for EMS, risking spine/neck injury)


If it worked this way, we could get good at golf by watching TV, at writing songs by listening to the radio, or at math by watching 3b1b. But it doesn't - we don't learn that way, for better or worse.

I agree with rogerrogerr, and your comparisons don’t make sense to me. Getting good at complex motions and understanding theory is far different than building a simple model of cause and effect in the real world.

Most people can’t explain the physics they see, but they can deduce enough to be able to predict the effects of physical actions most of the time.


That's not a great comparison. People absolutely do learn by watching, especially when they do so actively.

Your counter-examples have the property that most of what you need to learn is absent from the media being watched, which makes the observation "obviously" true, but they ignore the impact of media when it's properly combined with other pieces of information. To compare with the mental models being discussed, you'd have to actually consider the effects you're writing off as negligible; for something like a world model, which we've only ever learned by observation and which doesn't depend on much specialized knowledge, those effects might be much more impactful.


But you do get good at driving by playing realistic driving games.

To your point about cars - such an expectation could well save your life now that there are so many EVs on the road. You do not want to hang out in one of those after a collision. Regardless, I agree that it's probably a bad idea to instill defective mental models in people.

Eh, the stats don’t seem to support EVs being terribly explosion-prone either. In comparison to gas cars, maybe, but both are very safe in absolute terms. Harder to extinguish if they do catch fire, but I think if I came upon a fresh accident and there’s no immediate signs of a battery fire (airbags smoke, it’s normal), I would still leave the victim in the car seated until someone trained shows up.

Sure, be ready to get them out, and if they’re trapped and it’s going to be a while until fire shows up start working on that. But my mental model is that for any road legal car that is not currently on fire, there is a higher chance you’ll cause harm by rashly moving a victim than that a victim will be suddenly consumed by an enormous Hollywood style conflagration.


The likelihood or lack thereof is not the problem. My mental model might be off because it largely isn't based on EVs but I've seen plenty of videos of e-bikes and more generally cheap lithium batteries going up in flames and I don't think it's at all comparable to a pool or stream of gasoline catching on fire. The issue is how rapidly it develops since it doesn't require an external oxidizer which is exactly the same as a firework.

Media has warped people's mental models of what car wrecks are like at different speeds, being stabbed, being shot, drowning, seizures, falls from different heights, falls into water, giving CPR, when it is/isn't appropriate to give CPR, appropriate responses to natural disasters, etc.

When I watch a film, I know it is fiction and special effects. But most of the fake AI-generated videos are being passed off as real on social media. It is exhausting (and increasingly difficult) to analyze every video on my feed to try to figure out if it's real.

I feel like people do sometimes have a warped sense of reality from consuming too much media, e.g. porn.

Not op but if I’m being honest, I don’t feel as if that’s the case until I see a film whose special effects are limited to mise en scene and matte paintings and then I always have this overwhelming feeling that we’re all missing out.

Films on film using in camera effects are still made on occasion but they’re art films for niche audiences.

But we’ll never get another Ben Hur. And that doesn’t sit well with me even if society can’t yet fully explain why.


I'm not OP, but I do get annoyed by bad car physics in movies.

The worst offenders are brake sounds not correlating to the car movement, engine sounds not correlating to the car's acceleration, nonsensical car deceleration while braking, and steering wheel not correlating to car steering.


Yes, I think consuming too much media, and creating too little is bad for the brain.

Effort makes a great deal of difference for me. The effort itself, the fact that it's there.

I am willing to suspend disbelief for Terminator 1, even when it's clearly a doll's head in the shot.

But it is insulting to feed slop to your audience; it shows you didn't even try.

I have actually seen one slop video that I kinda enjoyed: it was obvious that great effort was put into the script and details, just as it was obvious it wasn't being passed off as the real thing.


Special effects make most people think that they could jump farther, or from higher ground, than they actually can. And most people think that all cars explode in massive fireballs.

Are there energy consumption differences between CGI and AI?

We also need to take into account that CGI only consumes energy when a particular video is actually created.

"AI" consumes energy before the user has even started (during training).

That is on top of the comparison for each particular case.


Right idea, but the application is incorrect.

Model training is similar to the creation of the cgi for the movie. Both happen before anyone consumes the output, and represent the up front cost for the producer.

Both a movie and a language model can cost tens or hundreds of dollars to produce.

In both cases additional infrastructure is needed for efficient usage: movie theaters or streaming platforms for movies, and data centers with the GPUs for LLMs. This is also upfront (capex) costs.

At consumption time, the movie requires some additional resources, per viewing, whether it's a movie theater or streaming. Likewise, an llm consumes some resources at inference time. These are opex. In both cases, the marginal cost for inference/consumption is quite low.


  > Model training is similar to the creation of the cgi for the movie. Both happen before anyone consumes the output
I did not say anything about consumption of the output. Maybe you misread what I wrote, it is about energy consumption.

  > Both a movie and a language model can cost
But we weren't comparing cost of the movie to cost of a language model

  > can cost tens or hundreds of dollars
But we weren't talking about dollars, we were talking about energy.

We're clearly exploring different questions.


And that energy costs money, both at the training/cgi stage and at the inference/consumption stage. It's not even an externality.

CGI renders do use a lot of electricity relative to playing back the movie for individual viewers. It's perfectly analogous.


  > CGI renders do use a lot of electricity relative to playing back the movie for individual viewers. It's perfectly analogous.
I've literally laughed out loud after reading this.

I can't believe you're stretching this in good faith.

But if you are - well, you certainly have a unique perspective.


that's just empty consumption, there's nothing that makes art great in algorithmically generated content except at the shallowest of levels. I mean no disrespect, but that is extremely sad and all too indicative of the instrumental reasoning of the industrial milieu. It's about 2 steps above marrying a sex doll.

There's such a fascinating divide on this.

I am 100% with you. I didn't ever _use_ Sora, but some of it trickled down to me (mostly through Instagram reels). I think it's amazing that we have such great new tools to express ourselves, and that we are trying out new platforms, paradigms, and approaches.

Is there money involved? Absolutely, but I don't fault companies for trying to earn their keep.

It 100% takes work to use these tools in the right way to make something funny. Ask an LLM to make them on their own and they'll hardly evoke laughs (I'm sure that'll change too, though).


Yes, I don’t doubt that there was some very high quality human-moderated output. The point is that you likely can’t accurately distinguish the human-moderated output from the entirely generated slop (especially as it’s being trained and refined on the rest of the content), and so what chance does the average non-technical person have?

Then, when they start ratcheting the slop ratio up (likely under the justification of keeping up with declining creator engagement), the consumers get more and more adjusted to a pure-slop feed, until bingo you have a direct line into the midbrain of millions of consumers/voters/parents/employees/serfs.


> created by people with a great sense of humor

The real problem with AI slop is not the AI. It's the people. It's always the people.

The clickbait has started fooling people more than before, with the latest videos being halfway believable (except for the circumstances of the videos).

Technology enables the most malicious and self-interested, and systems need to be adjusted to not reward that, or users need to become wise to it.

With the amount of early 2000's style clickbait ads still around, I'm not sure we ever vanquished Web 1.0 style clickbait, it just got crowded out by ever more sophisticated forms.


There were some genuinely very, very funny videos made on there. A lot of slop, but some definite nuggets of gold.

They are not getting rid of Sora because people won't want AI videos lol. They're getting rid of Sora because they're so behind in this realm. AI videos online are mostly made with Chinese models, and the situation has been like this for more than one year.

The percentage of AI videos over the internet will certainly not decrease after Sora is gone.

The question is when will Chinese coding models have their Seedance moment and squeeze Opus/Codex out of market. It weirdly feels impossible and inevitable at the same time.


It's no surprise Chinese models will eventually win the video generation race, since they are far less censored and not affected by the crazy copyright system.

It's much easier to make Qwen animate Tank Man than it is to make any Western model generate indigenous people dancing, because, cough cough, naked skin is baaaaad. Except this Musk one, which will nonetheless be affected by all the copyright mess.


This market will not be abandoned, and other tools already exist:

https://klingai.com/global/

https://aistudio.google.com/models/veo-3

https://runwayml.com


Already being used in that manner; here's a small glimpse - every video on this page is AI and an advert: https://www.tiktok.com/@livbennettstudies

I'm kinda surprised about how hard GenAI fell on its face in the arts (including SD and other video generators). It seemed so promising, when SD came out and it turned out the model fit on people's GPUs. People started making LoRAs, hyperparameter tunes, mixing models, training models for representing characters, ComfyUI and Controlnet came out yada yada.

Then it became synonymous with slop, lowest common denominator content made without care, instead of a tool for enabling people willing to put in a varying level of skill, kinds of expertise and effort, like coding models did.


You’re most likely consuming a large quantity of genai art without even knowing it.

Sure, and I'm also consuming a gigantic quantity of GenAI art while knowing it, completely against my will. Which like OP has soured my overall perception of it.

The existence of inoffensive use cases doesn't invalidate anything OP is saying, that's just a natural human reaction to overexposure of a technology.

In the span of less than 2 years, pretty much everywhere I look has been inundated with zero-effort spam, manipulated imagery, etc that has had a net-negative impact on my life. Even if it may also be helpful for a small business making a flyer or whatever without actively making my life worse, that doesn't really move the needle on my overall attitude.


  > manipulated imagery
And we thought iPhone camera videos were bad... (they were (and are) though)

Sure, and there’s lot of great man made art that I don’t enjoy quite as much because I can’t get the question out of my head, is this even a photograph someone took, is this even a painting someone bothered to paint. I get the sense that there are a lot of folks that just want the end result judged on its own merits, like, is it a funny vine or not, is it a compelling beautiful digital painting or not, but I want to know whether there’s a person behind it, expressing themselves, growing as an artist etc, or if the picture on my phone is totally divorced from any humans actual desire to say something. Having them mixed in the same pot just makes me less hungry.

This is where curation matters, eg in a newsroom or gallery. Provenance is their job, and if done well, can connect people in a way that an unfiltered social media firehose can't.

Yeah, fair enough. I'm hoping I can encourage the folks in my life who are not adept at telling truth from fiction to just stop looking at any social media firehose.

It’s so dumb that Zuck and Elmo want to inject^H^H^H^H^H^Hrecommend content into these people’s feeds while they’re checking in on their nieces and nephews and local book clubs.


I never understood what people are trying to say with comments like that.

- You're making an unsubstantiated claim

- personally targeting someone you don't even know

- in order to celebrate the presumed success of a mass fraud?


You're conflating mainstream popular opinion and professional usage. They're entirely separate. The obvious low-effort pieces get lambasted, while the high-effort work doesn't draw attention. The public perception right now has little to do with technical capabilities; it's driven almost entirely by social factors.

I feel like it was inevitable that it would become slop. The models are impressive, but they can really only get you 80% there.

If you want a video of a dancing cat, sure, you can get that. But if you want an orange tabby doing the moonwalk or the robot, that's a lot harder. You'll have to generate dozens of videos and fine tune prompt incantations before you get what you want, if you even do before you hit a rate limit or you get frustrated. If you want something specific and unique and interesting, you still need to put in a lot of effort. Therefore, most videos that people actually make and share are pretty generic.

I think most art models have subtle tells and limitations similar to textual LLMs too, just a little harder to recognize. Certain ideas and imagery will be easier to generate and more likely to fill in the gaps of your prompt. The technology is fascinating compared to the nothing that we had before, but it still has real limitations - try to get it to generate an Italian plumber wearing a red hat that isn't Mario, for example.

All that to say, the trend towards low effort, repetitive, and uncreative results is inherent in the medium. Most users will prompt for a generic dancing cat and get something resembling a cat doing something that resembles a dance and that will flood social media. The few people going for a more creative and specific artistic view will be frustrated by the constant rolling of dice, and if they do make something they work hard on, it will be drowned out by the low effort slop posts. And if you're frustrated by those limitations and want to make something intentional, then you'll eventually gravitate towards Photoshop or Blender where you can actually craft the exact thing you want.

These models do not really "democratize art", they just make it really easy to generate visually interesting noise. Once the novelty wears off, the limitations are apparent. Art has always been democratized anyway - Blender and Krita are free, and pencils are cheap.


What the masses have found entertaining has always been referred to as slop, so I am not sure it matters.

Novels, cinema, television, comic books, etc.

They were all considered careless skill-free slop at some point.


relax dude

Not only is it gambling, it has the full force of the industry that built the attention market behind it. I find it extremely hard to believe that these tools have not been optimised to keep developers prompting the same way tiktok keeps people scrolling.


It is really truly incredible that this mess of microscopic meat plumbing encodes everything we see, think, and do. Terrifying and amazing all at once.


Absolutely not


[dead]


Not GP. On the one hand, I feel the sentiment. I lost my dad too when I was in my 30s; we knew it would happen, but it came sooner than expected and rather suddenly. He also didn't leave much for us apart from his things.

And my SO struggles to recall details from long ago, and always appreciates when I reminisce from the time we met and so on. And I manage most of the finances and such, she'd be lost without some guide and credentials.

My first concern is that most SaaS companies have a track record for not sticking around. Will this be operational in 10 years, let alone 20 years?

Another is privacy. The things I want to say might be very private, either personal or, like I mentioned, credentials. SaaS companies have a rather poor track record when it comes to privacy, from data breaches to outright selling data. I'm wary of trusting a party I know nothing about.

There are some other minor points but those are the ones that made me immediately go "yeah but no".


Please don't, can't we just have one thread without this shit


It baffles me that so many people are so willing to pay for the privilege of training their own replacement.


But are you though?

From where I stand this thing is going to provide great leverage to those who don’t simply just write code. I personally doubt the thing will ever get to a place where it can be trusted to operate alone - it needs a team of people and to go super fast you need more people.

Moreover, the price won’t be high due to competition.

I’ve changed my view on LLMs as being good, as long as competition is fierce.


Looks like an LLM-generated comment


It reads like a human to me. But I understand being suspicious of an account that’s 40min old


[flagged]


Not sure how hacker news can effectively protect against what looks like fake users posting LLM generated comments :(


I am pretty sure non-technical people are not going to be able to compete in any meaningful way with technical people.


Most modern TVs (not to mention SBCs you could just connect with HDMI) have bluetooth/USB for keyboard & mouse


I believe that’s a matinee

