dannersy's comments | Hacker News

You're cherry picking. The open world games aren't as compelling anymore since the novelty is wearing off. I can cherry pick, too. For example, Starfield in all its grandeur is pretty boring.

And the users may not care about code directly, but they definitely do indirectly. Less optimized, more off-the-shelf solutions have brought a stark decrease in performance, while allowing game development to be more approachable.

LLMs saving engineers and developers time is an unfounded claim, because immediate results do not mean a net positive. Actually, I'd argue that any software engineer worth their salt knows intimately that more immediate results are usually at the expense of long-term sustainability.


Starfield is boring because of the bad writing, and because they made a space exploration game where there are loading screens between the planet and space and you don't actually explore space.

They fundamentally misunderstood what they were promising, it’s the same as making a pirate game where you never steer the ship or drop anchor.

You can prove people are not bored with the concept, as new gamers still start playing Fallout: New Vegas or Skyrim today despite them being old and janky.


This is why Sid Meier's Pirates [0] remains such a great game.

It was really a combination of mini-games:

- you got to steer a ship (or fleet of ships) around the Caribbean

- ship to ship combat

- fencing

- dancing (with the Governors' daughters)

- trading (from port to port or with captured goods)

- side quests

Each time I played it with my oldest, it felt like a brand new game.

https://en.wikipedia.org/wiki/Sid_Meier%27s_Pirates!


Played this as a kid, genuinely great gameplay loop and felt very immersive at the time.

I think my point stands. Procedural generation is a tool that usually works best when it is supplementary. What makes New Vegas an amazing game is all the hand built narratives and intricate storylines. So yeah, I agree, Starfield is boring because of the story. But if the procedural vastness was interesting enough to not be boring, then we wouldn't be talking about this to begin with.

Starfield wasn't procedural vastness, though; No Man's Sky is. Starfield was handmade content, then a loading screen, then a minigame, then a loading screen, then a small procedural "instance"/"dungeon", not a vast seamless world to explore.

I'm inclined to say that if Bethesda used LLMs for a story based on known best-selling books, it would be better than the garbage created by so-called "modern script writers".

The same could be said about Hollywood movies and series.

When agenda is more important than fun, books, movies, and games are not a labour of love but neglect.


Yeah I mean, I think procgen is cool tech, but there's a reason we don't talk about Daggerfall the same way we talk about Morrowind.

Agreed.

> Starfield in all its grandeur is pretty boring.

And yet "No Man's Sky" is massively popular.

> any software engineer worth their salt knows intimately that more immediate results are usually at the expense of long-term sustainability.

And any software engineer worth their salt realizes there are hundreds if not thousands of problems to be solved, and trying to paint a broad picture of development is naive. You have only seen 1% (at best) of the current software development field, and yet you're confidently saying that a tool being used by a large part of it isn't actually useful. You'd have to have a massive ego to categorically tell thousands of other people that what they're doing is both wrong and not useful, and that the things they are seeing aren't actually true.


No Man's Sky got better as they were more intentional with their content. The game has more substance and a lot of that had to be added by hand. It is dropped in procedurally but they had to touch it up, manually, to make it interesting. Let's not revise history.

I don't think it has anything to do with ego. There are studies on the topic of AI and productivity and I assume we have a way to go before we can say anything concretely. Software workflows permeate the industry you're in. You're putting words in my mouth, I said nothing about what people are doing is wrong or not useful. I said the claim that generative AI is making engineers more productive is an unfounded one. What code you shit out isn't where the work starts or ends. Using expedient solutions and having to face potentially more work in the future isn't even something that is a claim about software, I can make that claim about life.

You need to evaluate what you read rather than putting your own twist on what I've said.


You said:

> LLMs saving engineers and developers time is an unfounded claim

By whom exactly? If I say it saves me time, and another developer says the same, and so on, then it is categorically not unfounded. In fact, it's the opposite.

You've completely missed the point if you don't see the problem with telling other people that their own experience in such a large field is "unfounded" simply because it doesn't line up with yours.

> we have a way to go before we can say anything concretely

No YOU do. It's quite apparent to me how it can save time in the myriad of things I need to perform as a software developer (and have been doing).


Anecdotal evidence, how scientific of you. When I say it's unfounded, I'm saying it hasn't been proven with actual research and data. So when you ask, "by whom?", that's exactly my point: it is unfounded. That's what the word means; no one has made a claim, backed by data, that AI is making significant waves in productivity. I don't think I've missed the point at all, but it seems I hit a nerve with you, so the conversation is over.

Do I have to explain to another adult (presumably) what the word "unfounded" means? Are you purposely ignoring the hundreds of articles popping up on this site demonstrating the capabilities of these tools? Are they all lying?

Let us not pretend that they won't be used for war eventually. If they cave immediately under pressure, then this is an inevitability.


No offense, but this is the most predictable outcome ever. The software industry at large does this over and over again, and somehow we're surprised. Provide the thing for free or for cheap, and then slowly draw back availability once you have dominant market share or find yourself needing money (ahem).

The providers want to control what AI does to make money or dominate an industry so they don't have to make their money back right away. This was inevitable, I do not understand why we trust these companies, ever.


Because it's easier than paying $50k for a local LLM setup that might not last 5 years.


Well, yes. They know what they are doing. They know when given the option the consumer makes the affordable choice. I just don't have to like or condone their practices. Maybe instead of taking on billions of dollars of debt they should have thought about a business model that makes sense first? Maybe the collective "we" (consumers and investors, but especially investors) should keep it in our pants until the product is proven and sustainable?

It will be real interesting if the haters are right and this technology is not the breakthrough the investors assume it to be, AFTER it is already sewn into everyone's workflows. Everyone keeps talking about how jobs will be displaced, yet few are asking what happens when a dependency is swept out from underneath the industry as a whole if/when this massive gamble doesn't pay off.

Whatever. I am squawking into the void as we just repeat history.


Or the companies can be transparent about their product roadmap. I can guarantee this enshittification was on the roadmap way before we knew about it. They let us operate under false information, that's just weak behavior.


No offense taken here :)

First, we are not talking about a cheap service here. We are talking about a monthly subscription which costs 100 USD or 200 USD per month, depending on which plan you choose.

Second, it's like selling me a pizza and pretending I only eat it while sitting at your table. I want to eat the pizza at home. I'm not getting 2-3 more pizzas, I'm still getting the same pizza others are getting.


I am genuinely interested in hearing why we collectively ditched XMPP. I would love to hear someone who has been in the weeds on the development or even just following closely.

Edit: Seems someone beat me to it with a good reply.


> I am genuinely interested in hearing why we collectively ditched XMPP

We didn't. It was never very popular, and is today more popular than it has ever been.


It wasn't popular? I remember using Pidgin to talk to friends on Google Chat, Facebook, and my work contacts. It was glorious.

I haven’t had a reason to use an xmpp client in over a decade.


Likely you do or have without knowing it. The protocol is used in telecom quite a bit for all sorts of things. Jitsi is built on XMPP. Lots of games use it for chat: League of Legends and Unreal Engine, I believe. XMPP shows up in all sorts of places if you look.


Same! Pidgin was such a great piece of software



Pidgin is a multi-protocol client, not an XMPP client.


Depends if you mean just the technology or using it in the small federated spirit. Google Talk and Facebook Messenger were XMPP all the way through and worked with vanilla XMPP clients. Slack wasn't XMPP but supported it via a gateway until it was dropped.

Not sure how popular the small federation was back then, but I know Mac OS X Server touted an XMPP server and that was a first-class feature of iChat.


Facebook was also a gateway like Slack, but not as good as Slack's gateway.

Google Talk was real and federated XMPP before they killed the product.


Oh, I mean Facebook Chat not Facebook Messenger. Supposedly that was ejabberd.


> Google Talk and Facebook Messenger were XMPP all the way through and worked with vanilla XMPP clients

I remember this, it was great to connect to absolutely every chat platform with bitlbee and pretend that all my chats were just DMs on some irc server somewhere


Forgot to mention: the original WhatsApp was ejabberd under the hood, but of course it was heavily modified and didn't work with regular XMPP clients.


XMPP had a rather bad name: well-known design issues causing message loss, a fragmented ecosystem due to varying implementations of extensions, unsuitability for mobile clients, no synchronization between clients, and no end-to-end encryption. Most of these issues were (much) later fixed by extensions, but Matrix (or Signal, for those who do not require a federated option) was already there, offering E2EE by default.

Even today, E2EE in XMPP is rather inconvenient compared to Matrix due to absence of chain-of-trust in key management.


Sometimes I wonder if the endgame is each person having their own XMPP server for their set of devices. S2S is your E2EE then. Your chain of trust is your existing CA, unlike Matrix, which starts from scratch. XMPP wasn't designed from the start for clients not to trust servers, and the fragmentation of C2S extensions was always a pain.

It's not a bad solution if someone can make it easy, even if it's a managed service that just lets tech-savvy users export it to self-hosting if they want.


Google Talk support for XMPP: 2005-2013

Facebook Messenger support for XMPP: 2010-2015

Jabber.org support for new accounts: 1999-2013

First-class integration with two of the world's largest social networks put XMPP in practically everyone's hands for a time, but when all the major hosts left, network discoverability and typical account longevity dropped drastically. The landscape is bleak today.

And since then, our collective needs and expectations of a chat platform have expanded. XEPs have been developed to bolt much of that functionality onto the base protocol, but that has led to a fragmentation problem on top of the bleak server landscape.

This unfortunate situation might be navigable by a typical HN user, and perhaps we could guide a few friends and family members and promise to keep a server running for them, but I think the chances of most people succeeding with it are pretty slim today.


Facebook never had "first-class integration". It was just a client bridge - you could login into Facebook Chat using your XMPP client, but it was a completely separate network, unlike Google Talk which was an actual federating XMPP server.


Fair enough. (Although all the XMPP clients that I used supported multiple accounts, so it made little difference from where I was standing.)

In any case, it contributed significantly to XMPP's reach and utility, and it's gone now.


(And my point regarding support on Google/Facebook was that their users could chat with me over XMPP without having to leave their familiar sites, sign up for anything new, or do anything else special. That put it in easy reach of the masses.)


The same could be said about various XMPP transports that I've used back in the day with Google Talk to access all sorts of IM networks. Facebook was just running one on their servers rather than you running it on yours.

Ultimately they just briefly used XMPP to not have to implement their own desktop client for their closed proprietary network. It had nothing to do with network reach, unlike Google Talk which did actually bring XMPP to the masses for a while (and then took it away).


Decent overview (& more broadly but the heart is about XMPP & good ol’ capitalist corpo greed): https://ploum.net/2023-06-23-how-to-kill-decentralised-netwo...


We didn't. Big tech did, as XMPP broke down barriers so they lost their moats.

I.e. it worked too well.


Whether you view the question as nonsensical, the simplest example of a riddle, or even an intentional "gotcha" doesn't really matter. The point is that people are asking LLMs very complex questions where the details are buried even more deeply than in this simple example. The answers they get could be completely incorrect, flawed approaches/solutions/designs, or just mildly misguided advice. People are then taking this output and citing it as proof, or even as objectively correct. I think there are a ton of reasons for this, but a particularly destructive one is that responses are designed to be convincing.

You _could_ say humans output similar answers to questions, but I think that is being intellectually dishonest. Context, experience, observation, objectivity, and actual intelligence are clearly important, and not something the LLM has.

It is increasingly frustrating to me that we cannot just use these tools for what they are good for. We have, yet again, allowed big tech to go balls deep into ham-fisting this technology irresponsibly into every facet of our lives in the name of capital. Let us not even go into the finances of this shitshow.


Yeah people are always like "these are just trick questions!" as though the correct mode of use for an LLM is quizzing it on things where the answer is already available. Where LLMs have the greatest potential to steer you wrong is when you ask something where the answer is not obvious, the question might be ill-formed, or the user is incorrectly convinced that something should be possible (or easy) when it isn't. Such cases have a lot more in common with these "nonsensical riddles" than they do with any possible frontier benchmark.

This is especially obvious when viewing the reasoning trace for models like Claude, which often spend a lot of time speculating about the user's "hints" and trying to parse out the user's intent in asking the question.

Essentially, the mental model I use for LLMs these days is to treat them as very good "test takers" with limited open-book access to a large swathe of the internet. They are trying to ace the test by any means necessary and love to take shortcuts that don't require actual "reasoning" (which burns tokens and increases the context window, decreasing accuracy overall).

For example, when asked to read a full paper, focusing on the implications for some particular problem, Claude agents will try to cheat by skimming until they get to a section that feels relevant, then searching directly for some words they read in that section. They will do this even if told explicitly that they must read the whole paper. I assume this is because, for the kinds of questions they are trained on, this behavior maximizes their reward function the vast majority of the time (though I'm sure I'm getting lots of details wrong about how frontier models are trained, I find it very unlikely that the kinds of prompts these agents get closely resemble data found in the wild on the internet pre-LLMs).


I don't share his experience entirely, but even on my desktop built for gaming I can notice the right-click menu is delayed in comparison to Windows 10. Even more heinous: before you remove it, the AI button would lazy-load, causing you to sometimes hit it by accident when you mean to hit something else. God forbid I'm not 80 years old and click my menus with any sort of speed.

Also, if I'm going to have to adjust anything to use an operating system, I might as well use Linux. The only value prop for me to use Windows was gaming, but at this point I'm just completely ripping the band-aid off because it doesn't seem like Microsoft is going in a better direction.


> Even more heinous, before you remove it, the AI button would lazy load causing you to sometimes hit it by accident when you mean to hit something else.

Yup, definitely intended


I had a comment a few days ago about my freshly installing, configuring, and A/B testing W10 and W11 on a not-too-old HP laptop, using both HDD and NVMe:

https://hackernews.hn/item?id=46750358

That was then.

Now last night my partner comes back from the dumpster with a Toshiba laptop that's in great shape. It's a full 20 years old, came with Windows Vista, had been upgraded to Windows 7, and has 1GB of memory, a 120GB SATA2 HDD (slower than modern SATA), and a dual-core 1.73GHz CPU.

It seems to work perfectly, and with W7 it's snappier than the 2019 HP booted into old or new Windows 10, and even more so compared to Windows 11. All comparisons at baseline without the internet.

When you try W11 on an HDD, it really emphasizes the difference from W10 on the same hardware.

But when this kind of W7 thing just falls in your lap, it really hits you how much better performance could have been available by now from Windows if they really tried.

Or didn't drop the ball every time they were at bat, depending on the general manager ;)


Yes, they could roll their own, but you have no issues with this being necessary? I think the attitude of "just deal with it" is far more negative than someone expressing they are upset with the state of the internet, its controllers, and its abusers.


There's trillions invested in AI. Don't expect any introspective insight or criticism about it.


This is like saying "let's just get rid of all the guns" to solve gun violence and gun crime in the USA. The cat is out of the bag and no one can put it back. We live in a different world now and we have to figure it out.


I hope this is a meme because it is wild to me how you don't see this as being a problem. You are contributing to an internet for bots and not people.


They will only stop when it becomes economically unfeasible.

> You are contributing to an internet for bots and not people.

I'd like to think that my websites and projects are evidence to the contrary.


Cared about anything other than their own upward movement, actively worked towards my professional development, made sure I had actual, not hand wavey, feedback, and made sure my compensation reflected my growing responsibility.

I am aware that all of those things may not be in their power to give, but some combination of that in any org that is somewhat functional would be motivating.


I think you'll find, especially within the tech community, people struggle with purity and semantics. They see that supporting and promoting FOSS is to be okay with its use for war, oppression, or whatever mental gymnastics they need to just not care or promote bad things. They will argue about what "free and open" means and get mixed up in definitions, political alignments, etc.

It is pretty obvious to me that being blasé about anyone using FOSS for adversarial reasons is not very "open" or "free". Somewhere in the thread there is an argument about the paradox of intolerance, and I don't really care to argue with people on the internet about it because it is hard to assume the debate is in good faith.

My point is this: throw away all your self-described nuance and ask yourself whether you think any malicious, war-mongering, authoritarian, or hyper-capitalist state would permit a free and open source software environment. If the objective of a business, government, or billionaire is power, control, and/or exclusivity, then, well, your lofty ideals behind FOSS have completely collapsed.


You're conflating freedom of use with moral endorsement. FOSS grants freedom, not ethical approval of every use.


No I am not. Your response proves my point in regards to getting bogged down in semantics. In a nutshell, my point is that if we do not care or do nothing when it comes to malicious use of FOSS, you very well may lose FOSS or at least the ability to develop in a FOSS environment. It is the paradox of intolerance of a different flavor.

