Hacker News | miav's comments

This is an excellent addition to the lineup: it shrinks the list of reasons the average person would go for a Windows laptop from “cost” to practically nothing. But from a consumer perspective, is there any reason to buy this over the M1 MBA, which can be purchased new for less than the education-discounted price of the MB Neo?

The M1 MBA is no longer available new, and it was only available new at that price in the US.

It is genuinely incredible how well-fitting clothing is generally available only to the roughly one-third of women who fit well into the anticipated height-waist ratio. Petite options exist in some places, but god forbid you're tall - your choices will be limited to "too short" and "too short and also too wide" if you try to go for a size up.


Skewing that further is the single length ratio of inseam to torso that a retailer's clothing is cut for. If you have short(er) legs and a long(er) torso (than the median), you're doubly screwed, both in tops and bottoms: pants will flop past your feet and shirts won't reach your waist. The world is gradually starting to figure this out, but you'll typically find 'tunic' lengths (with their higher length-to-waist ratio) at only a very few retailers.


The same goes for shoes, and if you're dedicated to minimalist shoes you're basically doomed to men's shoes. Women's shoe sizing is a joke, and seeing that most apparel and shoes are made in Asia, the divergence from standard measurements or sizes is doubly apparent.


Doesn’t help that high-quality European-made shoes are made in Portugal, Spain, or Italy, where women also tend to have small feet.

I believe there are still quite a few shoe factories in Germany and Eastern Europe, but they are mostly dedicated to hiking and mountaineering shoes.


The reaction to the Discord age-verification fiasco once again makes me believe that HN users just don’t have friends.

There is no alternative to Discord for bigger groups.

If there was, I still couldn’t move multiple social circles to it, no matter how much I evangelised.

The “just don’t use the less morally aligned platform” argument has always been valid only for those without a strong need for it, whether it’s X or Discord.


> The reaction to Discord age verification fiasco once again makes me believe that HN users just don’t have friends. There is no alternative for Discord

Are you saying that people who don't talk to their friends over Discord don't have friends?

Is that a statement you genuinely find reasonable?


They're saying people with friends generally don't have access to them all on one single alternative platform, like they might with Discord.


No no, they say "HN users just don’t have friends".


Well, someone has to start. It's ok if you use the alternative with a subset of your contacts.


It has to reach critical mass, i.e. overcome the political inertia.


Using whatever platform you prefer with a subset of people is fine and doable, but you're lying to yourself if you think that it is the "start" of anything.


Telegram, WhatsApp, Signal? I have friends and I don't use Discord or understand why I would want to use it.


I mostly use Telegram with my friends circle. You can have groups with individual topics. But we don't do group calls. I don't really see the appeal of group calls unless you are a gamer maybe. If I want to talk to them, I go meet them.

Signal for direct messaging and calls


> HN users just don’t have friends.

So once you have friends, every connected party is required to install Discord. How does that work?

Are your parents friendless, do they use Discord?


Self-hosted TeamSpeak for communication and gaming, and a Signal group for chatting.


Is this genuinely common? I’ve only ever seen that level of hand holding extended to new grad hires.


It definitely happens at bloated organizations that aren’t really good at software development. I think it is especially more common in organizations where software is a cost center and business rules involve a specialized discipline that software developers wouldn’t typically have expertise in.


I have 13 years of professional experience, and I work in a small company (15 people). Apart from one or two weekly meetings, I mostly just work on stuff independently. I'm the solo developer for a number of projects ranging from embedded microcontrollers to distributed backend systems. There's very little handholding; it's more like requirements come in, and results come out.

I have been part of some social circles before but they were always centered around a common activity like a game, and once that activity went away, so did those connections.

As I started working on side hustles, it occurred to me that not having any kind of social network (not even social media accounts) may have added an additional level of difficulty.

I am still working on the side hustles, though.


> it's more like requirements come in, and results come out.

Wow someone is very good at setting requirements. I have never seen that in 25 years of dev life.


I've seen it many many times, a few from myself.

It's not so hard if you're an expert in the field or concept they're asking for a solution to, especially if you've already implemented it in the past in some way, so you know all the hidden requirements that they aren't even aware of. If you're in a senior position, in a small group, it's very possible you're the only one who can even reason about the solution beyond some high-level desires. I've worked in several teams with non-technical people/managers, where a good portion of the requirements must be ignored, with the biggest soft-skill requirement being pretending their ideas are reasonable.

It's also true if it's more technical than product-based. I work in manufacturing R&D, where a task might be "we need this robot, with this camera, to align to and touch this thing with this other thing within some µm of error."

Software touches every industry of man. Your results may vary.


I've seen that plenty of times. I suspect that you haven't seen it because you live in a place with high cost of living, which induces a high turnover in personnel, or perhaps you've been working in very dynamic markets such as SaaS.

When I was starting my career in Europe as a freelance sysadmin, I worked several times for small companies that were definitely not at the forefront of technology, were specialised in some small niche, and were pretty small (10-15 engineers), but all their engineers had been there for 10-20 years. They were pretty well paid compared to the rest of the country, and within their niche (in one case, microcontroller programming for industrial robots) they were world experts. They had no intention of moving to another city or another company, nor of getting a promotion or learning a new trade. They were simply extremely good at what they were doing (which in the grand scheme of things was probably pretty obsolete technology), and whenever a new project came in, they could figure out the requirements and implement the product without much external input. The first time I met a "project manager" was when I started working for a US company.


>I worked several times for small companies that were definitely not at the forefront of technology, were specialised in some small niche and pretty small (10-15 engineers), but all their engineers had been there for 10-20 years. They were pretty well paid compared to the rest of the country

This isn't possible in the USA. Companies like this (small, and not in tech hub cities) always try to take advantage of their location and pay peanuts, with the excuse "the cost of living is lower here!", even though it's not that much lower (and not as low as they think), and everything besides houses costs the same nationwide.


I agree that something like that is very unlikely in the US, which is why so many people in this thread (I presume Americans) were incredulous as to whether that was even possible, but elsewhere in Europe good software and electronic/electrical engineers can be making very good money for the local standards in stable jobs, while at the same time being paid a lot less than they would be in a similar job in one of the US major tech hubs.


Of course, sometimes people realize that what they asked for wasn't actually what was needed.


I mean... This "realization" is what triggered the advent of agile, 2 decades ago, right?

People almost never know what they want, so put SOMETHING in front of them, fast, and let's go from there


I don’t understand how? Wouldn’t the signal be highly directional? Surely it wouldn’t be easily detectable unless the viewer’s POV intersects the path of the beam?


Anecdotal: as a kid I was really into CS:GO skin betting. I ended up losing my entire collection and have never gambled since.


The entire skin gambling scene is a big reason why I don’t touch the game anymore. It seeps into everything around CS and now you can’t watch a professional match without seeing a sponsorship from a skin casino, can’t watch a YouTube video without a sponsorship.

This is also why I don’t like the way some gamers treat Valve as the only ethical company in the industry. CS skin gambling is what you get if you take the lootbox mechanism and pave it over the game’s entire ecosystem.


It’s the kids who win big you need to worry about (anecdotal, not me, but a big win concurrent with a bad time during formative years can have a lasting impact on people who then ultimately become addicts).


Guys, the author presents an overall reasonable argument and I think it's more useful to engage with it in good faith than going "so it's all my fault just because I'm a man?" - no one's implying that.

At its simplest, the point is that much of programming language design is done with a masculine perspective that values technical excellence and very little feminine perspective that focuses more on social impact. Most, including myself, have a knee-jerk reaction to dismiss this argument since at first glance it appears to trade off something known useful for something that's usually little else than a buzzword, but upon further reflection the argument is sensible.

The theme of forsaking technological perfectionism in favor of reaching whatever end goal you have set is widely circulated on this forum and generally agreed with. Those of us that work as software engineers know that impact of your work is always valued more than the implementation or technical details. It's thus reasonable that when building programming languages, the needs and experience of the users should be considered. Not override everything else, but be factored into the equation.

I know if I were to write a programming language I'd probably focus on pushing the boundaries of what's technologically possible, because I find it fun and interesting. But I would have to agree that even if I did succeed in doing so, the actual impact of my work would probably be lower than that of Hedy - the author's language. Hedy is not novel technologically, but the fact that it makes it meaningfully easier to learn programming for significant numbers of people is real, undeniable impact.

Lastly, I want to note that the author's argument for the underrepresentation of women in PL cannot be reduced to "those nasty men are keeping us out". Humans are tribal, and any group of humans is bound to form complex social structures. Those are going to affect different people in different ways; the linked paper investigates the effect of those structures specifically on women because the topic is close to the author. Whether you care about the low numbers of women in PL design or not, the dynamics that have led to that being the case are worth investigating and are quite interesting on their own.


> At its simplest, the point is that much of programming language design is done with a masculine perspective that values technical excellence and very little feminine perspective that focuses more on social impact.

I guess my criticism of this is that it reduces both men and women to what amounts to little more than stereotypes, which leaves me rather uneasy.

I also find it somewhat of a distraction from the actual issues. For example, one of the topics is that all programming languages only accept Latin numerals (0-9) and often support only English keywords. It's not hard to see how this might exclude people, sure.

A counter-argument to this might be that having a single lingua franca enables a global community of people from very diverse backgrounds to communicate and work together. Just today I accepted two patches from someone in China. Thirty years ago even talking to someone from China would be a novelty, let alone casually cooperating. That's kind of amazing, no? If we were both stuck in our exclusive worlds of "English" and "Chinese", with our own languages and counting systems and whatnot, then that would have been a lot harder.

All things considered, English probably isn't the best, fairest, or most equitable choice. But it is what it is, and it's by far the most practical choice today.

You can of course disagree with all of that, and that's fine. But reducing it to "technical excellence" vs "social impact" or "male perspective" vs. "female perspective" just seems reductive and a distraction.


It is absolutely a valid point that user experience, helpful error messages, and ease of use are extremely important and historically neglected in programming language design, as contrasted with technical excellence.

It is absolutely an invalid point to claim that this is due to gender, that men are doomed by biology to care about technical excellence while women are doomed by biology to care about UX. We are not living in 1825!


I agree with this.

It is IMO very backwards and counterproductive to just gender-box mathematics/quantitative research vs peoples/qualitative research. Women are very capable of doing quantitative/mathematical research, and men are quite capable of doing qualitative/UX/anthropology research. Why must we be so narrow minded, _especially_ when we're talking about how we want to see the future?

The author may be totally right in suggesting that PL academic research needs more diversity, and that there might be a lot of status-quo bias, elitism, and groupthink at play. The over-reliance on mathematical "purity/elegance" (like the monad meme someone mentioned below) instead of usability when it comes to new languages is something even I have encountered as an end user of programming languages.

However, claiming that there's some inherent gender based tendency to engage in one kind of research over another defeats their own purpose IMO. If they said that PL academic research could learn some tricks from fields X, Y and Z on how to engage in more end-user research, it would have made their point so much more convincing.


It's just reductionist nonsense. The idea that valuing technical excellence is a masculine trait is absurd. The idea that valuing social impact is a feminine trait is also nonsense. Yes, we could have an aggregate noticeable difference, but the idea that this creates causal outcomes in something like programming languages... it makes no sense.

People use different programming languages for different reasons. Python is easy to read; it's used more by people who come from non-CS backgrounds. Lower-level languages are used by people with lower-level needs.

It's like saying that "trucks are masculine" when, sure, I'll grant you that, but the point of a truck is to haul a bunch of shit, often to a work site, and there are plenty of women who need to do that. It's like saying that a "Prius is feminine" because it's built around a social cause (climate change)... I mean, sure, I guess. I still think literally millions of men drive Priuses because they actually care about social causes and want to save money on gas.

The idea that the aggregate outcomes of all masculine- and feminine-coded things are driven by arbitrary culture rather than an actual distribution of needed functions (men are more likely to work in hard labor than women, hence more likely to use a truck for work) just seems like the tail wagging the dog.

All of this, and yes, I consider myself very committed to the values of equality that feminism espouses.


If programming languages have a masculine or feminine perspective, then I would like to know which cultural lens this is being painted with, and for what purpose. Is there any more to it than dissecting Charles V's quote, "I speak Spanish to God, Italian to women, French to men, and German to my horse", and asking whether speaking French would increase the number of women in a specific field?

If we are trying to define programming languages tailored toward children as feminine, and "real" languages used in the trade as masculine, are we not devaluing the hard work of everyone involved? It seems to carry a very high risk of causing the opposite of a positive impact. Languages that are meaningfully easier to learn are a good thing, and like reading and math, seem like a good thing to teach children at an early age. The essay seems hopeful that this would decrease gender segregation in the workforce, though the author doesn't bring much to support that.


Thanks for saying so.

I'll add that if people want a historical perspective on the dynamics, CS Professor Ellen Spertus long ago wrote the paper "Why are There so Few Female Computer Scientists?" It helped me see a lot of the things I might have otherwise been inclined to dismiss: https://dspace.mit.edu/handle/1721.1/7040


The bulk of low-effort hostile responses to her article in male-dominated spaces like HN really just illustrate her point over and over again.

It is satisfying to read the engaging and curious reflections, like your own. Let us hope for more of these.


I would rather dismiss her point on the basis that, from my perspective, this may be true only for a small niche of academics who focus specifically on programming language formalisms.

When I studied programming languages at university, the field was really focused on formal approaches, so it is true there. But that is how this field of study defines itself, and that should be considered its right.

Once you look outside of this narrow field, you can easily find a lot of projects and endeavors that cover exactly what she is requesting in that article.

* The rust compiler focuses a lot on more understandable error messages (a topic specifically covered) and even recommendations that make picking up the language easier.

* C++11 standardization also focused a lot on usability and how to improve hard to read error messages.

* Scratch is explicitly designed to look for alternative approaches to programming.

* Programming in other languages has been around for a long time.

In school we were taught a German version of Logo. I don't buy her argument that her language research was dismissed purely because it wasn't hard enough. We simply have everything we need to understand how we could do a programming language in another natural language: replace a few lexer definitions, then re-define the whole stdlib in the other language. There is simply nothing novel about this. I really hope her research on language covers a lot more than just this.

She also does a very bad bait-and-switch when she suddenly replaces the meaning of the word "hard" in the middle of the article. Initially she clearly used "hard" to refer to difficult, then later she suddenly switches hard in the sense of "hard" sciences, i.e. sciences based on formalisms and empirical research instead of discussions and opinions.

I agree with her a lot of research is missing from non-technical hard sciences (I would consider large parts of psychology a hard science, although it lives at the border of the two worlds). There is some research on the psychology of programming, but this is definitely under-researched. Also usability studies of programming languages are not well established.

In a lot of cases, however, I don't think this is actually something we can really do research on. I have a strong background in psychology, and I don't think we actually could study the impact of different paradigms. If you pick participants who already know programming, they will be highly socialized with the dominant paradigms. If you pick novices, you will have to control what they learn over years until they become fluent in the studied paradigm; this isn't feasible and raises severe ethical concerns. Or you don't control it and run short-term studies, in which case the results will just not carry any meaning.

Overall, for me the article raises some really valid concerns about programming language research and CS in general, but I think she took a really bad turn in describing these as gender-based issues. What I would see as the reasons for these issues lie in completely different areas and are only very remotely related to gender.


The author is not advocating for friendlier messages in the googlesque sense of dumbing them down or introducing more positive wording, but in the sense of making them more readable and useful.

Of course it does not matter for "a straightforward computer error message", in cases where the error is a simple type mismatch or a missed semicolon, but if those were the majority of the problems we encounter as programmers, our work would be trivial.

It's not difficult to imagine a situation where structuring a compiler in such a way that it keeps more state and perhaps has to perform more analysis is worthwhile, since a more useful error message saves the user time in understanding and fixing a problem.

An example that comes to mind is when, in Rust, I tried to create a dynamically dispatched trait, where the trait in question contained a function whose argument was generic over a different, statically dispatched trait. Since the compiler could not know the exact object that would be instantiated, it was incapable of inferring the exact type of the second, statically dispatched trait at compile time, and thus failed to compile.

The error was presented to me in a clear way that pointed out the problematic relationship between dispatch types of the two traits allowing me to understand and fix the problem quickly. If the error message was far simpler, such as "can't dynamically dispatch trait", I would have figured that out too, but it would have simply taken more valuable time. Most importantly, having to track down the issue from a minimal error message, would not have been an honorable test of my intelligence and emotional maturity, it simply would have been inefficient.
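Anecdote aside, the object-safety rule behind that kind of error can be sketched in a few lines (hypothetical trait and type names, not the actual code from that project): a trait method that is generic over another trait can't go behind `dyn`, because no single vtable entry can cover every instantiation, while taking the second trait as `&mut dyn ...` restores dynamic dispatch.

```rust
use std::io::Write;

// Not object-safe: the generic method would need a separate vtable
// entry per concrete `W`, so `dyn Render` is rejected (error E0038):
//
//     trait Render {
//         fn render<W: Write>(&self, out: &mut W);
//     }
//     fn draw_all(items: &[Box<dyn Render>]) {} // does not compile

// Object-safe alternative: dispatch the writer dynamically as well,
// so there is exactly one vtable entry for `render`.
trait Render {
    fn render(&self, out: &mut dyn Write);
}

struct Point;

impl Render for Point {
    fn render(&self, out: &mut dyn Write) {
        out.write_all(b"point").unwrap();
    }
}

fn draw_all(items: &[Box<dyn Render>], out: &mut dyn Write) {
    for item in items {
        item.render(out);
    }
}

fn main() {
    let items: Vec<Box<dyn Render>> = vec![Box::new(Point)];
    let mut buf: Vec<u8> = Vec::new(); // Vec<u8> implements Write
    draw_all(&items, &mut buf);
    assert_eq!(buf, b"point");
}
```

The trade-off is the usual one: `&mut dyn Write` costs an indirect call per write but keeps the trait usable as a trait object.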


This is about end-to-end encryption. Google doesn’t do that.


Where did you hear that?

A quick search tells me Google has done end-to-end encryption since at least 2021 [1].

https://www.androidcentral.com/how-googles-backup-encryption...


Could anyone more knowledgeable on the topic explain to what extent common wireless connectivity standards are open and feasible to implement for, say, a medium sized company? Apple has been working on a 5G modem for what feels like a billion years, but other standards seem to be more democratized.


The availability of hardware seems semi-moot, since afaik there's basically no way to get spectrum short of the big national auctions.

But now that T-Mobile is reneging on their promise and not going to meet the minimum deployment size they committed to, they have been saying the FCC should find a way to sell by area some of that spectrum sitting dormant across such a wide swath of America (personally I think that makes their bid invalid and they should forfeit it for such egregious dirty lying). https://www.lightreading.com/5g/t-mobile-relinquishes-mmwave...

I think some of the analog tv spectrum has some precedent for being sold per-area rather than nation wide, but I'm not sure how that's been going.

In terms of hardware, there's some fascinating stuff. Facebook's SuperCell large-tower project showed awesome scale-out possibilities for large towers. Their Terragraph effort is spun out, and seems to have some solid customers using their hardware. Meta spun off their EvenStar 5G system, which has a strong presence at Open Compute now. https://www.opencompute.org/projects/evenstar-open-radio-uni...

But it's hard to tell how acquirable such a thing really is. There are plenty of existing nodes out there too, but there's no open market; since there's no usable spectrum, it feels like a conundrum for the market, even though these are extremely high-volume, amazingly integrated, advanced wireless systems that you'd think would be visibly prolific.


> The availability of hardware seems semi moot, since afaik there's basically no way to get spectrum short of big national auctions.

You can run 5G in the unlicensed spectrum. AWS can rent you hardware for it: https://aws.amazon.com/private5g/ - it's $5k a month per site. I know a plant that switched to that because they couldn't get WiFi to work reliably for them.

But even if you want to run within the licensed spectrum, local licenses for a couple of bands are cheap. I was involved in setting up a private network in the licensed spectrum around 10 years ago (based on https://aviatnetworks.com/ ), and a local site spectrum license was something ridiculously small (in the range of a hundred dollars).

It's expensive if you want to do it nation-wide.


From my limited understanding, the issue for Apple et al. isn’t making a 5G chip; it’s making the chip small, cheap, and power-efficient enough while still having “decent” reception. I’d imagine existing Qualcomm patents certainly make it a bit more challenging in terms of available (design) options.


Fabrice Bellard open sourced a 4G (LTE) base station.

https://bellard.org/lte/


It doesn't seem open source?

> The LTE/NR eNodeB/gNodeB software is commercialized by Amarisoft.

> A UE simulator is now available. It simulates hundreds of terminals sharing the same antenna. It uses the same hardware configuration as the LTE eNodeB.

> An embedded NB-IoT modem based on Amarisoft UE software.


Yep, the word "source" is never mentioned.


I assume Fabrice Bellard will crack cold fusion as a side project some day.


What do you mean by implementing? Make your own radio chips, designed from the ground up? Or merely producing a networking device using chips from suppliers like Intel, TI, Broadcom, Qualcomm etc? Or the software side only?


Stuff for GSM/CDMA has been around for years; OpenBTS is the primary example. This is the first I've heard of anything more modern/complicated being implemented. From my understanding, a lot of the hard engineering work is in the RF front end and making it small and low-power enough to fit in a phone, for example. OpenBTS got around this by using existing SDRs for their RF front end.


WiFi, Bluetooth, and Zigbee have a bunch of public specifications and knowledge around them to make it feasible. AFAIK the specifications for 4G/5G are publicly available but extremely complex, plus you'd need licensing agreements, pay royalties, etc. So unless this imaginary company of yours has specialized expertise in all that, it seems unlikely to be feasible.


The big problem is patents and copyright. No common wireless standards are open. No wireless standards are feasible to implement. Seriously. It's that bad. Certainly a modern 4G/5G standard is complex from a hardware standpoint to implement - the way you usually do these is using a very powerful embedded DSP, which is also not open (Qualcomm Hexagon is the most reverse-engineered of these if you want to understand what's going on). But the thing that's holding Apple up is purely legal IMO.


>No common wireless standards are open. No wireless standards are feasible to implement.

What is the definition of "Open" here?

The current submission is entirely about Open Source 4G/5G. Fabrice Bellard on top of the crazy amount of other stuff he did also made a LTE/NR Base Station Software [1]. WiFi and Bluetooth are also "Open".

>But the thing that's holding Apple up is purely legal IMO

People constantly mistake having an open standard (regardless of patents) for having a usable product on the market. There is no reason why you can't have a software modem, aka Icera, which was acquired by Nvidia in the early '10s. And there is no modem monopoly by Qualcomm, which is a common misconception across all the threads on HN and the wider internet. MediaTek, Samsung, Huawei, Spreadtrum, and a few others have been shipping 4G/5G modems on the market for years.

The only reason Apple hasn't released a modem six years after acquiring the modem assets from Intel is that building a decent modem, with performance per watt comparable to what's on the market, is hard. Insanely hard. You have telecoms in each of the top 50 markets, each with a slightly different hardware/software/spectrum combination and scenario, along with different climates and terrains. It took MediaTek and Samsung years, with lots of testing and real-world usage in lower-end phones, to gain valuable insight. Still not as good as Qualcomm, but at least it gets to a point where no one is complaining as much.

[1] https://bellard.org/lte/


> What is definition of "Open" here?

Patent-unencumbered in a way that someone could make a commercially viable implementation as a "small or midsized" company, as the parent post asked. Open source proves my point: the issue is not implementation. (Note: I'm not claiming implementation isn't hard; it is. I certainly know from personal experience that it is, and I would never claim to be able to personally build an energy-efficient 4G or 5G modem. But I don't think raw engineering horsepower is what's holding Apple/Intel/Nvidia back here.)

> MediaTek, Samsung, Huawei, Spreadtrum and a few others have been shipping 4G / 5G Modem on the market for years.

The CCP effectively told Qualcomm to get lost in 2015 and Taiwan settled an antitrust agreement between them and MediaTek in 2018, so MediaTek, Huawei, and Unisoc/Spreadtrum are not good examples here. I believe the South Korean government also intervened on behalf of Samsung. Actually, the list of modem vendors you list pretty much matches exactly the list of governments who prosecuted, fined, and settled with Qualcomm for antitrust.


>Patent unencumbered in a way that someone could make a commercially viable implementation.

Doesn't this exclude all modern cellular standards then?


yes.


If I remember correctly, all the documentation needed to implement a 5G radio approaches 10,000 pages. It’s not only insanely long and complicated, but there’s a nasty path dependency on most of 4G, which is why Intel and now Apple have had such a hard time getting their radios to the finish line. Poaching a few Qualcomm or Broadcom employees with better salaries is one thing, but without the cumulative expertise contained within those companies, it’s almost impossible to bootstrap a new radio.


> Apple has been working on a 5G modem for what feels like a billion years, but other standards seem to be more democratized.

The main problem is the sheer age of mobile phone networks. A phone has to support everything from top-modern 5G down to 2G to be usable across the world; that's almost as much legacy garbage for a baseband/modem's FW/HW to drag along as Intel carries with the x86 architecture.

And if that isn't complex enough, phones have to be able to deal with the quirks of all kinds of misbehaving devices - RF is a shared medium, after all. There are devices not complying with the standard, standards containing ambiguous or undefined behavior, and completely third-party services blasting wholly incompatible signals around (e.g. DVB-T operates on frequencies in some countries that are used for phone service in other countries, and often at much, much higher TX power than phone tower sites). If a modem can't handle that or, worse, disrupts other legitimate RF users, certification won't be possible.

But that experience in dealing with about 35 years' worth of history is just one part of the secret sauce; it mostly makes the cost of entry for FOSS projects really huge (which is why all of the projects I'm aware of support only 4G and later, since that generation is the first to throw away all the legacy garbage).

The other part of why there are so few vendors is patents, and there is a toooooon of patent holders for 5G [1], with the top holders being either Chinese or known for being excessively litigious (Qualcomm). And even assuming you manage to work out deals with all of the patent holders (because there is, to my knowledge at least, no "one-stop shop" comparable to, say, MPEG), you still have to get a design that fulfills your requirements for raw performance, coexists peacefully with almost all other users of the RF spectrum, and is power-efficient at the same time. That is the main challenge for Apple IMHO - they have a lot of experience doing that with "classic" SoCs, but almost none for RF hardware; virtually all of that comes from external vendors.

[1] https://www.statista.com/statistics/1276457/leading-owners-o...

