I also live in Chicago. The closest bus stop to my house is 2 blocks away, and the 2nd closest stop on that same line is 3 blocks away - just one block further in the direction I’m going.
I simply don’t believe that eliminating that closest stop would worsen my commute. When I’m leaving home, I would walk a block further, but probably 80+% of the time it would not increase the time I spend out in the elements, because I’d just replace time standing at the bus stop with time walking to the next one. The only time it would hurt me is on the rare occasion that the bus passes me while I’m walking that extra block. (Pessimistically assuming 2 minutes to walk one block, with buses coming every 10 minutes on average, is how I get 80%.) But I bet doing that all up and down the route would make the bus much more predictable. That closest stop is within the distance that cars back up from the traffic light at the next intersection when there’s traffic, and when the bus stops at my intersection it can often get pinned in the stop for a while when motorists aren’t in the mood to let it re-enter traffic. Multiply that phenomenon by, say, 20 extra stops and you get some pretty unreliable service for people trying to get to work in the morning. I bet most of us would happily walk an extra block if it meant we no longer had to leave for work half an hour early. Two minutes of extra walking on either end adds up to 4 minutes of “wasted” time (and I’m not sure I’d count walking as wasted time anyway - physical activity is good for me), which is a lot less than 30 minutes wasted padding my commute to account for less reliable service.
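The 80% figure falls out of a quick headway calculation; a minimal sketch using the comment's own pessimistic assumptions (2 minutes to walk the extra block, buses every 10 minutes on average, bus arrivals treated as uniformly random):

```python
# Back-of-envelope check of the 80% claim above. The numbers are the
# comment's stated assumptions, not measured data.
walk_minutes = 2.0      # time spent walking the extra block
headway_minutes = 10.0  # average gap between buses

# The removed stop only costs me anything if the bus happens to arrive
# during my 2-minute walk to the next stop.
p_missed = walk_minutes / headway_minutes
p_unaffected = 1.0 - p_missed

print(f"Chance the bus passes mid-walk: {p_missed:.0%}")    # 20%
print(f"Chance my trip is unaffected:   {p_unaffected:.0%}") # 80%
```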
And then when I’m coming home, I get off at the stop that’s a block further away anyway, because there’s a light at that intersection but not at the one with the closer stop. I can easily spend more time waiting for a gap in traffic large enough to cross a busy street during the evening rush than it takes to walk that extra block.
I’ve been a consistent split keyboard user for a quarter century now. My current daily driver is a Redox, which uses a columnar layout. I got into them when I first started having problems with tendinitis. I feel like they help, but I’m not sure what the science says about it.
Anyway, I’ve always hated that diagram because it’s so obviously hyperbolic. I also use standard keyboards on a daily basis, and while there are some posture differences, the bending to make hands perpendicular to the keyboard just does not happen. Comfortably placing your fingers on the home row requires angling your hands a bit because the fingers are all different lengths. Are there some posture differences? Sure. But from what I’ve seen they’re really quite minor.
What I would guess makes more of a difference is tenting. Which is admittedly only possible with a split design. But also, not all split keyboards do tent.
Also, and this one might be specific to my particular problem, moving keys the thumb strikes to a position that it can reach with less stretching has helped a lot. (I suspect that the space bar in particular might have been the source of most of my woes.) And that’s another variable that’s highly correlated with - but still not the same as - the keyboard being split.
Try things and see for yourself. I know that’s not super satisfying advice, but everyone has a different experience with these things so there are no easy answers.
Start small. Don’t feel pressured to dive straight into the $300 keyboards. I have a fancy custom mechanical keyboard myself, but that’s because a few years back I decided it would be fun to get into using a more hackable keyboard. For a very long time I was more than content with the (sadly now discontinued) Microsoft Sculpt keyboard, which was one of the least expensive options.
For what it’s worth, ObjC is not Apple’s brainchild. It just came along for the ride when they chose NEXTSTEP as the basis for Mac OS X.
I haven’t used it in a couple decades, but I do remember it fondly. I also suspect I’d hate it nowadays. Its roots are in a language that seemed revolutionary in the 80s and 90s - Smalltalk - and the melding of it with C also seemed revolutionary at the time. But the very same features that made it great then probably (just speculating - again I haven’t used it in a couple decades) aren’t so great now because a different evolutionary tree leapfrogged ahead of it. So most investment went into developing different solutions to the same problems, and ObjC, like Smalltalk, ends up being a weird anachronism that doesn’t play so nicely with modern tooling.
I've never written whole applications in ObjC but have had to dabble with it as part of Ardour (ardour.org) implementation details for macOS.
I think it's a great language! As long as you can tolerate dynamic dispatch, you really do get the best of C/C++ combined with its run-time manipulable object type system. I have no reason to use it for more code than I have to, but I never grimace if I know I'm going to have to deal with it. Method swizzling is such a neat trick!
It is, and that’s part of what I loved about it. But it’s also the kind of trick that can quickly become a source of chaos on a project with many contributors and a lot of contributor churn, like we tend to get nowadays. Because - and this was the real point of Dijkstra’s famous paper; GOTO was just the most salient concrete example at the time - control flow mechanisms tend to be inscrutable in proportion to their power.
And, much like what happened to GOTO 40 years ago, language designers have invented less powerful language features that are perfectly acceptable 90% solutions. e.g. nowadays I’d generally pick higher order functions or the strategy pattern over method swizzling because they’re more amenable to static analysis and easier to trace with typical IDE tooling.
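To make that concrete, here is a minimal sketch of the less powerful alternative (in Python rather than ObjC, since the idea is language-agnostic; all names here are hypothetical): the variable behavior is injected as a plain function rather than swapped in at runtime, so it stays an explicit parameter that static analysis and an IDE can follow.

```python
# Strategy-pattern / higher-order-function alternative to swizzling:
# instead of replacing a method implementation behind the scenes, pass
# the behavior in openly.
from typing import Callable

def default_formatter(name: str) -> str:
    return name.title()

class Greeter:
    # The strategy is a visible constructor argument; every call path
    # to it can be traced without knowing about any runtime patching.
    def __init__(self, format_name: Callable[[str], str] = default_formatter):
        self.format_name = format_name

    def greet(self, name: str) -> str:
        return f"Hello, {self.format_name(name)}!"

shouty = Greeter(format_name=str.upper)
print(Greeter().greet("ada"))  # Hello, Ada!
print(shouty.greet("ada"))     # Hello, ADA!
```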
I don't really want to defend method swizzling (it's grotesque from some entirely reasonable perspectives). However, it does work on external/3rd party code (e.g. audio plugins) even when you don't have control over their source code. I'm not sure you can pull that off with "better" approaches ...
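For readers unfamiliar with the trick: the point is patching behavior onto code you can't modify. A rough analog, sketched as Python monkey-patching rather than real ObjC swizzling (`ThirdPartyPlugin` is a made-up stand-in, not a real library):

```python
# Swizzling-style patch of a class we don't control: keep a handle to
# the original implementation, install a wrapper on the class itself.
class ThirdPartyPlugin:  # pretend this shipped in a vendor package
    def process(self, sample: float) -> float:
        return sample * 2.0

_original_process = ThirdPartyPlugin.process

def patched_process(self, sample: float) -> float:
    result = _original_process(self, sample)  # call through to original
    return min(result, 1.0)                   # e.g. clamp the vendor's output

ThirdPartyPlugin.process = patched_process

plugin = ThirdPartyPlugin()
print(plugin.process(0.3))  # 0.6 (below the clamp, original behavior)
print(plugin.process(0.8))  # 1.0 (clamp kicks in)
```

Every existing instance picks up the patch, source unseen, which is exactly the power (and the hazard) being discussed.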
Many of the built-in types in Objective-C have names beginning with “NS”, like “NSString”. The NS stands for NeXTSTEP. I always found it insane that, so many years later, every iPhone on Earth was running software written in a language released in the 80s. It’s definitely a weird language, but really quite pleasant once you get used to it, especially compared to other languages from the same time period. It’s truly remarkable they made something with such staying power.
>It’s truly remarkable they made something with such staying power
What has had the staying power is the API because that API is for an operating system that has had that staying power. As you hint, the macOS of today is simply the evolution of NeXTSTEP (released in 1989). And iOS is just a light version of it.
But 1989 is not all that remarkable. The Linux API (POSIX) was introduced in 1988, but work on it started in 1984, and it was based on an API that emerged in the 70s. And the Windows API goes back to 1985. Apple’s is the newest API of the three.
As far as languages go, the Ladybird team is abandoning Swift to stick with C++ which was released back in 1979. And of course C++ is just an evolution of C which goes back to 1972 and which almost all of Linux is still written in.
And what is Ladybird even? It is an HTML interpreter. HTML was introduced in 1993. Guess what operating system HTML and the first web browser were created on. That’s right... NeXTSTEP.
In some ways ObjC’s and the NEXTSTEP API’s staying power is more impressive because they survived the failure of their relatively small patron organization. POSIX and C++ were developed at and supported by tech titans - the 1970s and 1980s equivalents of FAANG. Meanwhile back at the turn of the century we had all witnessed the demise of NeXT and many of us were anticipating the demise of Apple, and there was no particularly strong reason to believe that a union of the two would fare any better, let alone grow to become one of the A’s in FAANG.
I actually suspect that ObjC and the NeXT APIs played a big part in that success. I know they’ve fallen out of favor now, and for reasons I have to assume are good. But back in the early 2000s, the difference in how quickly I could develop a good GUI for OS X compared to what I was used to on Windows and GNOME was life changing. It attracted a bunch of developers to the platform, not just me, which spurred an accumulation of applications with noticeably better UX that, in turn, helped fuel Apple’s consumer sentiment revival.
Good take. Even back in the 1990s, OpenStep was thought to be the best way to develop a Windows app. But NeXT charged per-seat licenses, so it didn't get much use outside of Wall Street or other places where Jobs would personally show up. And of course something like iPhone is easier when they already had a UI framework and an IDE and etc.
Assuming you mean C (C++ is an 80s child), that’s trivially true because devices with an ObjC SDK are a strict subset of devices that are running on C.
Yes, that is why I don't find it "insane" like the grandparent does, like yeah, devices run old languages because those languages work well for their intended purpose.
You should feel that C’s longevity is insane. How many languages have come and gone in the meantime? C is truly an impressive language that profoundly moved humanity forward. If that’s not insane (used colloquially) to you, then what is?
NeXT was more or less an Apple spinoff that was later acquired by Apple. Objective-C was created because using standards is contrary to the company culture. And with Swift they are painting themselves into a corner.
> Objective-C was created because using standards is contrary to the company culture
Objective-C was actually created by a company called Stepstone that wanted what they saw as the productivity benefits of Smalltalk (OOP) with the performance and portability of C. Originally, Objective-C was seen as a C "pre-compiler".
One of the companies that licensed Objective-C was NeXT. They also saw pervasive OOP as a more productive way to build GUI applications. That was the core value proposition of NeXT.
NeXT ended up basically taking over Objective-C, and then it became a core part of Apple when Apple bought NeXT to create the next generation of macOS (the one we have now).
So, Objective-C was actually born attempting to “use standards” (C instead of Smalltalk) and really has nothing to do with Apple culture. Of course, Apple and NeXT were both brought into the world by Steve Jobs.
> Objective-C was created because using standards is contrary to the company culture.
What language would you have suggested for that mission and that era? Self or Smalltalk and give up on performance on 25-MHz-class processors? C or Pascal and give up an excellent object system with dynamic dispatch?
C's a great language in 1985, and a great starting point. But development of UI software is one of those areas where object oriented software really shines. What if we could get all the advantages of C as a procedural language, but graft on top an extremely lightweight object system with a spec of < 20 pages to take advantage of these new 1980s-era developments in software engineering, while keeping 100% of the maturity and performance of the C ecosystem? We could call it Objective-C.
And, hear me out here - perhaps for the sake of morale it makes sense to leave a smidge of the part of the job that actually attracts people to this profession in the first place on their plates. Otherwise we may find that, after the novelty wears off, we’re left with a net productivity dropoff because there’s not as much left to keep people motivated to do a good job of the remaining work.
There is also an FCC angle that is relevant in that it concerns broadcast communications. And a “Streisand Effect” aspect that is perennially interesting to many hackers. And, relatedly, an angle concerning how newer media (YouTube) alters the communication landscape.
I would guess it depends on how thoroughly the organization doing the censorship can exert control over information.
For example, I’ve been pretty impressed by the extent to which the Chinese government is able to influence public discourse within its borders when they want to. But China is also home to over 90% of the world’s Chinese speakers, and it has its own domestic social media industry with very few users from outside the country, the Great Firewall, less of a culture of anti-authoritarianism, etc.
I’m not sure how feasible it would be for the US to get to a comparable position. The US is nowhere close to being a supermajority of the world’s English speakers, and it might be hard for the government to impose an isolationist policy on the country’s tech industry without inciting a revolt by its tech oligarchs.
The GFW is real, and many people don’t bother trying to jump it anymore. Mostly, people in China don’t really care what the rest of the world is thinking; they’ve got 1.4 billion people in their own world, and it’s big enough. The USA has only 342 million people in comparison, and we have a whole country to the north of us that speaks with barely a different accent. Heck, half of our movie stars are Canadian or Australian (it feels like it, anyway).
Kind of fascinating that the first couple comments to be posted both start with a declaration of the author’s opinion of Colbert.
I read it as a pretty straightforward acknowledgment that we’ve reached a point in public discourse where people pay at least as much attention to who is making a point as we do to the actual point being made.
I wonder if finding common ground is even possible as long as that kind of tacit ad hominem remains baked into the way we think about public discourse.
I think this is an interesting conundrum, because I feel it is also important to critically consider the people who are making points, since that can inform you about why those points are being made. I don't consider this a tacit ad hominem per se, because we must acknowledge the open internet is full to the brim with bad actors and bots, and we cannot engage equally with every comment. It is fine and appropriate to identify comments one doesn't want to engage with, without even spending the brainpower of reading them. This has increasingly forced people to make concessions to avoid misunderstandings arising from other people's need to filter comments.
I was actually thinking the opposite. I sent my wife this text message:
Nonsense world when a comedian doing a late night show is the person willing to say something
It's not his job. I'm glad he's doing it, but this is like watching the McDonald's workers Narcan people as part of their daily tasks
For reference, she worked as a manager at McDonald's and would regularly have to deal with people ODing in the drive through, and there was a "serious" discussion between managers, corporate, etc. about whether they should be doing this or whether they should be instructing everyone to stay indoors and let them die.
Corporate had to say that they must stay inside and eventually it became grounds for suspending employees that didn't want to sit and watch people die.
Contact me if you want her information for journalism purposes - we have the receipts.
Corporate doesn't realize that forcing people to do nothing and watch others die might be bad for morale? Corporate doesn't realize that people regularly dying in their drive through without assistance is going to be an enormously negative PR hit if it ever comes out? Corporate doesn't even realize that people dying in the drive through is going to leave their drive through plugged for a long period of time?
I am appalled. Even ignoring the cold-hearted lack of empathy, I have a hard time imagining that this is even in their best interest.
The liability for corporate of an employee getting involved and the person dying anyway is obscene. Their insurers probably said “if you allow this we will jack your premiums up by 10x” and that was that. Morale and bad PR aren’t going to compete with the inability to do business at all.
The drive through has a two lane system so cars can just go around.
The EMT would show up and drag them out of the car then a tow truck would come and get the car.
Most "regular" employees are happy to see the drive through disabled; it makes their job much easier. Most restaurants place a small construction cone out to block one of the lanes to reduce customers, as it's otherwise too much work.
The franchise owners are in a strange spot. I have some compassion and understanding for them, but I have also helped build bots they use to submit fake reports to corporate to make their numbers go up, or to keep their numbers from going down too much. It's fascinating to be paid by a company to perform work that has to be hidden from other layers of the organization and that is fundamentally against the long-term interests of all of them. I required a contract signed by them to ensure this bullshit isn't my problem. Again, I've got the receipts if anyone wants them!
I’ve never seen a cone out at my McDonald’s in Ballard, and it has two lanes. They charge too much to leave money on the table, though. Also, it’s a franchise, not corporate, so maybe a different attitude since the owner is probably also the operator.
Corporate doesn't care about morale; they want to stay as far away as possible from testing the limits of good samaritan laws at best, or directing unqualified employees to provide healthcare at worst.
What's so fascinating about it? I made one of those comments. It’s not about judging the person but about emphasizing the point. I’m saying the situation is so egregious that it overrides my usual dislike for the host. If anything, it validates the point being made, regardless of who is making it.
I’m thinking survivorship bias here. “Information Technology” is such a wide term, and we immediately think of the IT we currently use. Many of us can’t even remember all the blind alleys we wasted resources on in the ‘80s, especially those of us who weren’t there. I count myself among that group because I was a kid and didn’t pay much attention to business.
But I can say that, judging by historical artifacts, a lot of it was along the same broad lines as AI. And we maybe don’t realize how serious people were about it back then. The technology that actually changed the world was so comparatively boring and pragmatic that the stuff that was being hyped back then seems comically overwrought. It’s easy to assume it must have been a joke all along.
I have worked with some really really good product managers.
But not lately. Lately it’s been people who have very little relevant domain expertise, zero interest in putting in the time to develop said expertise beyond just cataloguing and regurgitating feedback from the customers they like most on a personal level, and seem to mostly have only been selected for the position because they are really good at office politics.
But I think it’s not entirely their fault. What I’ve also noticed is that, when I was on teams with really effective product managers, we also had a full-time project manager. That possibly freed up a lot of the product manager’s time. One person to be good at the tactical so the other can be good at the strategic.
Since project managers have become passé, though, I think the product managers are just stretched too thin. Which sets up bad incentive structures: it’s impossible to actually do the job well anymore, so of course the only ones who survive are the office politicians who are really good at gladhanding the right people and shifting blame when things don’t go well.
There are individuals who have good taste for products in certain domains.
Their own preferences are an accurate approximation for those of the users.
Those people might add value when they are given control of the product.
That good taste doesn't translate between domains very often.
Good taste for developer tools doesn't mean good taste for a video game inventory screen.
And that's the crux of the problem.
There is a segment of the labor market calling themselves "product manager" who act like good taste is domain independent, and spread lies about their importance to the success of every business.
What's worse is that otherwise smart people (founders, executives) fall for it because they think hiring them is what they are supposed to do.
Over time, as more and more people realized that PM is a side door into big companies with lots of money, "Product Manager" became an imposter role like "Scrum Master".
Now product orgs are pretty much synonymous with incompetence.
Taste is pretty transferable, I think what you're talking about is intuition. The foundations of intuition are deeply understanding problems and the ability to navigate towards solutions related to those problems. Both of these are relatively domain-dependent. People can have intuition for how to do things but lack the taste to make those solutions feel right.
Taste on the other hand is about creating an overall feeling from a product. It's holistic and about coherence, where intuition is more bottom-up problem solving. Tasteful decisions are those that use restraint, that strike a particular tone, that say 'no' when others might say 'yes'. It's a lot more magical, and a lot rarer.
Both taste and intuition are ultimately about judgment, which is why they're often confused for one another. The difference is they approach problems from the opposite side; taste from above, intuition from below.
I agree with your assessment otherwise, PM can be a real smoke screen especially across domain and company stage.
The proportion of "really good" PMs on product engineering teams has to be less than 0.1%.
The counter to that is "the proportion of 'really good engineers' on product engineering teams has got to be in the single digits," and I would agree with that, as well.
The problem is what teams are incentivized to build - most are working on "number go up" problems (revenue, or engagement as a proxy for revenue), not "is this a good product that people actively enjoy using?" problems.