To be honest, the examples stuck with me. They illustrated tons of different social interactions that I have seldom, if ever, encountered in my life, but have plenty to learn from.
It's important to recognize that there are no optimal "right" answers. If someone is telling you there is a best way of doing this, always treat their advice with suspicion. (Yes, this includes what I'm going to write below.) The reason is that AI advances so quickly that the industry doesn't have time to stabilize on best practices and spread them widely before the underlying systems change and those practices are no longer applicable.
Because of this, I find it very important that you build your own understanding of LLMs, and create your own best practices from these first principles. Then, when you read about the next best thing, you can decide if it makes sense or not. Is it hype? Is it real? Does it logically align with what I understand about LLMs?
That being said, here are two things I find universally applicable:
1. You have to get very good at asking questions. Not only that, you also have to ask questions about the questions you should be asking. You also have to ask the LLM to ask YOU questions.
2. Spec-driven development is, in my opinion, a good place to start. Writing down what you're going to do and how you're going to do it has always been a good practice in any industry.
You wanna know why this article is great? I can't quote it. There isn't a single gold-nugget line in this post that can be copy-pasted into any form of short-form content without losing some important aspect of the original message. Every idea is presented in conjunction with important supporting details that, if you take the time to digest them, will finally make it click. Why we recoil at AI-generated content. Why code quality IS product quality. What the "craftsmanship" argument is actually about. And like 12 other nuanced ideas we've all heard before, but may not have fully understood. I have nothing but immense praise for the author.
It's usually not a question of laziness, but of path optimization. For big raid events, limited to a few hours, Pokemon Go players will do whatever gets them to the most raids in that fixed amount of time. In places where the gyms are well placed in walkable parks, there are enough gyms close enough together that you can loop through the park, giving the system time to spawn new raids in the gyms you already raided. Doing raids on foot is actually preferable to driving, because taking down raid Pokemon requires a big enough raid party, and that's easier to judge with a crowd of people who agree to go in the same direction than with mystery players in mystery cars.
But in SO many places in the US, the gyms just aren't close enough. It's both a fault of the urban environment and the game system, but it just frequently creates situations where if you want to catch a time-limited raid Pokemon that's shiny or has high stats, driving is often the only way to play in your area.
What's funny though is that car or not, there are lots of Pokemon Go events where people don't talk to anybody. It's just a bunch of people who show up to the same spot, nod hey at each other, and then tap away on their phones in awkward proximity. Many people just have the personality type that drives them to solely play single-player games, and there's no amount of game design that will push them to socialize.
But it varies - that's what it was like on my university campus, but I've also been to park events where the age range and social friendliness is extremely varied and wonderful. It is a fascinating and unexpected community.
Have you ever measured your battery voltages over time storing it this way? Is that 6% capacity loss theoretical or measured data? I'm intrigued. This sounds crazy, but it should technically be fundamentally sound.
Degradation is driven by many things, but a big one is heat. Elevated temperatures during both charge and discharge are very bad for battery longevity. To manage this, almost all EVs use liquid cooling, with a cold plate directly contacting as many battery cells as possible to move heat out of the pack. This coolant is then cooled by a radiator, an AC chiller, or both.
The worst temperature abuse case is DC fast charging, aka Supercharging, where high-current charging creates tons of heat due to resistive losses. This is why frequent fast charging causes faster battery degradation while ordinary charging and driving do not: the coolant loop is sized for the DC fast charge heat transfer requirements, so everything below that is handled comfortably.
Besides removing heat, adding heat to the system is sometimes just as important. Cold environments approaching freezing or below are also bad for battery longevity and, more importantly, terrible for range. Resistive heaters are super power hungry, and heating the battery coolant loop requires power from the battery itself. This is why, conventionally, EVs are terrible in cold weather.
> Do EV manufacturers use any other tricks not covered by this?
And now, onto the magic trick.
Heat management is so important to both the driving range and the longevity of a vehicle that EVs have moved from traditional resistive heaters to heat pumps. These magical thermodynamic devices can move heat from anywhere, including drawing heat out of cold ambient air.
When you combine that with a valve design that allows the heat pump to access the battery coolant loop, the motor drivetrain coolant loop, the cabin coolant loop, the vehicle computer(s) coolant loops, and external ambient temperature, you can have a super efficient system that shuffles heat where it's "wasted" to where it's "needed".
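To make the efficiency gap concrete, here's a minimal sketch with made-up numbers (the function name, the 5 kWh heat demand, and the COP of 2.5 are illustrative assumptions, not figures from any specific EV):

```python
# Hypothetical illustration of why heat pumps beat resistive heaters.
def heater_battery_draw_kwh(cabin_heat_kwh: float, cop: float) -> float:
    """Battery energy needed to deliver a given amount of heat.

    cop = 1.0 models a resistive heater (every joule of heat comes from
    the battery); cop > 1.0 models a heat pump, which spends battery
    energy to MOVE additional heat in from elsewhere, e.g. cold ambient air.
    """
    return cabin_heat_kwh / cop

cabin_heat = 5.0  # kWh of heat needed for a cold-weather trip (assumed)

resistive = heater_battery_draw_kwh(cabin_heat, cop=1.0)  # 5.0 kWh from the battery
heat_pump = heater_battery_draw_kwh(cabin_heat, cop=2.5)  # 2.0 kWh from the battery
print(resistive, heat_pump)
```

Even a modest COP of 2.5 means the battery supplies less than half the energy for the same heat delivered, which is range you keep.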
I bristled at the title, the article contents, and their spreadsheet example, but this does actually touch on a real pain point that I have had - how do you enable power users to learn more powerful tools already present in the software? By corollary, how do you turn more casual users into power users?
I do a lot of CAD. Every single keyboard shortcut I know was learned only because I needed to do something that was either *highly repetitive* or *highly frustrating*, leading me to dig into Google and find the fast way to do it.
However, everything that is only moderately repetitive/frustrating and below is still being done the simple way. And I've used these programs for years.
I have always dreamed of user interfaces having competent, contextual tutorials that space out learning about advanced and useful features over the entire duration that you use the software. Video games do this well, having long since replaced singular "tutorial sections" with a stepped rollout of gameplay mechanics that gradually teaches people incredibly complex systems over time.
A simple example to counter the auto-configuration interpretation most of the other commenters are thinking of. In a toolbar dropdown, highlight all the features I already know how to use regularly. When you detect me trying to learn a new feature, help me find it, highlight it in a "currently learning" color, and slowly change the highlight color to "learned" in proportion to my muscle memory.
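As a rough sketch of what that could look like under the hood (every name and the 20-use threshold here are hypothetical, just to show the shape of the idea):

```python
# Sketch of the "currently learning" highlight idea: track per-feature
# usage and map it to a toolbar highlight state.
from collections import defaultdict

LEARNED_THRESHOLD = 20  # uses before a feature counts as muscle memory (assumed)

usage_counts: dict[str, int] = defaultdict(int)

def record_use(feature: str) -> None:
    """Called whenever the user invokes a feature."""
    usage_counts[feature] += 1

def highlight_state(feature: str) -> str:
    """Map a feature's usage count to a highlight color state."""
    count = usage_counts[feature]
    if count == 0:
        return "undiscovered"
    if count < LEARNED_THRESHOLD:
        return "currently-learning"
    return "learned"

record_use("loft")
print(highlight_state("loft"))     # currently-learning
print(highlight_state("extrude"))  # undiscovered
```

The interesting design work is in the thresholds and decay (muscle memory fades), but even this crude counter would let a toolbar render "what you know" versus "what you're learning" differently.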
> how do you enable power users to learn more powerful tools already present in the software?
On-the-job training, honestly; like we've been doing for decades, restated as:
Employer-mandated required training in ${Product} competence: consisting of a "proper" guided introduction to the advanced and undiscovered features of a product combined with a proficiency examination where the end-user must demonstrate both understanding a feature, and actually using it.
(With the obvious caveat that you'll probably want to cut off Internet access during the exam part to avoid people delegating their thinking to an LLM again, or mindlessly following someone else's instructions in general.)
My pet example is ("normal") people using MS Word without understanding how defined styles work. They treat everything in Word as a very literal 1:1 WYSIWYG, so to "make a heading" they select a line of text, then manually set the font, size, and alignment (bonus points if they think underlining text for emphasis is ever appropriate typography (it isn't)), and they probably think there's nothing more to learn. I'll bet that someone like that is never going to explore and understand the Styles system of their own volition (they're here to do a job, not to spontaneously decide to learn Word inside out, even on company time).
Separately, there are things like the "onboarding popups" you see in web applications these days, where users are prompted to learn about new and underused features. But I feel they're ineffective or user-hostile, because those popups only appear when users are trying to do something else, so they'll ignore or dismiss them, never to be seen again.
> By corollary, how do you turn more casual users into power users?
Unfortunately for our purposes, autism isn't transmissible.
Yes, and that's the point. In my opinion, this is the perfect use case for generative AI, one that takes advantage of the strengths of the technology while avoiding its weaknesses.
The generative UI example in the article is the complete opposite of this idea - an obtuse application of generative AI that creates more problems than it solves. Yes, there is value in the idea of personalized UI. But UI/UX derives a lot of its value from consistency, as the other comments in this thread have mentioned. Losing that in exchange for personalization is a huge net negative, in my opinion.
Generative UI is incompatible with learning. It means every user sees something different, so you can't watch a tutorial or have a coworker show you what they do or have tech support send you a screenshot.
The solution could be search. It's not a House of Leaves.
I break out Blender every six months or so in order to create a model for 3D printing. It needs to be precise and often has threads or other repetitive structures.
Every. Single. Time. I spend at least the first 3 hours relearning how to use all the tools again with Claude reminding me where modifiers are, and which modifier allows what. And which hotkey slices. Etc etc.
Yeah, but when you then need to do the same action 4 times in a row, getting Claude to provide the correct action all 4 times takes a lot more brainpower on my part than just learning the menus yet again, right?
Writing is an expression of an individual, while code is a tool used to solve a problem or achieve a purpose.
The more examples of different types of problems being solved in similar ways that are present in an LLM's training data, the better it gets at solving problems. Generally speaking, if a solution works well, it gets used a lot, so "good solutions" become well represented in the dataset.
Human expression, however, is diverse by definition. The expression of the human experience is the expression of a data point on a statistical field with standard deviations the size of chasms. An expression of the mean (which is what an LLM does) goes against why we care about human expression in the first place. "Interesting" is a value closely paired with "different".
We value diversity of thought in expression, but we value efficiency of problem solving for code.
There is definitely an argument to be made that LLM usage fundamentally restrains an individual from solving unsolved problems. It also leaves open the question of where we get more data from.
>the code you actually want to ship is so far from what LLMs write
I think this is a fairly common consensus, and my understanding is that the reason for this issue is the limited context window.
I argue that the intent of an engineer is contained coherently across the code of a project. I have yet to get an LLM to pick up on the deeper idioms present in a codebase that help constrain the overall solution towards these more particular patterns. I’m not talking about syntax or style, either. I’m talking about e.g. semantic connections within an object graph, understanding what sort of things belong in the data layer based on how it is intended to be read/written, etc. Even when I point it at a file and say, “Use the patterns you see there, with these small differences and a different target type,” I find that LLMs struggle. Until they can clear that hurdle without requiring me to restructure my entire engineering org they will remain as fancy code completion suggestions, hobby project accelerators, and not much else.