Someone presumably pitched this idea within HP and other people agreed it was something they should try. I guess probably HP didn't put its best and brightest in charge of call centres but still, isn't that sort of amazing?
I wonder if it's the same people who eventually decided it was a bad idea after all, or whether some other group discovered what was happening and got them to stop.
I’ve seen it pitched here even, with the idea that deflecting some call volume will make call centre jobs less hellish. What that misses is that call centre jobs are hell because companies have used metrics to optimise down to the minimum number of staff. Any reduction in average call volume will just result in the company cutting staff, so staff still have the same workload and callers are XX minutes of waiting more frustrated.
Let’s not kid ourselves, they knew exactly what they were doing. They were hoping people would just hang up and give up. That saves money in the short term and loses money in the long term, but that’s what you get when the current quarter is all that matters.
Anyway my experience with HP has taught me to never buy their products ever again.
> Let’s not kid ourselves, they knew exactly what they were doing.
Not at all, they say they’re “always looking for ways to improve customer experience” and just wanted to “encourage people to self solve” to increase customer satisfaction. /s
Optimizing the wrong thing: they probably wanted to shave customer support costs by lowering call volumes, but the people who actually needed support presumably stayed on the line, since nobody who can fix things themselves calls support. So: no savings, AND reduced customer satisfaction.
I think HP was absolutely right in doing this. How many times have you opened a GitHub issue only to come back an hour later with "nvm I figured it out" and close it?
The hope is always that you figure it out autonomously.
If offering free support is too expensive, then they shouldn’t offer free support, instead of externalizing the costs by wasting the time of every customer who calls.
Charge callers some small fee and refund it if it was a real issue.
Paid B2C support is a real tough sell. A lot of low-cost airlines don't provide support at all (except in person), presumably because it's not practical. If the costs could be covered this way, everyone would offer support. Even Google! And yet they don't.
It depends what your goal is. If HP gets charged per call answered, then their goal is to minimize the number of calls they answer. If they see that most of their calls are like "my internet is slow", or the laptop won't turn on because it's not charged up, it's easy to see how this could be approved. Same thing if they've just spent a ton of money on some AI chat agent that they need to justify.
People derive genuine satisfaction from a job well done. A sense of purpose and of being useful is important to our wellbeing. There's nothing dystopian about a desire to do your work well.
Well, there is when you no longer deserve credit for the work and your boss, should you be fortunate enough to even have a job, just expects you to do more work. The satisfaction will evaporate pretty quickly.
So it's modular. This is normally considered a good thing. It means you don't have to pay for features you don't need.
The ISA is open so there's no greedy corporation trying to upsell you. I mean there's an implementation and die area cost for each extension but it's not being set at an artificial level by a monopolist.
There's a good chance you're actually paying more for the features you don't need. Preparing an EUV mask set costs something like 30 million dollars (that figure may be out of date, i.e. it could be more now). So instead of a single mask set with everything on the device, whether you need it or not, you're paying $30 million for each special-snowflake variant. This is why vendors do a one-size-fits-all version of many of their products and then disable the extra functionality for the cheaper market segments, because it's much, much cheaper than making separate reduced-functionality devices.
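To make the trade-off concrete, here's the arithmetic as a tiny Python sketch. The ~$30M mask-set figure is the one quoted above (and may be out of date); the variant count is made up purely for illustration:

```python
# Back-of-the-envelope NRE (non-recurring engineering) comparison.
# MASK_SET_COST is the ~$30M EUV figure from the comment; the number of
# "special-snowflake" variants (5) is a hypothetical example.
MASK_SET_COST = 30_000_000  # dollars per mask set

def nre_cost(num_mask_sets: int) -> int:
    """Tooling cost alone, ignoring design/verification effort per variant."""
    return num_mask_sets * MASK_SET_COST

# One full-featured die, fused down for cheaper market segments:
one_size_fits_all = nre_cost(1)   # $30M
# Five reduced-functionality variants, each needing its own mask set:
five_variants = nre_cost(5)       # $150M
```

Even before counting the extra design and verification work per variant, the single fused-down die wins on tooling cost alone, which is the vendors' one-size-fits-all logic described above.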
It's a good thing in many cases but not if you're going to be running applications distributed as binaries. Maybe if we go the Gentoo route of everybody always recompiling everything for their own system?
RVA23 is, finally, the belated admission that maybe we shouldn't have everything as optional extras. Hopefully it'll take off; I can't imagine what sort of a headache it is for repo maintainers who have to track a dozen different binary variants depending on which flavour of RISC-V apt-get is targeting.
The "G" extension for everything you want to run shrink-wrapped binaries on a standard OS has been there since the May 7 2014 "User Level ISA, Version 2.0", which is before RISC-V started to be promoted outside of Berkeley e.g. at Hot Chips 26 in August 2014, and the first RISC-V workshop in January 2015 in Monterey.
The name "G" (along with the C extension) has now morphed into being called "RVA20", which led to "RVA22" and "RVA23", but the principle is unchanged.
"An integer base plus these four standard extensions (“IMAFD”) is given the abbreviation “G” and provides a general-purpose scalar instruction set. RV32G and RV64G are currently the default target of our compiler toolchains."
"Making everything optional" is for the embedded space.
As for general purpose processors, RISC-V has always had the idea of profiles (mandatory set of extensions). Just look at the G extension, which mandated floating point, multiply/division, atomics, ... things that you expect to see on user-facing general-purpose processors.
> the belated admission that maybe we shouldn't have everything as optional extras
That's why I disagree with the above claim.
(1) The optionality is a feature of RISC-V and it allows RISC-V to shine on different ecosystems. The desktop isn't everything.
(2) RISC-V has always addressed the fear of fragmentation on the desktop by using profiles.
RVA23 (and RVA20 before it) isn't an admission that RISC-V got it wrong. It's a necessary step to make RISC-V competitive in the desktop space, as opposed to microcontrollers, where the flexibility is hugely valuable.
But that means a port of Linux can’t target RISC-V in general; it has to target a specific implementation of RISC-V, or, if that’s sufficient (which still seems debatable), a specific common RISC-V profile.
In what way are RISC-V profiles debatable? Canonical is spearheading the RVA23-as-a-default movement and so far, it seems that there are no heavy objections towards that effort (beyond the usual "Canonical sucks" shtick that you see in every discussion involving Canonical)
You can target the minimum instruction set and it'll run everywhere. Albeit very slowly. Perhaps you use a fat binary to get reasonable performance in most cases.
This isn't easy but it can be done (and it is being done on x86, despite constantly evolving variations of AVX).
Interestingly, RISC-V vector extensions are variable length.
So, you can compile your RISC-V software to require the equivalent of AVX and it will run on whatever size vectors the hardware supports.
So, on x86-64, if I write AVX2 software and run it on AVX512 capable hardware, I am leaving performance on the table. But if I write software that uses AVX512, it will not run on hardware that does not support those extensions (flags).
On RISC-V, the same binary that uses 256 bit vectors on hardware that only supports that will use 512 bit vectors on hardware that supports it, or even 1024 bit vectors on hardware like the A100 cores of the SpacemiT K3.
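A scalar Python model may make the vector-length-agnostic idea clearer. This simulates the strip-mining loop RVV code compiles to: the `vsetvl` step (modelled here as a plain function) lets the hardware grant however many elements it can handle per iteration, so the identical loop body works on any vector length:

```python
def vsetvl(avl: int, vlen_elems: int) -> int:
    """Toy model of RVV's vsetvli: the hardware grants up to its own
    vector length, never more than the remaining element count."""
    return min(avl, vlen_elems)

def vector_add(a, b, vlen_elems):
    """The same 'binary' (loop body) regardless of hardware vector length."""
    out = [0] * len(a)
    i = 0
    while i < len(a):
        vl = vsetvl(len(a) - i, vlen_elems)  # hardware decides each iteration
        for j in range(i, i + vl):           # stands in for one vector op on vl lanes
            out[j] = a[j] + b[j]
        i += vl                               # advance by however much was granted
    return out

a, b = list(range(10)), [1] * 10
# The identical code completes whether "VLEN" holds 4, 8, or 16 elements.
assert vector_add(a, b, 4) == vector_add(a, b, 16) == [x + 1 for x in a]
```

On real hardware the inner `for` loop is a single vector instruction operating on `vl` lanes at once, which is where the wider-vector machine gets its speedup from the same binary.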
So, I guess x86-64 is the RyanAir of processors.
(Personal opinion)
I get the impression that RISC-V-related discussions often lack awareness of prior work/alternatives. A large amount of (x86) software actually uses our Highway library to run on whatever size vectors and instructions the CPU offers.
This works quite well in practice. As to leaving performance on the table, it seems RVV has some egregious performance differences/cliffs. For example, should we use vrgather (with what LMUL), or interesting workarounds such as widening+slide1, to implement a basic operation such as interleaving two vectors?
> For example, should we use vrgather (with what LMUL), or interesting workarounds such as widening+slide1, to implement a basic operation such as interleaving two vectors?
Use Zvzip; in the meantime:
zip: vwmaccu.vx(vwaddu.vv(a, b), -1, b), or segmented load/store when you are touching memory anyways
unzip: vnsrl
trn1/trn2: masked vslide1up/vslide1down with even/odd mask
The only thing base RVV does badly in those is register-to-register zip, which takes twice as many instructions as in other ISAs. Zvzip gives you dedicated instructions for the above.
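To see why the vwmaccu trick zips two vectors, here's a scalar Python model (assuming SEW=8 for the demo; the element width is my choice, the instruction sequence is the one above). vwaddu.vv computes a+b into 2×SEW bits, then vwmaccu.vx with scalar -1 (i.e. 2^SEW-1 as unsigned) adds (2^SEW-1)·b, giving a + 2^SEW·b: a[i] lands in the low half and b[i] in the high half, which is exactly an interleave when reinterpreted as SEW-bit elements:

```python
SEW = 8                   # element width in bits, assumed for this demo
MASK = (1 << SEW) - 1     # scalar -1 zero-extended to SEW bits

def zip_via_widening(a, b):
    """Scalar model of: vwmaccu.vx(vwaddu.vv(a, b), -1, b).

    Each 2*SEW-bit result is a[i] + 2^SEW * b[i], i.e. a[i] in the low
    half and b[i] in the high half -- the inputs interleaved."""
    out = []
    for x, y in zip(a, b):
        w = x + y                       # vwaddu.vv: widening unsigned add
        w += MASK * y                   # vwmaccu.vx: w += (2^SEW - 1) * y
        out.append(w & MASK)            # low SEW bits  -> a[i]
        out.append((w >> SEW) & MASK)   # high SEW bits -> b[i]
    return out

assert zip_via_widening([1, 2, 3], [10, 20, 30]) == [1, 10, 2, 20, 3, 30]
```

Since x < 2^SEW, x + 2^SEW·y never carries between halves, so the reinterpretation is exact.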
Looks like the ratification plan for Zvzip is November. So maybe 3y until HW is actually usable?
That's a neat trick with wmacc, congrats. But still, half the speed for quite a fundamental operation that has been heavily used in other ISAs for 20+ years :(
Great that you did a gap analysis [1]. I'm curious if one of the inputs for that was the list of Highway ops [2]?
It's a good question. Costs will be lumpy. Inference servers will have a preferred batch size. Once you have a server you can scale number of users up to that batch size for relatively low cost. Then you need to add another server (or rack) for another large cost.
However, I think it's fair to say the cost is roughly linear in the number of users apart from that.
There may be some aspects which are not quite linear when you see multiple users submitting similar queries... But I don't think this would be significant.
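The lumpy-but-roughly-linear shape can be sketched as a toy cost model; the batch size and server price here are illustrative numbers I've made up, not real figures:

```python
import math

# Toy model of "lumpy" serving cost: each server handles up to `batch`
# concurrent users at its preferred batch size, and you buy whole servers.
# batch=64 and server_cost=$10k are hypothetical illustration values.
def serving_cost(users: int, batch: int = 64, server_cost: int = 10_000) -> int:
    return math.ceil(users / batch) * server_cost

# Flat within a batch: users 1 through 64 all cost one server...
assert serving_cost(1) == serving_cost(64) == 10_000
# ...then user 65 triggers a whole new server (the "lump").
assert serving_cost(65) == 20_000
```

Zoomed out over many batches, the step function hugs the straight line `users / batch * server_cost`, which is the "roughly linear other than that" claim above.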
What made HR act in this way? They clearly felt they were protecting the company by firing this person, but as far as we can tell he did nothing wrong, and it's unclear he posed any kind of threat to the company. Certainly the complaint about his co-worker would not be perceived as one.
I will give some weight to the possibility that Uber HR is utterly dysfunctional, but on balance I'm left with the impression there's more to this story than we're being told.
There are a lot of missing parts to the story. If we assume the author left out everything that made them look bad and included only what makes them look good, then the result is an article that feels very incomplete.
For example: they asked for guidance, and then the very next thing is them being fired. How did they respond to the coworker? Something is off here: the coworker messaged him about non-work topics TWO days in a row, then immediately reported him for his reply. What?
Is this what motivates Sundar Pichai to work harder for Google? More money? Surely there's nothing he could want that he doesn't already have.
I understand it's insulting to be paid less than other CEOs, and I get that it's a way of keeping score.
All the same I think he's doing it for the power, the respect, the fame. Would he have walked away if the number was only $100m? Would that have been rational?
> Is this what motivates Sundar Pichai to work harder for Google? More money? Surely there's nothing he could want that he doesn't already have.
I think the confusion stems from mapping normal people's use of money onto people who have far more money than they can spend on themselves: they don't use it that way. You can use excess money to make things happen as you see fit.
Money enables you to do things in the world, and if you want to do things in the world, a few hundred million is very easily spent. I'm sure most people around here would have no trouble allocating that amount towards something they'd like to see happen or improved, and that's how a lot of money that someone doesn't feel they need gets used.