
FIFY. A year ago this would have been considered impossible. The software is moving faster than anyone's hardware assumptions.


We do something similar, but our sprints are somewhat flexible, more like versions. We take the features we want from the top of the most-needed list, split them into stories, and estimate them as you mentioned, brainstorming between devs and QA. Estimation is done by comparing the complexity of each new story against previously implemented stories, conservatively picking the higher average if the estimates vary. QA is involved to determine how long the feature will take to test, or sometimes how much uncertainty there is about whether it can even be tested automatically.

In the end we have a stable developer velocity metric and a really close estimate for each new version.


Closed, iOS only, invite only. Thanks.


Thanks for the heads up. You saved me some frustration and disappointment.


Orwell was a licensed critic; even if anti-imperialist at heart, he sold his soul to fight communism, putting imperialist issues aside.


Am I missing something here? A mobile SPA can be deployed to a device using tools like Capacitor, with the framework and all static content loaded into the app bundle. In that case it makes no (realistic) difference which framework is selected; what matters more is how background/slow transfers are handled with data-only API requests, possibly with hosted images. With background workers, a PWA can be built as well, streamlining installation even more.


Does that involve shipping a native wrapper for your web app?

If so, you have the extra cost, effort and bureaucracy of building and deploying to all the different app stores. Apple's App Store and Google Play each have various annoyances and limitations, and depending on your market there are plenty of other stores you might need to be in.

Sometimes you do need a native or native-feeling app, in which case a native wrapper for JS probably is a good idea; other times you want something lightweight that works everywhere with no deployment headaches.


As much as I agree about app deployment headaches, apps provide something a website cannot (except a PWA): the ability to do stuff offline and to log and register data that can be uploaded when the connection is re-established. In terms of user experience, launching the app, selecting new -> quote -> entering details -> save -> locking the phone without worrying or waiting, knowing that it will eventually get uploaded, is much more convenient than walking around the property with the phone to get better reception just to load the new quote page.

UX matters, and the user does not care whether there is a native wrapper or 500 kB of JS, as long as the job gets done conveniently and fast.


That is really good advice; copying data everywhere only makes sense if the data will be mutated. I only wonder why C-style strings were invented with zero termination instead of a varint length prefix; knowing the string length up front would have saved so much copying and so many bugs.
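
For illustration, a minimal sketch of what a length-prefixed string could look like in C next to a null-terminated one (the struct and field names are made up, not from any real library):

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Hypothetical length-prefixed string: the size travels with the data,
       so no scan for '\0' is needed to know how long it is. */
    struct pstring {
        uint32_t len;      /* byte length stored up front */
        const char *data;  /* not necessarily null-terminated */
    };

    int main(void) {
        const char *cstr = "hello";          /* length unknown until strlen() scans */
        struct pstring p = { 5, "hello" };   /* length known in O(1) */

        printf("C string length: %zu (via strlen scan)\n", strlen(cstr));
        printf("pstring length:  %u (read from the prefix)\n", p.len);
        return 0;
    }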


That reminds me of one of my favorite vulnerabilities. A security researcher named Moxie Marlinspike managed to register an SSL cert for .com by submitting a certificate request for the domain .com\0mygooddomain.com. The CA looked at the (length-prefixed) ASN.1 subject name, saw that it contained a legitimate domain, and accepted it, but most implementations treated the subject name as a null-terminated C string and stopped parsing at the null terminator.
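
A rough illustration of the bug class (the domain names below are placeholders, not the actual strings from the attack): a length-aware parser sees the whole subject name, while a C-string view stops at the embedded null byte.

    #include <stdio.h>
    #include <string.h>

    int main(void) {
        /* Placeholder subject name with an embedded null byte. */
        const char name[] = "evil.example\0good.example";
        size_t asn1_len = sizeof(name) - 1;   /* what a length-prefixed (ASN.1) parser sees */

        printf("length-aware view: %zu bytes\n", asn1_len);
        printf("C-string view:     \"%s\" (%zu bytes)\n", name, strlen(name));
        return 0;
    }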


Pascal strings have the issue that you need to agree on an integer size to cross an ABI boundary (unless you want to limit all strings to 255 characters), and what the prefix means is ambiguous if you have variable-length characters (e.g. Unicode). These issues were severe enough that Pascal derivatives all added null-terminated strings.

It took a while for languages to develop the distinction between string length in characters and in bytes that allows us to make it work today. In that time, C derivatives took over the world.


If we're specifying the size of a buffer we obviously work in bytes as opposed to some arbitrary larger unit.

Agreed that passing between otherwise incompatible ABIs is likely what drove the adoption of null termination. The only other option that comes to mind is a bigint implementation, but that would be at odds with the rest of the language in most cases.


It wasn't obvious to everyone at the time that string size in bytes and characters were often different. It was very common to find code that would treat the byte size as the character count for things like indexing and vice versa.


I'm not here to defend zero-terminated strings, but I'd note that length-prefixed strings would be equally bad for the OP's goal, or even worse, since you would need to inject integer prefixes instead of zero bytes.


I have exactly the opposite experience: Delphi was an awful UI and a verbose language experience, with hoops and tricks and a ton of Win32 rendering code needed for simple controls like a ComboBox with checkboxes. Yet the community was brilliant, always helpful, and SO questions were answered the same day!


Agreed about your points, not so much about paper drink straws.


Both Apple and stock Android are candidates for anti-monopoly regulation regarding their limited, vendor-locked backup APIs.

Enforcing a choice of backup solution would solve the problem of rogue countries like the UK meddling with privacy and security.

As with browser choice, backup-provider choice may end up being enforced, likely by the EU, as they have a good history of breaking up vendor lock-in.

Possibly an information/lobby campaign can be started and endorsed by some major online storage providers?


I agree, though with Android an argument can be made that Samsung and other manufacturers can offer alternatives if they want to (they have their own stores and their own platform keys).

I don't think there's a large lobby for the backup-app industry, but a lawsuit against Apple/Google/Samsung should be easily won here.


There are examples of cat and cp using io_uring. What are the chances of io_uring being utilised by standard commands to improve overall Linux performance? I presume GNU utils are not Linux-specific, hence such commands are written for generic *nix.

Another thing: I could not find a benchmark with io_uring; that would confirm the benefit of moving from epoll.


>Another thing: I could not find a benchmark with io_uring; that would confirm the benefit of moving from epoll.

One of the advantages of io_uring, unrelated to performance, is that it supports non-blocking operations on blocking file descriptors.

Using io_uring is the only method I recall to bypass https://gitlab.freedesktop.org/wayland/wayland/-/issues/296. This issue deals with having to operate on untrusted file descriptors where the blocking/non-blocking state of the file descriptions might be manipulated by an adversary at any time.
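
A minimal liburing sketch of the idea (assumes liburing is installed and linked with -luring; the file path is a placeholder): the read is submitted to the ring and harvested from the completion queue, so the submission does not depend on the fd's O_NONBLOCK state.

    #include <liburing.h>
    #include <fcntl.h>
    #include <stdio.h>

    int main(void) {
        struct io_uring ring;
        struct io_uring_sqe *sqe;
        struct io_uring_cqe *cqe;
        char buf[4096];

        if (io_uring_queue_init(8, &ring, 0) < 0) return 1;

        int fd = open("/etc/hostname", O_RDONLY);   /* placeholder file */
        if (fd < 0) return 1;

        sqe = io_uring_get_sqe(&ring);
        io_uring_prep_read(sqe, fd, buf, sizeof(buf), 0);
        io_uring_submit(&ring);

        /* Waiting here is just for brevity; an event loop would peek the
           completion queue or wait with a timeout instead. */
        io_uring_wait_cqe(&ring, &cqe);
        printf("read returned %d\n", cqe->res);
        io_uring_cqe_seen(&ring, cqe);

        io_uring_queue_exit(&ring);
        return 0;
    }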


So does the FIONREAD ioctl, but it's not a general solution. (According to https://hackernews.hn/item?id=42617719, neither is io_uring yet.) Thanks for the link to the horrifying security problem!
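
For reference, FIONREAD only tells you how many bytes are immediately readable (nothing about writes), e.g.:

    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <unistd.h>

    int main(void) {
        int avail = 0;
        /* Reports how many bytes can be read right now without blocking;
           it says nothing about whether a write would block. */
        if (ioctl(STDIN_FILENO, FIONREAD, &avail) == 0)
            printf("%d bytes readable without blocking\n", avail);
        return 0;
    }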


I thought for sure this was wrong, but when I actually checked the docs, it turns out that `RWF_NOWAIT` is only valid for `preadv2`, not `pwritev2`. This should probably be fixed.

For sockets, `MSG_DONTWAIT` works with both `recv` and `send`.

For pipes you should be able to do this with `SPLICE_F_NONBLOCK` and the `splice` family, but there are weird restrictions for those.
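
A small sketch of the per-call flags mentioned above (Linux-specific; the fds are placeholders and error handling is omitted):

    #define _GNU_SOURCE
    #include <sys/uio.h>
    #include <sys/socket.h>

    /* Read without blocking even on a blocking fd: fails with EAGAIN
       instead of sleeping if the data is not readily available. */
    ssize_t read_nowait(int fd, void *buf, size_t len) {
        struct iovec iov = { .iov_base = buf, .iov_len = len };
        return preadv2(fd, &iov, 1, -1, RWF_NOWAIT);
    }

    /* Non-blocking for this call only, regardless of O_NONBLOCK on the socket. */
    ssize_t recv_nowait(int sock, void *buf, size_t len) {
        return recv(sock, buf, len, MSG_DONTWAIT);
    }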


Also useful for things like SPI, where only a blocking user-space API is available.


GNU coreutils already has tons of Linux-specific code. But it would be a bit of a kernel fail if io_uring were faster or otherwise preferable to copy_file_range for cp (at least for files that do not have holes).
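
Something like the following is what cp can do on Linux today (a sketch with simplified error handling; the kernel moves the bytes without a round trip through user space):

    #define _GNU_SOURCE
    #include <sys/stat.h>
    #include <unistd.h>

    /* Copy one already-open file to another with copy_file_range(). */
    int copy_fd(int in_fd, int out_fd) {
        struct stat st;
        if (fstat(in_fd, &st) < 0) return -1;

        off_t remaining = st.st_size;
        while (remaining > 0) {
            ssize_t n = copy_file_range(in_fd, NULL, out_fd, NULL, remaining, 0);
            if (n <= 0) return -1;   /* error or unexpected EOF in this sketch */
            remaining -= n;
        }
        return 0;
    }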


Not at all; with io_uring, you can copy multiple files in parallel (and in fewer syscalls), which is a huge win for small files.


On a hard disk, copying multiple files in parallel is likely to make the copy run slower because it spends more time seeking back and forth between the files (except for small files). Perhaps that isn't a problem with SSDs? It seems like you'd still end up with the data from the different files interleaved in the erase blocks currently being written instead of contiguous, which seems like it would slow down all subsequent reads of those files (unless they're less than a page in size).


> On a hard disk, copying multiple files in parallel is likely to make the copy run slower because it spends more time seeking back and forth between the files (except for small files).

Certainly not; it's likely to make it run faster, since you can use the elevator algorithm more efficiently instead of seeking back and forth between the files. You can easily measure this yourself by comparing wcp, which uses io_uring, with GNU cp (remember to empty the cache between each run).


Hmm, that's interesting! I don't have a hard disk handy right now, unfortunately.

