Hacker News | frevib's comments

OP is about Github. Have you seen the Github uptime monitor? It’s at 90% [1] for the last 90 days. I use both Codeberg and Github a lot and Github has, by far, more problems than Codeberg. Sometimes I notice slowdowns on Codeberg, but that’s it.

[1] https://mrshu.github.io/github-statuses/


To be fair, Github has several orders of magnitude more users running on it than Codeberg. I'm also a Codeberg user, but I don't think anyone has seen a Forgejo/Gitea instance working at the scale of Github yet.

I don't think OP was making a value judgment or anything. It's just weird to say you won't consider Codeberg because you need reliability when Codeberg's uptime is at 100% and Github's is at 90%.

To be fair, GitHub has several orders of magnitude more revenue to support that. Including from companies like mine, who are paying them good money and get absolutely sub-par service and reliability in return. I'd be happy for Codeberg to take my money for a better service on the core feature set (git hosting, PRs, issues). I can take my CI/CD elsewhere; we self-host runners anyway.

I think the idea is that a Forgejo/Gitea instance should never have to work at anywhere near the scale of GitHub. Codeberg provides its Forgejo host as a convenience/community thing but it's not being built to be a central service.

I stopped using GitHub a long time ago. I don't understand why GitLab isn't the default alternative.

Scaleway for cloud: great UI, CLI, Terraform. Like you’ve never left Azure.

>great

>like you’ve never left Azure

hmmmmmm


Proton has mail, calendar, drive, docs, sheets, and more coming. Everything is done e2ee where possible. In the case of mail, when the recipient isn't on Proton, mail is indeed sent in plaintext.

Mail is stored e2ee on the server, so not even Proton can read it. Proton Mail has also made PGP very easy to use. It's Swiss-based and a foundation, not a corporation. They've done this so they cannot easily be bought.

It ticks most boxes in terms of privacy and security.


I only want it for the mail. Everything else I self-host anyway. It's just that mail is hard with bad actors like Microsoft demanding certain reputation standards (e.g. you have to send X amount of legit mail per month or you get blocked even if they've never seen spam from you!). But for mail the encryption is just useless in my opinion.

The encrypted mail storage adds no value for me because I pull all my mail from the server immediately anyway. It's just a big hassle to deal with that bridge. And when a mail comes in they have to handle it in plaintext (and also, the other party sees it which is 90% of the time microsoft or google or another bad actor). I just view email as a lost cause really.

The only thing I get from Proton is the VPN.


> Mail is stored e2ee on server

Exclusively, or do they keep caches around? I am asking since everything is clear text in the webmail. I wonder if they handle the rare case of proton to proton (encrypted) mail differently from regular unencrypted mail. I assume they have to decrypt a master key stored on the server with your password, and then decrypt every encrypted email on the fly on the server, or they have to send the master key to the client side.

Now think that through when you have thousands of searchable e-mails, sorted arbitrarily. I won't say it is impossible, but maintaining plaintext indexes rather than encrypted ones is really tempting.


Your post is full of misconceptions and mistakes.

Mail is stored e2ee exclusively. They've been summoned to hand over mail many times, which they weren't able to do. A quick search on Ecosia will find the articles.

They don't have a master key, or else the whole e2ee story is a fad, which it isn't. The Proton code is on GitHub so you can check how it works yourself. Part of the password is used to decrypt the data.

Search is done client side. You have to download a big search index in order to have proper search. The iOS app doesn’t support downloading the index so search is limited there.
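The client-side index idea can be sketched very roughly. This is a toy illustration only, not Proton's actual data structure: the client builds an inverted index (term to message ids) locally from decrypted mail, so the server never sees search queries.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Set;
import java.util.TreeSet;

// Toy client-side search: an inverted index built locally from decrypted
// mail bodies; lookups never touch the server.
public class ClientSearchDemo {
    public static void main(String[] args) {
        Map<Integer, String> mails = Map.of(
            1, "invoice for march",
            2, "dinner friday",
            3, "march trip photos");

        // Build the inverted index: term -> sorted set of message ids.
        Map<String, Set<Integer>> index = new HashMap<>();
        mails.forEach((id, body) -> {
            for (String term : body.split("\\s+"))
                index.computeIfAbsent(term, t -> new TreeSet<>()).add(id);
        });

        // Search happens entirely on the client.
        System.out.println(index.get("march")); // [1, 3]
    }
}
```

The obvious cost, as the comment notes, is that the whole index has to live on (and be downloaded to) the client.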

Please think and do some work before you reply.


> They don’t have a master key or else the whole e2ee story is a fad, which it isn’t

You can store an encrypted master key (like LUKS), download that key to the client and decrypt it there. Or you can have it decrypted in server memory, but only during an interactive session with the user. But that quickly turns into a fad, as you pointed out, which was exactly my question.
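The first variant (client-side unwrap) can be sketched in a few lines. This is a generic key-wrapping illustration, not Proton's actual scheme: the server stores only the wrapped master key, the client derives the wrapping key from the password and unwraps it locally.

```java
import javax.crypto.Cipher;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.GCMParameterSpec;
import javax.crypto.spec.PBEKeySpec;
import javax.crypto.spec.SecretKeySpec;
import java.security.SecureRandom;
import java.util.Arrays;

// Toy key-wrapping demo: the server never sees the password or the
// unwrapped master key, only the AES-GCM-wrapped blob.
public class KeyWrapDemo {
    static SecretKeySpec deriveKek(char[] password, byte[] salt) throws Exception {
        var kf = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256");
        byte[] kek = kf.generateSecret(new PBEKeySpec(password, salt, 100_000, 256)).getEncoded();
        return new SecretKeySpec(kek, "AES");
    }

    public static void main(String[] args) throws Exception {
        var rnd = new SecureRandom();
        byte[] masterKey = new byte[32]; // the key that encrypts the mailbox
        byte[] salt = new byte[16];
        byte[] iv = new byte[12];
        rnd.nextBytes(masterKey); rnd.nextBytes(salt); rnd.nextBytes(iv);

        // "Registration": wrap the master key with a password-derived key.
        Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
        c.init(Cipher.ENCRYPT_MODE, deriveKek("hunter2".toCharArray(), salt),
               new GCMParameterSpec(128, iv));
        byte[] wrapped = c.doFinal(masterKey); // this is all the server stores

        // "Login" on the client: derive the same KEK and unwrap locally.
        c.init(Cipher.DECRYPT_MODE, deriveKek("hunter2".toCharArray(), salt),
               new GCMParameterSpec(128, iv));
        byte[] unwrapped = c.doFinal(wrapped);
        System.out.println(Arrays.equals(masterKey, unwrapped)); // true
    }
}
```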

> The Proton code is in Github so you can check how it works yourself. Please think and do some work before you reply.

I asked a simple question, so that others could chime in about the exact details and limits. I don't understand why that was highly offensive to you, but I assume it is something like a Monday mood.


I love Proton but it's really low on usability since their calendar doesn't integrate with anything (by design). If you are used to managing a busy calendar it's quite a shock. And their docs and sheets apps are extremely minimal and basic.

And of course the recent allegations that they hand over your metadata on >90% of requests. See https://x.com/DoingFedTime/status/2030108076531995016


Calendar is basic indeed, but good enough for a family sharing a calendar. Proton is working on a new calendar app which is coming out this year.

> Proton has mail, calendar, drive, docs, sheets and more coming

As of today, there is no official Proton Drive client for Linux that I'm aware of. There is unofficial support via Rclone, but it is still beta and I try to avoid mounting via Rclone anyway. I recall that it wasn't a really convincing experience when I tried it with OneDrive.


Proton Drive for Linux (and Drive SDK) are announced for this year. Unfortunately not more specifically than “this year”.

> Proton Drive for Linux (and Drive SDK) are announced for this year. Unfortunately not more specifically than “this year”.

I hope you are right. I'm tired of waiting for a product I paid for while the company is working on new products instead of finishing the one that is halfway there.


Proton is moving out of Switzerland, because of privacy concerns and new laws...

Just fyi


They aren't. They have made their IT setup such that they can move very easily. If the Swiss government passes worse privacy laws, Proton said they will move to Norway or Germany.

uBlock Origin for iOS was released yesterday: https://hackernews.hn/item?id=44795825

If you are on another locale, search “ublock origin lite” with double quotes.


Still not as good as full fat ublock origin.


We introduced Kotlin at ING (15k IT colleagues) years ago and adoption rate is 11% and growing: https://medium.com/ing-blog/kotlin-adoption-inside-ing-5-yea...

Many larger companies in the Netherlands have moved away from Scala and Java and use Kotlin now. The switching costs are negligible and the benefits are big.

The problem with Kotlin is, you don’t want to go back to Java.


Hey. Do you folks have office in India or are you hiring remote for Kotlin? (esp. someone from Android bg but pretty decent at general Kotlin; and also at Java)


If you like to play around with io_uring networking, here is a very simple echo server example: https://github.com/frevib/io_uring-echo-server


It could also be that there just isn't enough demand for a non-blocking JDBC. For example, the PostgreSQL server does not cope very well with lots of simultaneous connections, due to (among other things) its process-per-connection model. From the client side (JDBC), a small thread pool would be enough to max out the PostgreSQL server. And there is almost no benefit to using non-blocking over a small thread pool.
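The "small thread pool" approach can be sketched like this. The `runQuery` helper is a hypothetical stand-in for a real blocking JDBC call, and the pool size is an assumed connection budget:

```java
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.stream.IntStream;

// Sketch: a fixed pool sized roughly to the DB connection budget is enough
// to saturate a PostgreSQL server from the client side.
public class SmallPoolDemo {
    static final int POOL_SIZE = 16; // assumed: matches max DB connections in use

    static int runQuery(int id) {
        // placeholder for: try (var conn = dataSource.getConnection()) { ... }
        return id * 2;
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(POOL_SIZE);
        List<Future<Integer>> results = IntStream.range(0, 100)
                .mapToObj(i -> pool.submit(() -> runQuery(i)))
                .toList();
        int sum = 0;
        for (var f : results) sum += f.get(); // blocks until each "query" finishes
        pool.shutdown();
        System.out.println(sum); // 9900
    }
}
```

With 16 threads against a server that won't usefully serve more than a handful of connections anyway, a non-blocking driver has little left to win.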


I would argue the main benefit would be that the threadpool that the developer would create anyway would instead be created by the async database driver, which has more intimate knowledge about the server's capabilities. Maybe it knows the limits to the number of connections, or can do other smart optimizations. In any case, for the developer it would be a more streamlined experience, with less code needed, and better defaults.


I think we're confusing async and non-blocking? Non-blocking is the part that makes virtual threads more efficient than platform threads. Async is the programming style, e.g. doing things concurrently. Async can be implemented with threads or with non-blocking IO, if the API supports it. I was merely arguing that a non-blocking JDBC has little merit, as the connections to a DB are limited. Non-blocking APIs are only beneficial when there are lots of connections, > 10k.

JDBC knows nothing about the number of connections a server can handle; it can only open connections until the server refuses more.

> In any case, for the developer it would be a more streamlined experience, with less code needed, and better defaults.

I agree it would be best not to bother the dev with what is going on under the hood.


They indeed optimize thread context switching. Taking the thread on and off the CPU becomes expensive when there are thousands of threads.

You are right that everything blocks; even going to L1 cache you have to wait about a nanosecond. But blocking in this context means waiting for "real" IO, like a network request or spinning-disk access. Virtual threads take away the problem that the thread sits there doing nothing while it waits for data, before it is context switched.

Virtual threads won’t improve CPU-bound blocking. There the thread is actually occupying the CPU, so there is no problem of the thread doing nothing as with IO-bound blocking.
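A minimal sketch of the IO-bound case (requires Java 21+; the sleep is a stand-in for network or disk IO): ten thousand virtual threads all "blocked" at once, which would be prohibitively expensive with platform threads, because a sleeping virtual thread is unmounted from its carrier.

```java
import java.time.Duration;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicInteger;

// 10,000 concurrently "blocked" virtual threads; while each one waits,
// it is unmounted and occupies no OS thread.
public class VirtualThreadsDemo {
    public static void main(String[] args) {
        var done = new AtomicInteger();
        try (var exec = Executors.newVirtualThreadPerTaskExecutor()) {
            for (int i = 0; i < 10_000; i++) {
                exec.submit(() -> {
                    try { Thread.sleep(Duration.ofMillis(100)); } // stand-in for real IO
                    catch (InterruptedException e) { Thread.currentThread().interrupt(); }
                    done.incrementAndGet();
                });
            }
        } // close() waits for all submitted tasks
        System.out.println(done.get()); // 10000
    }
}
```

Replace the sleep with a CPU-bound loop and virtual threads buy you nothing: the carrier thread stays pinned to the work either way.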


For disk IO it’s faster, there are many benchmarks on the internet.

For network IO, it depends. Only two things make it theoretically faster than epoll: io_uring supports batching of requests, and you can save one syscall compared to epoll in an event loop. There are some other things that could make it faster, like SQPOLL, but those could also hurt performance.

Network IO discussion: https://github.com/axboe/liburing/issues/536


> Network IO discussion: https://github.com/axboe/liburing/issues/536

I see an issue with a narrative but zero discussion at that link.

Furthermore, your io_uring benchmark being utilized in that issue isn't even batching CQE consumption. I've submitted a quick and dirty untested PR adding rudimentary batching at [0]. Frankly, what seems to be a constant din of poorly-written low-effort benchmarks portraying io_uring in a negative light vs. epoll is getting rather old.

[0] https://github.com/frevib/io_uring-echo-server/pull/16


That seems uncharitable.

The linked issue barely mentions frevib's echo server, and in the one place it does it's the fastest!

Further, they show that performance improves when using io_uring for readiness polling but standard read/write calls for the actual IO, which suggests io_uring_for_each_cqe does not explain the cases where epoll is faster.

> I've submitted a quick and dirty untested PR

That's not improving the situation much then - surely any performance fix should come with at least a rudimentary benchmark?


In my tests, for NVMe storage I/O I found io_uring was slower than a well-optimised userspace thread pool.

Perhaps the newer kernel is faster, or there is some subtlety in the io_uring queueing parameters that I need to tune better.


Maybe you're doing large I/Os whereas their benchmarks are doing small random I/Os (like 4K). Are you measuring IOPS or throughput?


Measuring IOPS of random, small reads (4kiB, O_DIRECT, single NVMe). (It's for optimising a database engine doing random lookups, but the benchmark is just random reads, no other logic.)

Just now I have tested 1,133 kIOPS with threads and 598 kIOPS with io_uring. The SQE queue depth for io_uring and the max threads for the thread test are set the same, 512.

I'd like to think this is due to a particularly well-optimised thread pool :-)


And for the Dutch TÜV equivalent (Consumentenbond), Tesla also ranks last. It is the only brand that scores under the usual pass mark, at 5.5/10.

https://www.consumentenbond.nl/test/auto-fiets-reizen/automa...

