Mongrel2 is also an excellent example of how libtask can be used.
On another note, alts and chans are underrated in libtask -- I wish the documentation promoted them more. To get an idea of what alts look like when used with a few chans, take a look at keyboardthread() from acme.
In 1980, barely two years after Hoare's paper, Gerard Holzmann and Rob Pike created a protocol analyzer called pan that takes a CSP dialect as input. [...] Holzmann reports that “Pan found its first error in a Bell Labs data-switch control protocol on 21 November 1980.” That dialect may well have been the first CSP language at Bell Labs, and it certainly provided Pike with experience using and implementing a CSP-like language, his first of many. [...] Holzmann's protocol analyzer developed into the Spin model checker and its Promela language.
How cool. Gerard (http://gerard.holzmann.usesthis.com) is now using Spin and several other verification tools to improve source code quality for space missions at JPL. I did not know about this connection of his work to CSP and, indirectly, to Go.
I think lthread (http://github.com/halayli/lthread) is the only coroutine lib that allows you to block inside a coroutine and do heavy computation without affecting other coroutines.
Example:
void my_function(void *arg)
{
    char buf[1024];

    /* fd and timeout are assumed to be set up elsewhere */
    int ret = lthread_recv_exact(fd, buf, 1024, 0, timeout);

    lthread_compute_begin();
    /* Block as long as you wish: run heavy computation on buf,
       access your stack variables, etc., without affecting the
       other coroutines. */
    lthread_compute_end();
}
Just to comment on your post: while being able to make blocking calls in a coroutine might be a nice feature, I think it defeats the whole concept of coroutines. I might be missing the point, but this kind of library is absolutely great for asynchronous network calls, and for that specific case the library already provides a whole set of functions like read/write/connect/accept etc...
I like the idea of being able to schedule coroutines on multiple threads though. In librinoo, I have the concept of a 'scheduler'. A scheduler handles a set of coroutines, and you can run one scheduler per thread (which I called 'spawning').
These libs are just awesome for quickly creating scalable network programs. Well done.
Failing immediately is certainly an easy way to handle unrecoverable errors. However, it seems as though your library assumes NDEBUG is never defined. Maybe you should make your own function rather than using assert()?
The asserts are in the .c files. Defining NDEBUG in your program won't change anything in lthread because the library was compiled without NDEBUG defined.
If you want to define NDEBUG when compiling lthread, that's up to you.
With NDEBUG defined, foo will never even be called. I've seen some funky bugs from this in the past, like people calling malloc inside an assertion... It works great during dev, then the release build has NDEBUG defined, and suddenly there's no test coverage... You get the picture.
A better idea would be to capture the return value in a variable and do the assert on the variable.
Yes, I understand that foo() won't be called if you define NDEBUG. But when compiling lthread, NDEBUG is not defined anywhere in the code or makefiles, so foo() is guaranteed to be called, and I want to assert on the returned value.
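To make the pitfall concrete, here's a minimal sketch (foo() here is a hypothetical function with a side effect, not anything from lthread):

```c
#include <assert.h>

/* Hypothetical function with a side effect that must run even in
   release builds. The counter lets us observe whether it ran. */
static int calls;
static int foo(void) { calls++; return 0; }

int demo_bad(void) {
    /* BUG when NDEBUG is defined: the entire expression, including
       the call to foo(), is compiled out. */
    assert(foo() == 0);
    return calls;
}

int demo_good(void) {
    /* Capture the result first; with NDEBUG only the check
       disappears, the call still happens. */
    int ret = foo();
    assert(ret == 0);
    (void)ret;  /* silence unused-variable warning under NDEBUG */
    return calls;
}
```

With assertions enabled both versions behave the same; the difference only bites in an NDEBUG build, which is exactly why it tends to slip through testing.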
I don't understand why coroutines, green threads, or any other N:1 user-level threading model are of much interest in today's multi-core processor world. To utilise the computing power in front of you, you really need to use 1:1 kernel-level threads in your program.
This is also the biggest problem I have with node.js which is inherently single-threaded (libuv's thread-pool aside) and only uses a fraction of the CPU power available on your computer.
[Edit]: A trivial but practical example which demonstrates the power of using your machine's full capabilities compared to just a fraction: if you use Make to build your project, try 'make -j n' where n is 2 x the cores on your machine, and observe how much faster the parallel build is compared to the serialised one.
The main use I've found for them is converting state machines to coroutines. Logic that is tedious, cryptic or unmaintainable with state machines becomes trivial with coroutines:
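As a rough sketch of the idea, here's a tiny generator built on POSIX ucontext(3) (not one of the libraries discussed here): the coroutine's position in the sequence lives in its local variables and saved context instead of an explicit state enum.

```c
#define _XOPEN_SOURCE 700   /* for ucontext on some platforms */
#include <stdlib.h>
#include <ucontext.h>

/* Hypothetical single-generator sketch: one coroutine context plus
   the main context, and one slot for the yielded value. */
static ucontext_t main_ctx, gen_ctx;
static int yielded_value;

static void yield(int v) {
    yielded_value = v;
    swapcontext(&gen_ctx, &main_ctx);   /* suspend generator, resume caller */
}

static void generator(void) {
    /* The "state machine" is just straight-line code: each yield
       remembers where we were via the saved context. */
    for (int i = 1; i <= 3; i++)
        yield(i * 10);
}

static char gen_stack[64 * 1024];

void gen_init(void) {
    getcontext(&gen_ctx);
    gen_ctx.uc_stack.ss_sp = gen_stack;
    gen_ctx.uc_stack.ss_size = sizeof gen_stack;
    gen_ctx.uc_link = &main_ctx;        /* return here when generator ends */
    makecontext(&gen_ctx, generator, 0);
}

int gen_next(void) {
    swapcontext(&main_ctx, &gen_ctx);   /* run generator until next yield */
    return yielded_value;
}
```

Written as an explicit state machine, the same loop needs a state variable and a resume switch; as a coroutine it's just a for loop.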
I agree though that someone needs to adapt coroutines to run concurrently. As long as there is no shared memory and you only use pipes to transmit messages (optimized with something like copy-on-write), there should be a way to partition the coroutines by the number of processors. I think it would work if you could switch the stack atomically.
One thing to also consider is that if you use nonblocking system calls, you get very high CPU utilization that you're just not going to get with threads and blocking I/O. I imagine users goof this up with node.js though. It would be nice to have pure nonblocking middleware...
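As a rough illustration of the nonblocking style (plain POSIX, not tied to any of the libraries here): a read that would block returns EAGAIN instead, so the caller can go service other work rather than parking in the kernel.

```c
#include <errno.h>
#include <fcntl.h>
#include <unistd.h>

/* Sketch: read without blocking; returns 0 when nothing is ready,
   so a scheduler can move on to another descriptor. */
int try_read(int fd, char *buf, int n) {
    int r = (int)read(fd, buf, (size_t)n);
    if (r < 0 && (errno == EAGAIN || errno == EWOULDBLOCK))
        return 0;   /* would have blocked; nothing ready yet */
    return r;
}

int demo(void) {
    int fds[2];
    char buf[8];
    if (pipe(fds) < 0)
        return -1;
    fcntl(fds[0], F_SETFL, O_NONBLOCK);          /* nonblocking read end */
    int r = try_read(fds[0], buf, sizeof buf);   /* empty pipe: returns 0 */
    close(fds[0]);
    close(fds[1]);
    return r;
}
```

In a real event loop you'd combine this with poll/epoll/kqueue so you only retry descriptors that are actually ready.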
It just depends what you're doing / what you want.
If you're doing N different things that are all actively using the CPU, then yeah, you want real threads. But if you maybe have a bunch of different things "in flight" that are mostly just sitting around waiting for something else to happen before they can make progress, then something like coroutines or green threads could make sense 'cuz of lower startup and memory overhead.
It can also be handy just for code structure. You want two things making forward progress at once, and instead of using callbacks and a bunch of variables to try to maintain state, you just use coroutines and the implicit state therein. But since you didn't go to real, executing-in-parallel threads, you don't have to worry about all the synchronization/race-avoidance junk that comes with that (though of course, you also don't get the multi-core speedup).
Maybe I misunderstand "lightweight", but this looks enormous to me.
Wrapping an atomic int in a platform-independent interface, with maximal inlining and compiler understanding of what it is, is the kind of thing I imagined from this, as an example.
Well, these are green threads; if you need to do actual concurrency, you should probably look elsewhere.
And "lightweight" looks like it refers to the minimal amount of code in the library. It's a handful of fairly short files, enormous by no standards...
Besides, if you're writing C, C11 has the atomics support you desire. Not to mention that most programmers should stick to higher level abstractions than atomics (like mutexes).
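For reference, a minimal sketch of what C11's <stdatomic.h> gives you, no wrapper library required; the compiler sees the atomic type directly and can inline the right instructions:

```c
#include <stdatomic.h>

/* C11 atomics: the type itself carries the atomicity, so there's
   no platform-specific wrapper to write or inline by hand. */
static atomic_int counter = 0;

int bump(void) {
    /* atomic_fetch_add returns the old value; add 1 for the new one */
    return atomic_fetch_add(&counter, 1) + 1;
}
```

The operation is safe to call from multiple threads concurrently; here it's shown single-threaded just to demonstrate the API.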