I'd consider myself a 'C++ programmer' - I've used it for years, it works very well for me for what I do with it, etc. However, what I find most frustrating is that almost nothing is intuitive. And there is always a reason why it is like that, I know. My favorite example is the erase/remove idiom.
Anyway, here is a question about C++. Last week I tried to implement a parser for a subset of CSV: one byte per character, fields separated by commas, no quoting, fixed line endings. The only requirement was speed. So I started with a simple C++-style implementation: read one line at a time with std::getline, split with boost::tokenizer, copy into a vector of vectors of string. But it was too slow, so I reimplemented it C-style - boom, first try, 30 times faster. Through some micro-optimisations I got it another 30% faster: copying less, adding const left and right, caching a pointer. So, if anyone is so inclined: how would you make the following more C++-ish while keeping the same speed? Using fopen and raw char* to iterate over the memory buffer are what I'd consider the non-C++ aspects of it, but feel free to point out other idiom violations...