Python isn't even remotely close to the popularity or omnipresence Perl once had, and Python couldn't reach the peak of nearly anything it tried. For serious app dev Java/Golang won, the web is ruled by JS/React, and C/C++/Rust rule the places where performance is critical. Python's refusal to take text and OS work seriously has largely rendered it useless for serious scripting work. People use shell, and old timers use Perl.
In the AI world, Python use is mostly invoking a series of framework APIs (a few hundred lines of code); for data manipulation you still have to use shell or Perl.
The internet, telecommunications, and bioinformatics industries, and entire generations of backends, were built entirely in Perl. And by many definitions this is still the case.
>>Note that you need to be able to infinitely divide your stake for this to work out for you all the time.
This is what most people discover: you need to play every toss of the coin (i.e., tosses over a very long period of time), in series, exactly as the strategy prescribes, for it to work. You can't miss a toss. If you do, you are missing out on either a series of profitable tosses or that one toss where you make a good return. If you draw the price-vs-time chart, like a Renko chart, it looks pretty much like the chart of any real instrument.
Here is the catch. In a real-world stock/crypto/forex trading scenario, that means you basically have to take nearly every trade. Otherwise the strategy doesn't work as well.
The deal about tossing coins to conduct this experiment is that you don't change the coin during the experiment. You don't skip tosses, you don't change anything at all. In trading terms this means: you can't change the stock you are trading (else you would miss the phases where the instrument performs well, and will likely keep landing in situations where other instruments perform badly), you can't miss trades, and of course you have to keep at it for a very long period for it to work.
Needless to say, this demands insane consistency. Doing it day after day can also be draining on your mental and physical health; where there is money there is stress. You can't do this for long, basically.
While I don't agree with nearly anything you stated, I enjoyed your prose: I suppose you left out words here and there as a metaphorical proof of your claim that you can't miss a single toss, didn't you?
>>I suppose you left out words here and there as a metaphorical proof of your claim that you can't miss a single toss, didn't you?
You must always practice in real-world conditions. Notice that in the experiments conducted in programs, you take the tosses as they come, even thousands of them, one after the other, without missing a single one. Unless you can repeat this in a live scenario, it is not a very useful strategy.
The Kelly criterion is for people who plan to take a large number of trades over a long period of time; the idea is to ensure failures are not fatal (this is what lets you keep playing for long). As it turns out, if you play for really long, even with a small edge, small wins tend to add up to something big.
If you remove all the math behind it, it's just this: if you have a small edge in a game of bets, find how much you can bet such that you don't lose your capital. If you play this game for a long time, like really, really long, you are likely to make big wins.
You are conflating 2 concepts: a) that the reality converges to what the theory predicts only after a great number of samples; b) that if you skip some events the results will vary.
Now, b) is false. You can change the code to draw 3 random numbers each time, discard the first 2, and consider only the third one; the results won't change.
Instead, a) is generally true. In this case, the Kelly strategy is the best strategy for playing a great number of repeated games. You could play some games with another strategy and win more money, but you'll find that you can't beat Kelly in the long term, ideally as the repetitions approach infinity.
>>Now, b) is false. You can change the code to extract 3 random numbers each time, discard the first 2 and only consider the third one, the results won't change.
Might be true in theory. In practice, this is rarely the case.
Take trading, for example. What happens (or is about to happen) depends on what just happened. A stock could be overbought/oversold, range-bound, moving in a specific direction, etc. This decides what's about to happen next. Reality is rarely ever random.
I'm sure if you study a coin toss, for example, you can find similar patterns. E.g., if you have a tired thumb, I'm pretty sure it affects the height of the toss, affecting the results.
>>Instead a) is generally true. In this case, the Kelly strategy is the best strategy to play a great number of repeated games.
Indeed. But do make it a point to repeat the exact sequences of events you practiced.
The linked paper does not state that; it states that tossed coins tend to be caught on the same side they started on slightly more than half the time. The results explicitly exclude any bouncing (which will happen if a coin lands on a hard surface).
The paper does discuss coins allowed to land on a hard surface; it is clear that this will affect the randomness, but not clear if it increases or decreases randomness, and suggests further research is needed.
qw, qr, references, multiline regexes, a far more advanced OO ecosystem, Data::Dumper, map, grep, pack/unpack, DBM, ``, top-class unicode handling, given/when, functional programming, etc.
Won’t nitpick, but how much of what’s useful here is unique to Perl? You could make a similar list for most languages and barely scratch the surface.
>>top class unicode handling
This one I remember struggling with, AFAIR due to an impedance mismatch between the IO and PerlIO layers, plus use utf8. Not sure about now, but Perl was anything but unicode-for-the-clueless.
Moose is definitely what I go for when I have to use OO with Perl; it's more than a decade old and stable for production use cases.
It's based on Class::MOP, which is in turn based on the Meta Object Protocol, the same concept that CLOS (the Common Lisp Object System) is built on. It's always nice to have CLOS goodness in Perl. For example, the method modifiers described at https://metacpan.org/dist/Moose/view/lib/Moose/Manual/Method... (before, after, around) remove the need for design patterns to a large extent.
But of course the more you explore, the more you discover the possibilities with this.
To a large extent I think Perl brings OO and the functional paradigm together in a far better package than Python does.
Perhaps HOP needs a new chapter for OO given how few people are aware of this.
OO in Perl is lower level compared to Java or C++, so e.g. instead of having a class construct, you have to simulate a class - typically by binding (blessing) a data structure to a namespace. Then any functions in the namespace become methods that can be called via the data structure (object).
In practice, this has turned out to be a mixed blessing because of how tedious it is to do this repeatedly. So over the years there have been many libraries created to make this easier, each with different features.
There's currently work underway to modernise Perl's built in OO to address these problems.
On the other hand, this makes some things easier, e.g. Design by Contract can be added to Perl just by writing a library.
Another example, adding traits to PHP required updating PHP itself, whereas in Perl there are libraries to do that.
given/when are deprecated and will be removed in a future release.
References are a PITA - cumbersome to use, and they make code less readable.
Perl doesn't have first class functions (you can pass or return functions via references, but that's cumbersome and less readable compared to languages with better FP support).
>>given/when are deprecated and will be removed in a future release.
given/when won't be removed; there are too many backwards-compatibility issues.
>>References are a PITA - cumbersome to use, and they make code less readable.
Depends what you mean by readable, though. In Python you can't tell what's a variable, what's a list, and what's a dictionary just by looking. One could claim the whole language is unreadable, since variables are needed at every step.
>>but that's cumbersome and less readable compared to languages with better FP support
Sure, let them add all the other practical goodness of Perl first; then we can use them.
% perl -v | head -2
This is perl 5, version 40, subversion 0 (v5.40.0) built for x86_64-linux
%
% perl -E 'use feature "switch"; my ($x, $y); given ($x) { $y = 1 when /^abc/ }'
given is deprecated at -e line 1.
when is deprecated at -e line 1.
This is also mentioned in the docs:
Smartmatch is now seen as a failed experiment and was marked as deprecated in Perl 5.37.10.
This includes the when and given keywords, as well as the smartmatch operator ~~.
The feature will be removed entirely in the Perl 5.42.0 production release.
Perl is one of those power tools (as recommended in Let Over Lambda, together with Vim and Lisp): if you learn it well enough to be good at it, and early enough in your career, you really have a mad-productivity workhorse. It's really one of those things that save tons of time and effort. A bit like an O(1) hack of the productivity world.
Perhaps at the core of this is just how many of what we call regular programming tasks are really just knowing how to work with text and a Unixy operating system. Perl is just unbelievably awesome at this. Another big edge of Perl is its commitment to backwards compatibility: most people who learned Perl early in their careers, decades back, can still use it on an on-demand basis. It's universally installed, fast, and scripts written in the earliest days continue to run with zero changes, despite endless OS upgrades.
Early in its timeline Perl was full of Unix hackers, and you learned a lot from them; this culture further bolsters Perl's productivity credentials. Not a week passes that I don't have to whip up a Perl script to do something for which someone has a two-people-and-a-month budget. And they are often amazed it can be done this quickly.
CPAN is another big edge Perl has. No other language apart from the JS/npm ecosystem has anything close, and those two are basically competing in very different domains, so to that end CPAN really doesn't have competition to date.
If you are looking to build using Perl, do check out the Camel book and HOP by mjd.
I'm not aware of anything that can be as effective at text processing as perl. While I don't do much perl anymore, if I have to wrangle text files in all kinds of ways, there is no comparison, perl rules.
Keep Higher Order Perl for later. It's great but far too much for a 2nd book when getting started. The camel book Programming Perl is still a great start which adheres to the layered aspect. Don't read all of it, just a few parts as needed. I recommend to add Perl Cookbook early, to look things up as you go - it gives idiomatic-ish ideas of how you might do X in Perl. Perl comes with a mountain of very complete and useful man pages: you start with looking things up in the camel book, then transition to looking them up in the man pages.
Just curious, what does Perl give you over Ruby if anything? I learned Ruby first, always heard it was strongly inspired by Perl (as well as Smalltalk), and it provides all the benefits you mentioned, I think.
It's not a race. Perl got there fast by basically not giving a damn about anything.
Perl (talking about Perl 5, don't know anything about Raku, don't want to know anything about Raku) simply treats strings as sequences of numbers without requiring numbers to be in the 8-bit range. This makes it easy to say that those numbers could in principle be Unicode codepoints. The problem is that the actual assumptions about what those numbers represent are implicit in programmers' minds, and not explicit in the language, much less enforced in any way. The assumptions shift as strings are passed between different libraries, and sometimes different programmers working on the same codebases have different ideas. Perl will happily do things like encode an already-encoded string, or decode an already-decoded string, or concatenate an encoded string with an unencoded string, or reverse a utf-8 string by reversing the encoded byte sequence, etc. etc. So it's easier in Perl than in any other language I've ever used to end up with byte salad.
It'll take you, let's say, the first few years of your Perl career, involving painstaking testing of everything you do with nontrivial characters, to truly grok all of that. But the problem is: You're not alone in the world. If you work on a nontrivially-sized project in the real world that heavily utilizes Perl, then byte-salad will be what you will get as input. And byte-salad will be what you will produce as output. It is frustrating as hell.
Unicode was a pretty painful matter in the transition from Python 2 to Python 3, but Python's approach means that the Python ecosystem is now pretty usable with Unicode. This is not the case with Perl at all.
I've had the complete opposite experience. If I need to do more with non-ASCII text than treat it as an opaque blob, I still haven't found anything better or easier than Perl to do it in.
Node/NPM definitely is on-par with CPAN. But pip isn't even close.
The real deal with Perl is you not only get bleeding-edge stuff, you get all kinds of obscure and rare libraries. It's not that you can't do that in Python; it's just that the culture in the Python world isn't made up of people who think about those problems, or even in that dimension. It's a great language for writing all varieties of apps.
Perl is the OG hacker's language. Python is awesome too, but it's not really what Perl is.
Most of the bad rep Perl gets is because programmers who only interact with http endpoints and databases tend to not understand where else it could be useful.
Even the regexes: they sound like a pain until you have to do non-trivial string manipulation tasks. Then you discover regexes do it a lot better than cutting and slicing strings some hundreds of times.
I'll be frank: I think this idea that ${faveLang} is for misunderstood geniuses who truly understand computers where mainstream languages like Python are for dunces who only know how to glue together APIs is a large part of why such languages as Perl are nearing extinction. It turns out that there are people working on challenging problems in domains you've never heard of in Python -- and pretty much every other language. Give it a rest
In the real world, the ability of a lone genius to cobble together a script in an hour is actually not that much of an edge -- it is more important for people to write something that others can understand and maintain. If you can do that in Perl, great, and if writing Perl makes you happy: also great. But beware that smug elitism turns people off, it kills communities and also tends to signal a pathological inversion of priorities. All this should be in service to people, after all
I wonder how much of catering to the lowest common denominator / being a team player is an internalization of corporatism's reduction of the worker to a fungible, interchangeable cog.
As a solo dev it is a massive advantage to use sophisticated languages and tools without worrying if the dumbest person on my team can use them. It’s a strategic advantage and I run rings around far larger companies.
I agree with you that it is sad there isn't more diversity in languages and tools, and that generally organizations are using the same terrible slop. We could have such nice things
You lose me with the smugness. Make no mistake, you aren't smarter or better than someone else purely by virtue of your willingness to hack on BEAM languages or smlnj or Racket or whatever languages you like.
There are probably people smarter than you working in sales at $bigcorp or writing C# on Windows Server 2008 at your local utility. Novice programmers often have an instinct to rewrite systems from scratch when they should be learning how to read and understand code others have written. Similarly, I associate smugness of this form with low capacity for navigating constraints that tend to arise when solving difficult problems in the real world. The real world isn't ideal, sorry to say
That sounds like post facto rationalization, sour grapes, and perhaps a bit of learned helplessness. To paraphrase you ‘We can’t have nice things because nice things are in reality bad and unrealistic. People who do have nice things are not special.’
I could readily believe that your stated reality is true of the majority of solo devs, but it’s not true for me or those that I know. I understand that my sampling is biased and probably not the normal experience. I don’t seek to show off for my anonymous HN account and instead wanted to say that sometimes we can have nice things and it can work out successfully.
It's not learned helplessness et al, just a plea to drop the smug elitism if you want people to take you seriously. I actually want nice things, I hate writing brittle systems in languages that offer no meaningful guardrails, and setting up Rube Goldberg contraptions to get a poor approximation of e.g. basic BEAM runtime functionality.
Any success I have had in getting very boring companies to adopt nice things at all has not come from insulting people's intelligence and acting like I'm the smartest person in the room. I despise this kind of elitism that is rampant in certain technical communities. It turns people off like nothing else and serves no purpose other than to stroke your own ego -- it's pointless meanness.
I worked in applied research at a few very big companies and did have a measured amount of success getting some advanced tech adopted, so I know what it takes to move the needle. My lesson, and one I wish I had learned sooner, was that the effort was not worth it. I had assumed that the lack of adoption was due to lack of exposure to ideas, but having exposed these ideas to a large number of people, I reluctantly came to the conclusion that it was more a lack of innate intelligence. I honestly wish it weren't so.
My goal has not been to fix big companies for a long time; I was just musing on the rationale and commented to see what other people think on the topic.
> reduction of the worker to a fungible interchangeable cog
I see this trope a lot on HN, and I don't understand it. All of the highest skilled developers that I have met are the quickest to adapt to new projects or technologies. To me, they look like a "fungible interchangeable cog".
And every solo dev that I ever met thinks they are God's gift to the world -- "tech geniuses". Most of them are just working on their own Big Ball o' Mud, just like the rest of us working on a team.
If only the highest-skilled devs can quickly adapt to new projects, then they are not interchangeable after all.
Your sampling of solo devs could very well be biased, similarly so could my sampling. Not working on a big ball of mud is a massive perk of being solo dev. It’s my company and I’ll refactor if I want to.
>>In the real world, the ability of a lone genius to cobble together a script in an hour is actually not that much of an edge
Any macro/multiplier is that way. You don't miss it, until some one shows you how to do it.
In the last six months alone, the scenarios where I had to call upon Perl to slam-dunk some insanely laborious thing number in the dozens.
It's just that if you don't know this, or don't know it exists, you grow up comfortable doing the work manually.
The sheer number of times I have seen someone spend half a day on things that can be done with a Vim macro in seconds is beyond counting at this point.
Sometimes a language/tool gets to evolve with the operating system right from its birth. The relationship between Unix, Vim, and Perl is that way.
This is a unique combination, which I don't think will ever change. Unless of course we move away from Unixy operating systems to something entirely new.
You are missing my point. For transparency, you are talking to someone who writes Racket in emacs on my Linux desktop, has used Rust macros to clean up awful code in widely used open source packages, and regularly generates code for all manner of purposes in lots of different languages. I know the slam dunk feeling of generating exactly the code that will topple a problem -- and I also know it's not actually that big an edge!
It matters little that you can generate code in an hour that would take your colleague days. It is nice for you and it provides a short lift for your team, but in the limit what matters is maintainability. Peter Hintjens writes fondly of metaprogramming and code generation, but also warns that it makes it difficult for others to work with you, and it's easy to fall into the trap of building abstractions for their own sake. The "edge" in technical work comes from seeing both the forest and the trees, and missing that technical work is in service of humans, first and foremost.
I am glad you enjoy writing Perl, and I like encountering people passionate about it in my work. But I still think there are good reasons why it's in decline, and Perl users should reflect more on that rather than assuming people aren't using it because they are dumb / not technical enough / don't think about problems as creatively or deeply.
I personally believe there are 2 types of Perl - development which you’re talking about here, and sysadmin/devops.
For the first category you’re right - these days there’s not much difference between Perl vs Java vs Rust because abstractions are the same.
But where OP’s smugness comes from I totally agree when applied to the second category - there’s an ocean of a difference between using tools like Perl, awk, sed, jq, and bash to transform Unix command inputs and outputs that it really is a massive superpower. Try doing a day’s work of a Unix admin using these tips compared to writing Java to do it. Oceans I say!
But I don’t think their being a basement-dweller genius, as you put it, is because of Perl. It’s smugness for the same reason BOFH Unix sysadmins got their reputation: their tools are literal superpowers compared to GUI tools etc., and they can’t believe everyone doesn’t use them!
I use nearly all of these tools with the exception of Perl. I go to great lengths to make sure I have access to them because they're so critical for quality of life. I love them and I understand why people love them.
Here's the reason: these languages/tools are tactically very powerful. Tactics are immediate and decisive. Tactics are effectively tricks in the sense that if you can "spot the trick", you can -- with a tiny amount of work -- reduce a formidable problem to virtually nothing. Having a vast toolkit that facilitates such tricks is incredibly powerful and makes you appear to have superpowers to colleagues who aren't familiar with them.
But tactics are definitionally short-term. You deploy them in the weeds, or at least from the forest, (hopefully) never from the skies. Tactics aren't concerned with the long term, nor how things fit together structurally. They are not concerned with maintainability or architecture.
This is why it isn't actually that important that you can cobble together a 15 line Perl script in an hour to do something that would take any of your colleagues a week. Years from now, when you are gone and someone runs into a similar but slightly different problem, someone will find your Perl script, not understand it, and rewrite it all in Java anyway. Or assume it's too hard and give up. Maybe they will adapt your Perl script, but more likely it'll be seen as a curiosity
It sucks, because there is beauty in that approach of solving problems. As I said in another comment, I wish there were more diversity in tooling and languages. But at the same time, it's important to consider that people are fundamental. All of this is in service to that. And I personally would rather build software that people use over the long term.
I think there's a deeper truth here. Perl was notoriously difficult to make C language extensions for. Languages like Ruby and Python really took off because they had a much more approachable and useful C interpreter API, which, honestly, made gluing various library APIs into the language far easier. This was the key to turning a very slow, memory-hungry scripting language covering a fraction of POSIX into a useful domain-extension and embedded language.
Ruby did better at the domain extension part and Python was better at the embedded language part. Perl 6 went entirely the other way. I think this was the real driver of popularity at the time. This also explains why gem and pip are so different and why pip never matured into the type of product that npm is.
True, but I don't remember it being nearly as convenient to distribute those modules, as it still required the whole build environment on the target, and you still had to deal with Perl's exceptionally efficient but ancient and cumbersome object and type system.
XS wasn't _that_ bad once you got the hang of it, but I do remember Ruby 1.6 coming out and being blown away by how improved the experience of creating distributable C modules was. The class system was flat and easy to access, you could map Ruby language concepts into C almost directly, and the garbage collection system was fully accessible.
Perl 6 started being discussed right around this time, and I think it was clear in the early years that it wasn't going to try to compete on these grounds at all, instead focusing on more abstract and complex language features.
Anyway, even seeing your name just brings me back to that wonderful time in my life, so don't get me wrong, I loved Perl. But that was my memory of the time, and why I think I finally just walked away from Perl entirely.
I don't know what caused this reaction. Was the OP being smug or elite? I did not read it that way. If anything, in my experience, C++ and Rust folks are way more smug/elite compared to Perl hackers.
In my experience, the biggest problem with Perl is readability; Python crushes it. Without list comprehensions, Python is also very slow in for loops. But no worries: most people writing Python don't care too much about speed, or they are using C libraries like NumPy, Pandas, or SciPy. I write this as someone who wrote Perl for years, personally and professionally. Later in my career I came to Python, and realised it was so much easier to read and maintain large code bases compared to Perl. To be fair, much like C, it is possible to write very clear Perl, but people quickly get carried away using insane syntax. With Python, the whole culture, from the bottom up, is about readability, simplicity, and accessibility. I think my only "gripe" about Python is that there are no references like Perl's, but you can fake it with single-item lists.
Probably the lines "Its just that the culture in Python world isn't made up of people who think about those problems or even in that dimension."
and "Most of the bad rep Perl gets is because programmers who only interact with http endpoints and databases tend to not understand where else it could be useful."
> Node/NPM definitely is on-par with CPAN. But pip isn't even close.
I'll say this as someone who still does more than 80% of his backend work in Perl: This is not true. I wish it was.
CPAN was awesome once. Now it's mainly old. Yes, you will find obscure things there that you won't find for Python. At the same time, anything interfacing with modern stuff is often only 40% done on CPAN, or not done at all, compared to the Python, PHP, or JavaScript ecosystems. Not talking about data science, where Python gained a huge lead: simply look at how much support CPAN gives you nowadays for interfacing with, for example, current web API versions, or third-party files like docx, pdf, excel, odt. If there's support for things at all, it is far, far behind what libs in other ecosystems have to offer, most of the time.
It simply shows that the crowd implementing business applications went elsewhere, so anything in that area seems stuck in the 2000s-to-2010s on CPAN.
I did one small project with Perl but wasn’t excited. I was hoping for bash 2.0 but the shell integration wasn’t great. Seems like python is a similar feature set, less exotic, and has great libraries.
Anyway, what did I miss? Anything I should take a second look at?
This encapsulates every conversation I have ever had with someone who tried Perl 6: Tried it, was blown away with [some aspect of it], never found anything practical to do with it, never used it again.
If you're into Lisp you're probably not missing that much. The main reason I use Perl over Lisp is that Perl is preinstalled nearly everywhere, whereas Lisp is not.
LLMs do Perl well enough, and since 90% of the productivity with Perl comes through small snippets or one-liners (anything bigger is better served by uv init --script), you can just ask for the bit you want when you need it. You can even ask the AI to explain the hieroglyphs.
I work with a relative who is in the real estate space here in India and often deals with land-shark mafia. The biggest thing I learned from him: to win in these situations, don't fear the consequences.
You need to have ice water flowing in your veins if you are about to mess with something big. At worst you need to have benign neglect for the consequences.
Often fear is the only instrument they have against you, and if you are not afraid, they will likely not contest further. Threats of jail, violence, or courts are what they use to stop you. In reality most people are afraid to go to war this way; it's messy and often creates more problems for them.
It's not exactly difficult. Or more precisely, it can appear difficult because people don't show how the work is done.
You see this even among the Math people. It appears as pure magic. But in reality it's mostly understanding what the axioms and rules are, then starting at some point and making the smallest atomic change possible to a thing and seeing if it's consistent with the rules; else move on to making new change(s) and test whether they work. This can resemble working down trees of changes.
In the end it's basically having the patience to sit for hours, and then days full of such hours, and work things out on paper. At the end you will find yourself with lots of paperwork: trees of decision branches that you made while making small changes.
Think of it like the trail of a large change log. And forks.
When people see this from the outside, they only see the start point and the end result, not all the decision trees. So there is a tendency to imagine the Math guy thought up a perfect solution in the exact number of steps. The paperwork is hidden, and it all looks like pure magic.
When it comes to chess, more top people need to come out and show how they work through their decisions: their prep work, their thought log, etc.
Long story short, they are showing you a magic trick, like an excellent display of sleight of hand. Once you see the practice and the trick is revealed, it doesn't appear as difficult.
I had to quit because I found it extremely frustrating. I could hardly ever win. And when I did it was from massive blunders, not from anything I did. I never had an issue with losing in other games, I'm not a competitive type that has to win. But chess made me feel like I had the illusion that I could win but in fact it was almost never possible. I also hate the characteristic that if you make one mistake you lose. You can never recover, whereas in tennis or basketball or something you can be ahead and behind and still have a chance. Chess is like that board game Operation, you make one bad move and it nullifies everything you did before that moment.
Exactly. I also believe a lot of these magic-like fields sound so 'meh' in the current era largely because the mechanical processes behind achieving results have been revealed.
>>What is difficulty other than priori repetition of a sequence?
It's not easy, but it's not where it was either.
Only a decade or two back, these things were restricted to nerds and were somewhat like rare occult knowledge. You had to be part of some club to even participate.
While you still need to practice a lot to compete at the upper levels, it no longer has a magic-like appeal.
I once randomly typed 'How to think like a chess grandmaster' or something to that effect into YouTube, and it was interesting to see how they went about playing.
There are patterns: how to start (openings), endgames, well-defined patterns of movement. You just have to read enough books, and play enough games, to have them hard-burned into your memory. They are not really thinking the way an ordinary person imagines. By and large, a chess GM is a database of chess games, and most of your moves come from it.
The second part is having a strong internal monologue, which is basically a way of saying to oneself (silently):
1. What happens if I make this move? Where is my game headed? Where is the opponent's game headed?
2. Does my move fit into the patterns I already know?
3. Are my pieces in trouble? How do I save them? Or should I?
4. If I make this series of moves, I can checkmate the opponent in k moves.
5. Which move do I make to take this opponent's piece?
6. Are there any obvious traps I can spot?
etc. etc.
Like a never-ending monologue/internal chatter: what if I do this, what if I do that? What are the knowns and unknowns? Then the same questions from the opponent's perspective. You need to develop these skills, and the ability to work through this fast.
There are already broad moves that are supposed to be made: you develop the knights as early as you can, you don't get stuck in piece traffic jams, the rook is supposed to be an endgame piece. White gives you an edge, as you get to move first; then there's castling, etc. etc.
Mostly it's reading a lot, and talking to oneself while playing a lot.
As someone who has written and maintained large Perl programs at various points in my career, I can say there is a reason why people do this: languages like Java and Python work fine when interfaces and formats are well defined and you have essentially zero OS interaction. That is, you use JSON/XML/YAML or interact with a database or other programs via HTTP(S). This creates an ideal situation where these languages can shine.
When people do large quantities of text and OS-interaction work, languages like Java and Python are a giant pain, and you begin to notice how shell/Perl make this kind of work a breeze.
This covers nearly every automation task: chaotic non-standard interfaces, working with text/log files, or other data formats that are not structured (or at least not well enough). Add to this Perl's commitment to backwards compatibility, a large install base, and good performance, and you have almost zero alternatives to Perl if you are working on these kinds of tasks.
I have long believed that a big reason for so much manual drudgery these days, with large companies hiring thousands of people to do trivially automatable tasks, is that Perl usage dropped. People attempt to use Python or Java for big automation tasks and quit soon enough when faced with the magnitude of verbosity and the overall size of code they have to churn out and maintain to get it done.
Strong disagree that it's because "Omg, no more Perl." Rather, complexity cranked up, that Perl person stitching scripts together became a full-time job, and obviously Perl only got you so far. So now you have an additional FTE who is probably expensive.
Also, if the end user is on Windows, there is already a Perl-like option on their desktop: it's called PowerShell, and it will perform similarly to Perl.
I did a big automation task in native code, because efficiency is desirable in such cases, while bash+grep favor running a new process for every text line. To be efficient, you need to minimize work, and thus batch and deduplicate it, which means handling data in a stateful manner while tracking deduplication context. That is easier in a proper programming language, while bash+grep favor stateless text processing and thus duplicate a lot of work. Another strategy for minimizing work is accurate filtering, which is easier to express imperatively, with nice formatting, in a proper programming language; grep and regexes are completely unsuitable for this. Then, if you use a line-separated format, git rewards you with escaping to accommodate odd filenames, which is inconsistently supported and can be disabled by asking for the NUL-terminated format with the -z option. I don't think bash has any good way to handle that, while in a sufficiently low-level language it's natural, and it also allows incremental streaming, so you don't have to start a new process for every text line.
As a bonus, you can use a single code base for everything, no matter whether there's HTTP or something else in the pipeline.
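To make the NUL-termination point concrete, here is a minimal shell sketch (file names invented): `find -print0` emits NUL-separated names, and `xargs -0` consumes them safely and batches the whole set into a single grep process, rather than one process per file. Git's own `-z` flags (e.g. `git ls-files -z`) emit the same NUL-separated format.

```shell
mkdir -p /tmp/zdemo
printf 'needle\n' > '/tmp/zdemo/plain.txt'
printf 'needle\n' > '/tmp/zdemo/with space.txt'

# NUL separators are unambiguous even for names containing spaces or
# newlines, and xargs -0 runs one grep over the whole batch.
find /tmp/zdemo -type f -name '*.txt' -print0 \
  | xargs -0 grep -l 'needle'
# lists both files, including the one with a space in its name
```

A plain line-oriented pipeline (`find | while read f; do grep ... "$f"; done`) would both fork per file and mishandle the space-containing name, which is the work-duplication and escaping problem described above.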
Yes I agree - my favorite language is Python, but it can be annoying/inefficient for certain low-level OS things. This is why I created https://www.oilshell.org (and the linked wiki page)
I've been seriously considering learning some Perl 5-fu ever since I realized it's installed by default on so many Linux and BSD systems. I think even OpenBSD comes with perl installed.
That may not seem like a big deal until you're working in an environment where you don't actually have the luxury of just installing things from the open Internet (or reaching the Internet at all).
>>When your business is cost control on human life, it's not surprising that decency goes out the door.
Bingo!
Not sure why many people are missing this point. Beneath all this, the healthcare industry (doctors, pharma companies, and all the connected infrastructure) runs on a simple equation: their profits correlate directly with human suffering. That is, the more you suffer, the richer they get. And for some people in the chain, their very survival, i.e. making a living, depends on this.
You can't have a decent, or even bearable, conversation when somebody begins an argument from this premise. Notice how your refusing to suffer might appear indecent to them. I have known doctors here in India to get angry at patients for asking questions about alternative lines of treatment. Sometimes they have commissions from insurers, diagnostic labs, and even suppliers, and they find it unfair that patients think about their own good and not about the doctor's. So you mean to say you are not willing to suffer to help me make money? How cruel of you!
There are lots of industries like this. Weapons manufacturing and the armed forces are another. I'm sure the thought of an absence of wars, and some long-term peace, would be deeply disturbing to people in that ecosystem.
Notice how different this is from something like the hospitality business, where profits correlate directly with customer joy and inversely with customer dissatisfaction.
The real issue with the healthcare industry is that its profits lie in hurting people, not helping them.