ebolyen's comments | Hacker News

There's really a lot more you can look at here. Lots of prior art on super-cookies and fingerprinting:

https://coveryourtracks.eff.org/

https://amiunique.org/


Hmm, interesting. I tried the EFF site and among other things it told me I'm on "MacIntel".

Gave me a scare; I thought I was still somehow running an x86 build of Firefox.


Both linked in the Sources & Confessions modal at the bottom. Cover Your Tracks is the spiritual ancestor of this whole piece. amiunique is more rigorous; this is the editorial cousin.

Brutally dark site doesn't seem to show much to my eyes. No modal appearing at the bottom.

Another info leakage feedback tool:

https://www.ipleak.com/full-report/


The moxie of this is inspiring.

I'm curious to know what you would rate as the most important features to make this work? It seems like calc+if do a lot of the heavy lifting, but the new function syntax is what makes instruction lookup tractable.


It does seem like the state of the art differs from popular understanding. Not only is mitochondrial DNA straightforward (although not especially useful for forensics, as it is maternally inherited), but with specialized extraction it is still possible to recover nuclear DNA; it is just exceedingly painful to do so.

https://www.sciencedirect.com/science/article/pii/B978032399...


Gattaca did portray hair as being more than enough for forensics. (“Keep your lashes on your lids where they belong. How could you be so careless?”)


The initial purpose of a microbiome is to be at least commensal: it is usually prohibitively expensive to maintain a sterile environment, so the odds of a true pathogen colonizing a system are greatly reduced if you simply have a crowded space of neutral participants.

Once that's true, it does seem there are a lot of host-microbiome interactions we've only begun to explore, but it shouldn't be surprising that co-evolution of the microbiome and host begins to take over as soon as you have one. One great example is short-chain fatty acid (SCFA)-producing bacteria in the human gut. [1] These seem to be essential, and if there were a general takeaway to improve health, it would be to eat your roughage so they can do their job.

This is also why high alpha diversity (community richness in particular) is such a dead ringer for healthy vs. diseased states. And, frustratingly, that is often exactly where the story ends for a lot of observational studies.
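
For the curious, richness and Shannon alpha diversity are both trivial to compute from a table of feature counts; a minimal Python sketch (toy made-up numbers, not from any of the linked papers):

    import math

    # Toy feature (e.g. ASV/OTU) counts for a single sample -- made-up numbers.
    counts = [120, 30, 5, 0, 44, 1]

    # Richness: how many features were observed at all.
    richness = sum(1 for c in counts if c > 0)

    # Shannon diversity: entropy of the relative abundances.
    total = sum(counts)
    shannon = -sum((c / total) * math.log(c / total) for c in counts if c > 0)

    print(richness, shannon)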

Also, in case you are curious, artificially sterile (germ-free, or gnotobiotic) mice tend to act differently from other mice, which is pretty odd to be honest, and is why the gut-brain axis is a plausible mechanism to research further. [2]

[1]: https://pmc.ncbi.nlm.nih.gov/articles/PMC10180739/

[2]: https://www.sciencedirect.com/science/article/pii/S088915912...


No, because alignment, in the general case, is O(n^2). Ironically, it is one of the more tractable and well-solved problems in bioinformatics.
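
To make the O(n^2) concrete: pairwise global alignment is usually a full dynamic-programming table over both sequences. A rough Needleman-Wunsch-style sketch in Python (my own toy scoring scheme, just for illustration):

    # Global alignment score, Needleman-Wunsch style.
    # Match/mismatch/gap values are arbitrary illustrative choices.
    def align_score(a, b, match=1, mismatch=-1, gap=-1):
        n, m = len(a), len(b)
        # The (n+1) x (m+1) table is where the O(n*m) cost comes from.
        dp = [[0] * (m + 1) for _ in range(n + 1)]
        for i in range(1, n + 1):
            dp[i][0] = dp[i - 1][0] + gap
        for j in range(1, m + 1):
            dp[0][j] = dp[0][j - 1] + gap
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                diag = dp[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
                dp[i][j] = max(diag, dp[i - 1][j] + gap, dp[i][j - 1] + gap)
        return dp[n][m]

    print(align_score("GATTACA", "GCATGCU"))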


If anyone is interested in a more formal description of these control loops, with more testable mechanisms, check out the concept of reward-taxis. Here are two neat papers that I think are more closely related than might initially appear:

"Is Human Behavior Just Running and Tumbling?": https://osf.io/preprints/psyarxiv/wzvn9_v1 (This used to be a blog post, but its down, so here's a essentially identical preprint.) A scale-invariant control-loop such as chemotaxis may still be the root algorithm we use, just adjusted for a dopamine gradient mediated by the prefrontal cortex.

"Give-up-itis: Neuropathology of extremis": https://www.sciencedirect.com/science/article/abs/pii/S03069... What happens when that dopamine gradient shuts down?


Not an expert, but I have a bit of formal training on Bayesian stuff which handles similar problems.

Usually Gibbs is used when there's no directly straightforward gradient (or when you are interested in reproducing the distribution itself, rather than a point estimate), but you do have some marginal/conditional likelihoods which are simple to sample from.

Since each visible node depends on every hidden node and each hidden node affects all visible nodes, the gradient ends up being very messy, so it's much simpler to use Gibbs sampling to adjust based on the conditional likelihoods.
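
As a rough sketch of what that looks like for a binary RBM (toy sizes and random weights, just to show the shape of the block-Gibbs updates; this isn't from the article):

    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    # Toy binary RBM: sizes and weights are arbitrary stand-ins.
    n_visible, n_hidden = 6, 3
    W = rng.normal(0, 0.1, size=(n_visible, n_hidden))
    b_v = np.zeros(n_visible)
    b_h = np.zeros(n_hidden)

    def gibbs_step(v):
        # p(h_j = 1 | v) factorizes into independent sigmoids -- easy to sample.
        p_h = sigmoid(v @ W + b_h)
        h = (rng.random(n_hidden) < p_h).astype(float)
        # Same story for p(v_i = 1 | h).
        p_v = sigmoid(h @ W.T + b_v)
        return (rng.random(n_visible) < p_v).astype(float), h

    v = rng.integers(0, 2, n_visible).astype(float)
    for _ in range(10):
        v, h = gibbs_step(v)
    print(v, h)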


It's been a long time since I took a class like this, but I definitely had a similar experience to the author.

Ideas like fold and map were _never_ mentioned in lisp (to exaggerate, every function had to have the recursive implementation with 5 state variables and then a simpler form for the initial call), and at no point did higher-order functions or closures make an appearance while rotating a list by 1 and then 2 positions.

The treatment of Prolog was somehow worse. Often the code only made sense once you reversed what the lecturer was saying, realizing the arrow meant "X given Y", not "X implies Y", at which point, if you could imagine the variables ran "backwards" (unification was not explained), the outcome might start to seem _possible_. I expect the lecturer was as baffled by their presentation as we were.

In general, it left the rest of the class believing quite strongly that languages other than Java were impossible to use and generally a bad move. I may have been relatively bitter in the course evaluation by the end.


The irony is palpable. I had the (mis)fortune of only being (mis)taught procedural languages by professors who thought computers were big calculators that could never be understood, but could be bent to your will by writing more code and maybe by getting a weird grad student to help.

Patterns might appear to the enlightened on the zeroth or first instance, but even the mortal must notice them after several goes. The magic of lisp is that if you notice yourself doing anything more than once, you can go ahead and abstract it out.

Not everything needs to be lifted to functional valhalla of course, but not factoring out e.g. map and filter requires (imho) a wilful ignorance of the sort that no teacher should countenance. I think it's bad professional practise, bad pedagogy, and a bad time overall. I will die on this hill.
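
To make that concrete, the kind of repetition I mean is about as small as this (a toy Python illustration of my own, not from any course material):

    # The loop you hand-roll for the third time before noticing the pattern...
    def evens_squared_loop(xs):
        out = []
        for x in xs:
            if x % 2 == 0:
                out.append(x * x)
        return out

    # ...is just filter + map once it's factored out.
    def evens_squared(xs):
        return [x * x for x in xs if x % 2 == 0]  # or map()/filter() if you prefer

    assert evens_squared_loop(range(10)) == evens_squared(range(10))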


If you are only used to Java (the bad, old, ancient version), you don't even notice that you can factor out map and filter.


Having been on the other side of this (teaching undergrads), I do get why courses would be structured like this. If you actually try explaining multiple things, lots of students freeze up and absorb nothing. Certainly there are a few motivated and curious students who are three lectures ahead of you, but if you design the class for them, 60% of students will just fail.

So I get why a professor wouldn't jump in with maps and folds. First, you need to make students solve a simple problem, then another. At the third problem, they might start to notice a pattern - that's when you say, gee, you're right, there must be a better way to do this, and introduce maps and folds. The top 10% of the class will have been rolling their eyes the whole time, thinking, well duh, this is so boring. But most students seem to need their hand held through this whole process. And today, of course, most students are probably just having LLMs do their homework and REALLY learn nothing.


Ah, yes. Like in the class where we learned Moscow ML, where loops don’t and variables ain’t, and Godspeed!


> I wish you luck with tracking down versions of software used when you're writing papers... especially if you're using multiple conda environments.

How would you do this otherwise? I find `conda list` to be terribly helpful.

As a tool developer for bioinformaticians, I can't imagine trying to work with OS package managers, so that would leave vendoring multiple languages and libraries in a home-grown scheme slightly worse and more brittle than conda.

I also don't think it's realistic to imagine that any single language (and thus language-specific build tools or package manager) is sufficient, since we're still using Fortran deep in the guts of many higher-level libraries (recent tensor stuff is disrupting this a bit, but it's not like OpenBLAS isn't still there as a default backend).


> home-grown scheme slightly worse and more brittle than conda

I think you might be surprised as to how long this has been going on (or maybe you already know...). When I started with HPC and bioinformatics, Modules were already well established as a mechanism for keeping track of versioning and multiple libraries and tools. And this was over 20 years ago.

The trick to all of this is to be meticulous in how data and programs are organized. If you're organized, then all of the tracking and trails are easy. It's just soooo easy to be disorganized. This is especially true with non-devs who are trying to use a Conda-installed tool. You certainly can be organized and use Conda, but more often than not, for me, tools published with Conda have been a $WORKSFORME situation. If it works, great. If it doesn't... well, good luck trying to figure out what went wrong.

I generally try to keep my dependency trees light and if I need to install a tool, I'll manually install the version I need. If I need multiple versions, modules are still a thing. I generally am hesitant to trust most academic code and pipelines, so blindly installing with Conda is usually my last resort.

I'm far more comfortable with Docker-ized pipelines though. At least then you know when the dev says $WORKSFORME, it will also $WORKFORYOU.


This is true of corals, and they are often considered "colonial" organisms rather than individuals.

That said, I don't think anyone who studies biology is particularly concerned with hard-line definitions, as nature tends to eschew them every chance it has.

I think treating Pando and corals as "modular body plans/habits" is perhaps a more useful concept than individual or clone.

