
I've done a fair amount of work with employee survey data as an HR data scientist. Particularly at large companies it can be pretty useful, but executives often like to reinterpret the reports they are sent in interesting and creative ways.


There is a quasi-legal standard for creating and validating pre-employment assessments (https://www.apa.org/ed/accreditation/personnel-selection-pro...) but there isn't a lot of regulation, and there are plenty of sketchy startups happy to sell low-quality assessments to middle managers at companies with poorly run HR departments.


Note that "operationalization" implies a fairly specific set of epistimolgical and ontological approaches which do not necessarily require that what is being measured has a one-to-one correspondence to a 'real' entity.


Indeed. You can operationally define anything you want within your model. If done carefully, a good operational definition may simplify your model quite a bit. (A bad operational definition, on the other hand, will almost certainly make your model overly complex and can be quite detrimental.)

When you use your model to make inferences about a real-world phenomenon, you have to be careful how you treat your operational definition. If you use the model to make a prediction, you cannot claim that your operand caused the outcome, at least not until you go out into the real world and find it. If your model is successful you may use your operand to describe your prediction, but you have to justify why your operand is necessary; a better model may exist which doesn't use the operand at all.

A successful model is neither a sufficient nor necessary condition for proving an operand exists.


My impression is that philosophers and statisticians are often working with different focal examples. I think that in many fields important scientific knowledge essentially takes the form of a point estimate (e.g. the R0 of Covid is XXXX). It is also easy to come up with useful priors (e.g. the R0 is likely below 20) that arise more from characteristics of the model rather than theory.

Note that it is possible to reformulate the Covid example into a null hypothesis test at the cost of being less informative (e.g. is the R0 significantly above 1?), but then the knowledge becomes less useful for making certain important decisions.
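
To make the point-estimate framing concrete, here is a toy grid approximation for R0, assuming each case generates Poisson(R0) secondary cases. The counts and the model are invented purely for illustration, not real epidemiological data:

    import numpy as np

    # Grid approximation of a posterior for R0, assuming a Poisson
    # branching model: each case generates Poisson(R0) secondary cases.
    # The counts below are invented purely for illustration.
    secondary_cases = np.array([2, 0, 3, 1, 4, 2, 1, 0, 5, 2])

    r0_grid = np.linspace(0.01, 20, 2000)  # prior support: "likely below 20"
    prior = np.ones_like(r0_grid)          # flat on (0, 20], zero elsewhere

    # Poisson log-likelihood for each candidate R0 (constants dropped)
    loglik = (np.log(r0_grid)[:, None] * secondary_cases
              - r0_grid[:, None]).sum(axis=1)
    posterior = prior * np.exp(loglik - loglik.max())
    posterior /= posterior.sum()

    point_estimate = r0_grid[posterior.argmax()]  # posterior mode, ~2 here
    print(f"R0 point estimate: {point_estimate:.2f}")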

Anyways, my general impression is that Bayesian statistics are probably more useful for making good decisions that require precise numerical knowledge of certain types of information but maybe less useful for many of the sorts of conceptual issues philosophers are often interested in.


A bottling company is interested in determining the accuracy with which their equipment is filling bottles of water. One answer would be "95% of the bottles contain between 11.9 and 12.1 ounces". A different way of answering the question would be to estimate the actual distribution of water amounts.

The difference here is that knowing the distribution is often more useful than just knowing the mean, the variance, or some confidence intervals. Bayesian methods tend to be useful when you want this sort of information, which is often the case when you are using it for decision making (or something like game theory).
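
To make that concrete, here's a minimal sketch of the Bayesian version, assuming a normal fill distribution with a known machine sd (all numbers are simulated, not from any real bottler):

    import numpy as np

    # Toy Bayesian estimate of the fill distribution (normal likelihood,
    # known sd for simplicity; all numbers are simulated for illustration).
    rng = np.random.default_rng(0)
    fills = rng.normal(12.0, 0.05, size=50)  # simulated fill measurements (oz)

    sd = 0.05                        # assumed known machine sd
    prior_mean, prior_sd = 12.0, 0.5 # weak prior on the mean fill

    # Conjugate normal-normal update for the mean fill
    n = len(fills)
    post_var = 1.0 / (1.0 / prior_sd**2 + n / sd**2)
    post_mean = post_var * (prior_mean / prior_sd**2 + fills.sum() / sd**2)

    # Posterior predictive for a new bottle: mean uncertainty + machine noise
    pred_sd = np.sqrt(post_var + sd**2)
    print(f"new bottle ~ Normal({post_mean:.3f}, {pred_sd:.3f})")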

Another use case is when you are making decisions that require multiple pieces of information that don't neatly fit together. A simple example is cancer screening. A rational decision about the proper threshold requires you to combine information about (1) the accuracy of your test and (2) the prevalence of the cancer in the population.
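
Here's a minimal sketch of that combination via Bayes' theorem (the rates are toy values, not real screening statistics):

    # Combining test accuracy with prevalence via Bayes' theorem.
    # All numbers below are toy values for illustration only.
    prevalence = 0.005   # P(cancer) in the screened population
    sensitivity = 0.90   # P(positive | cancer)
    specificity = 0.95   # P(negative | no cancer)

    # P(positive) by the law of total probability
    p_positive = (sensitivity * prevalence
                  + (1 - specificity) * (1 - prevalence))

    # P(cancer | positive) by Bayes' theorem
    p_cancer_given_positive = sensitivity * prevalence / p_positive
    print(f"P(cancer | positive test) = {p_cancer_given_positive:.3f}")  # ~0.083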

I will also add that the formula presented in the article is the simple case with discrete distributions. The more general version of the formula can also handle continuous distributions.
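
For reference, the continuous version replaces the sum over discrete hypotheses with an integral over the parameter:

    f(\theta \mid x) = \frac{f(x \mid \theta)\, f(\theta)}{\int f(x \mid \theta')\, f(\theta')\, d\theta'}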


lol is this copypasta? i'm quite familiar with all of these toy examples of inference instead of point estimation. i'm talking about fitting models rather than descriptive statistics (or decision theory).


Commonly a model is being used primarily to make better decisions. Specifically in the context of fitting models, Bayesian methods are really popular for hyperparameter tuning.
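
As a rough sketch of what that tuning loop looks like with a Gaussian process surrogate (the objective here is a made-up stand-in for an expensive cross-validation run, not any particular library's tuner):

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor

    # Stand-in objective: score as a function of one hyperparameter
    # (e.g. a log regularization strength); swap in your own CV run.
    def evaluate(log_alpha):
        return -(log_alpha - 1.0) ** 2 + np.random.normal(0, 0.05)

    candidates = np.linspace(-4, 4, 200).reshape(-1, 1)
    X = list(candidates[::50])              # a few initial probes
    y = [evaluate(x[0]) for x in X]

    for _ in range(10):
        # Fit the surrogate to all (hyperparameter, score) pairs seen so far
        gp = GaussianProcessRegressor().fit(np.array(X), np.array(y))
        mean, std = gp.predict(candidates, return_std=True)
        ucb = mean + 1.96 * std             # upper confidence bound acquisition
        x_next = candidates[ucb.argmax()]
        X.append(x_next)
        y.append(evaluate(x_next[0]))

    best = X[int(np.argmax(y))]
    print(f"best log_alpha ~ {best[0]:.2f}")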

I guess my main point is that at least one reason people are using Bayesian methods is because they are dealing with problems that are qualitatively different than more prototypical prediction problems.


My impression is that many college newspapers lost a lot of ad revenue when it became common for universities to ban them from advertising bars and alcohol.


Would have thought recruiters/employers could have provided a lot of revenue.


> ban them from advertising bars and alcohol.

Why would they be banned from advertising bars? They're just local businesses.


Because the "freedom-loving" laws in America require that college students aged 18, 19 and 20 are legally prohibited from drinking.


One could look at Canadian college papers to see if they fared any better, especially in Alberta and Quebec, where the drinking age of 18 is the diametric opposite of the American situation.


Most psychologists wouldn't operationalize intelligence as years of schooling, but it's common practice in economics and some other fields.

In social science research there is often a tradeoff between measurement quality and sampling quality. Do you want to measure your variables really well in a small sample, or measure them crudely in a large sample?


> In social science research there is often a tradeoff between measurement quality and sampling quality. Do you want to measure your variables really well in a small sample, or measure them crudely in a large sample?

Wouldn't it be more honest to just accept that neither method provides data that's relevant and trustworthy enough for the research at hand? It sounds irresponsible to draw conclusions (that will be picked up by newspapers and eventually policy makers) based on data that does not merit them, only because no better data is available.


There is a lot of scientific support for pre-employment screening (selection). The most conclusive evidence comes from a series of absolutely enormous studies conducted by the US military in the '80s, known as Project A.

That being said, there is a great deal of pseudoscience being gobbled up by organizations because this is a mostly unregulated industry.


IIRC, the outcome of these studies was that intelligence is the best predictor of job performance. The ASVAB is essentially an IQ test with some domain-knowledge questions sprinkled in.


I worked at a consulting firm that sells this sort of philosophy. A big part of my job was explaining basic concepts from statistics and measurement to MBA types. A big problem with quantitative management approaches is that the people who are in charge of implementing them have weak math/statistics ability.

