Hacker News | alive2007's comments

The real shocker is that it took until 2012 for this pivotal algorithm to win a Nobel Prize. David Gale had passed by then! The Nobel committee must be backlogged to Hell and back.


I see this sort of application having a lot of use in the kinds of derivative pop music developed by ensembles of songwriters and manufactured purely to generate radio hits. Bacon's Triptych of George Dyer is genius, but the average person listening to Taylor Swift just does not care about Bacon's Triptych of George Dyer.

In a way, Magenta's job is not besting Bach. By the definition of Bach (a human being who changes the way we view and enjoy music), a non-human being cannot best Bach. Magenta's job is besting a much simpler, if equally challenging, role: Max Martin, or the writers of "Let It Go".

As it turns out, this kind of music is already pretty formulaic. Much has been written on repetitive chord progressions being spammed across hundreds of famous singles. In a way, artists shouldn't fear the potential of these technologies besting them - they should thank them.

Artists are now freed from loading their albums with eye-rollingly generic lead singles that they immediately get sick of ("Stairway to Heaven", "Creep", "Smells Like Teen Spirit") because record labels know that's what will get the most radio play. You can just let the machine do those. Now an artist's reputation is determined purely by their relative mettle against other human artists.


The average person listening to Taylor Swift is thinking about Taylor Swift, and not what they're listening to.

Pop is maybe 75% performance, sex, status, and charisma. The music isn't irrelevant, but it only really needs to be a committee-produced mashup of contemporary cliches to do its job.

The rest is posing and attitude.

>As it turns out, this kind of music is already pretty formulaic.

But it's less formulaic than it sounds. Discovering that it uses Standard Chord Sequence Number 7 (from the small standard pop set) won't get you close to an interesting song.

A lot of creative detail goes into the production, arrangement, and the vocal performance. Not the MIDI file.

Basically there are huge gaps between a MIDI cliche machine (buildable now, and not particularly difficult; see the sketch below), a full virtual artist who produces even moderately successful tracks without human help, and a musical AI genius who produces completely new musical styles that capture the human imagination for centuries.

You need a model of mind to do that last one, and we're at least 50 to 100 years away from that.
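
To make the "buildable now" claim concrete, here is a minimal sketch of a MIDI cliche machine in Python. It assumes the mido library, and the I-V-vi-IV progression and the voicings are arbitrary illustrative choices, not anyone's actual hit formula:

    # Loops the four-chord I-V-vi-IV progression shared by countless singles.
    # Requires: pip install mido
    import mido

    PROGRESSION = [          # I-V-vi-IV in C major, as MIDI note numbers
        [60, 64, 67],        # C
        [55, 59, 62],        # G
        [57, 60, 64],        # Am
        [53, 57, 60],        # F
    ]

    mid = mido.MidiFile()
    track = mido.MidiTrack()
    mid.tracks.append(track)

    BAR = 4 * mid.ticks_per_beat  # hold each chord for one 4/4 bar

    for _ in range(4):            # sixteen bars of pure cliche
        for chord in PROGRESSION:
            for note in chord:
                track.append(mido.Message('note_on', note=note, velocity=64, time=0))
            # the first note_off carries the bar-long delta; the rest land with it
            track.append(mido.Message('note_off', note=chord[0], time=BAR))
            for note in chord[1:]:
                track.append(mido.Message('note_off', note=note, time=0))

    mid.save('cliche.mid')

The point isn't that this is good music; it's that the chord-progression layer really is this mechanical. Everything called production, arrangement, and performance above lives outside this file.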


> The average person listening to Taylor Swift is thinking about Taylor Swift, and not what they're listening to.

I think this is a grand oversimplification. Personality certainly _contributes_ to pop stardom, but the music is still #1. Before anyone knew who Taylor Swift was, they connected with her through one or more songs.

> A lot of creative detail goes into the production, arrangement, and the vocal performance. Not the MIDI file.

Of course, but even having an autonomous "songwriter" that could write _a_ hit would be a game-changer for music (though obviously most immediately applicable to Top 40 / pop).

> You need a model of mind to do that last one

I disagree. Machines already produce what would otherwise be considered "experimental" music; you just need some deep reinforcement learning to know what has mass appeal.
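
As a hedged sketch of what that selection loop could look like, in Python: appeal_score here is a hypothetical stand-in for a model trained on listener data (streams, skips, chart performance), and simple hill-climbing stands in for deep RL.

    # Generate "experimental" material at random, then select for mass appeal.
    import random

    NOTES = list(range(48, 73))  # two octaves of MIDI note numbers

    def random_melody(length=16):
        return [random.choice(NOTES) for _ in range(length)]

    def mutate(melody):
        out = list(melody)
        out[random.randrange(len(out))] = random.choice(NOTES)
        return out

    def appeal_score(melody):
        # Placeholder reward: prefers small steps between consecutive notes.
        # A real system would learn this from engagement data instead.
        return -sum(abs(a - b) for a, b in zip(melody, melody[1:]))

    best = random_melody()
    for _ in range(10_000):
        candidate = mutate(best)
        if appeal_score(candidate) > appeal_score(best):
            best = candidate

    print(best)

Whether a learned appeal model can actually capture "mass appeal" is exactly the open question; the loop itself is the easy part.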


> Before anyone knew who Taylor Swift was, they connected with her through one or more song.

Only if by 'connected with her' you mean heard her debut hit over and over and over again on radio until it became an earworm.


I disagree about "Creep" and "Smells Like Teen Spirit" being generic. These were exceptionally crafted pop songs that expressed heart-wrenching emotion. Nothing like the typical pop song at all.


In other news, bananas have surpassed OS X in daily Caloric consumption. I knew Apple couldn't make it without Jobs.

People conflate the purposes of various products under the umbrella term "social network", which assumes that all social networks offer roughly the same function. Simply not true.

Snapchat offers ephemeral pictorial content with humorous ad-hoc additive filters. This tailors the product to those who desire ephemeral pictorial content with humorous ad-hoc additive filters. Apparently, a bunch of young kids.

Twitter offers what is now known by the shorthand "microblogging". This tailors the product to people who like "microblogging". Apparently, a bunch of people, with a median age higher than Snapchat's.

Just because more people are spending time on Snapchat than on Twitter does not mean that Snapchat is "taking away" from Twitter time. Perhaps different people are signing up for Snapchat who are not signed up for Twitter. Perhaps people who spend X hours on Twitter are now spending Y hours on Snapchat without taking away from the X hours spent on Twitter. Total eyeball time, and, ergo, ad revenue, stay the same. This isn't a zero-sum game. Twitter shouldn't break a sweat because Snapchat is getting more popular. Facebook shouldn't break a sweat because Twitter's no-character-limit policy may increase its user base.

Perhaps the zero-sum approach to competition makes sense in certain fields, like when products are close substitutes for each other, say, two different kinds of general-purpose glue on the market. Or when the total market for a certain kind of product is fixed: it's been widely assumed that the average American's entertainment budget is relatively fixed, so in a philosophical sense, if one popular band didn't exist, another popular band would be more popular.

But when it comes to products like "social networks", with vastly different design philosophies, product focuses, and demographics? It's very reductionist to publish a headline like this and pretend it supports any sort of meaningful conclusion. As with any sufficiently popular business, this stuff is capital-C Complicated, and you can't explain relative differences in success with a single metric like this.


I think I almost had a seizure reading your post. So many buzzwords lol


I have thought of a fundamental fallacy in my argument. If the Universe is a computer, then it should be possible to simulate it entirely within a program. Ergo, this simulation must be able to reach a point in time at which the humans inside it will be able to code a simulation of their own. So the second simulation must be able to reach a point at which the humans inside it can code a third simulation, and so on... This directly conflicts with the heat death of the Universe.

However, you can formulate a new belief. You can say that the probability of life spawning during the heat death is not 0; rather, it approaches 0. (It cannot literally be 1 - 0.999..., since that is exactly 0; it has to be a probability that shrinks toward 0 without ever reaching it.) So at any given moment there is a vanishingly small but still non-zero chance that life will occur during the heat death of the Universe. Eventually, probability will ensure that this life occurs. The Big Bang will repeat. This is a cyclical view of the Universe: Big Bang -> Heat Death -> Big Bang.

Again, I do not have any evidence for this point. It is a purely irrational view. Atomism is a religion, not a philosophy. As religions go, it only really breaks the second law of thermodynamics, and that isn't that bad.

I think all atheists are atomists. If you do not believe in the afterlife, then you do not believe that your life has any meaning. Ergo, kill yourself / live as hedonistically as humanly possible. The very fact that you do not kill yourself / just fuck and drink all day signals that you think your impact on the planet is not finite. But this conflicts with the second law of thermodynamics, which ensures all thought, life, and computation are finite. Ergo, your belief system breaks the second law of thermodynamics, so you must be an atomist.
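
One hedged aside on the "eventually, probability will ensure it" step: a shrinking-but-positive chance per epoch guarantees eventual occurrence only if the chances don't shrink too fast. Assuming independent epochs, the second Borel-Cantelli lemma draws the line:

    p_n = P(life spawns in epoch n), with p_n > 0 and p_n -> 0
    if sum(p_n) diverges  (e.g. p_n = 1/n):   life spawns infinitely often, with probability 1
    if sum(p_n) converges (e.g. p_n = 1/n^2): life spawns only finitely often, almost surely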


The Second Law only applies to 'isolated systems', and those do not exist in reality.


I've spent some time with Stallman in real life. There is very little difference between Ubuntu and OS X to him. Both have free kernels (Linux, XNU) that ship with nonfree binary blobs (device drivers) and nonfree userlands (Ubuntu's main apt channel is not entirely free; OS X is obviously nonfree). To Stallman, once you give convenience the time of day, you might as well be Bill Gates.


I noticed that myself. I clarified my point. IQ is heritable within observable socioeconomic brackets.


In a philosophical sense, it is not helpful to perceive the difference between "ideas" and "execution" as a binary split between two camps. Instead, I view it as an idea-execution gradient.

The theoretical limit of an idea-first mindset is 'literally only your idea matters; it doesn't matter how you execute it.' The theoretical limit of an execution-first mindset is 'literally only your execution matters; it doesn't matter what the idea is.' We can represent your stance on this gradient with a coefficient M between 0 and 1: what you believe Matters.

If you have a low M, you are idea-first. If you have a high M, you are execution-first.

Let's assume the existence of a hypothetical, Platonic M. By definition, founders of the most successful startups operate at that Platonic M.

My point was that I believe the Platonic M is currently far lower than what the Valley rhetoric advocates.


I agree with this comment -- excepting the last sentence -- but it's a far cry from what the Medium article said.


Perhaps my subjective belief of the Valley rhetoric is wrong.


I guess in a superficial sense, OSS is still losing. But it's losing less than it was 10 years ago. It's been steadily growing in popularity.

f(x) is still negative, but f'(x) is positive.


In the 90s, if I wanted to release a game through the mail or whatever other means, I was on equal footing with everyone else. Remember that Doom was a shareware game passed around primarily through the mail.

Today, if you want to write a game for a major platform (e.g., iOS), you basically have to get permission from Apple. Put a Confederate flag in a Civil War game?

Grounds for termination.

http://money.cnn.com/2015/06/25/technology/apple-pulls-civil...

Feel like writing a Bitcoin wallet for iOS?

Grounds for termination.

http://www.pcworld.com/article/2095060/apple-removes-blockch...

You can only write applications today if the platform holder agrees with your purpose. How the heck is this better than the 90s, when you were allowed to write whatever you wanted and anyone was allowed to install it?

You may think things are improving, but if they keep heading in this "walled garden" direction, I really don't think they are.

At least the Windows Win32 platform is still relatively open for games / applications. But everything else is basically run by an integrated store now: iOS, Android (Google Play).


Yeah, I definitely agree that iOS is a walled garden. It sucks and I hope Android continues growing in the high-end smartphone field so it can demolish the restrictive ecosystem that is iOS.

The Play Store is relatively free. If anything, at least you can access the Android filesystem, so if you wished, you could install an .apk from outside Google's ecosystem. Jesus.

And that's just mobile. In the desktop and web environments, platforms are super-open. You can distribute pretty much any .exe or .app you'd like on your website, and it's the user's fault if they end up downloading a virus. Furthermore, you can write your webserver code in literally anything you want. You can even write CGI scripts in C if you're really that insane. I'm pretty sure someone out there has figured out how to turn a physical Turing tape machine into something that generates HTML and CSS templates.
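
For a sense of how small a CGI "application" gets, here's a sketch in Python rather than C (the C version is the same idea with printf; any language that can write to stdout qualifies):

    #!/usr/bin/env python3
    # Minimal CGI script: print HTTP headers, a blank line, then the page.
    print("Content-Type: text/html")
    print()  # blank line terminates the headers
    print("<!doctype html><html><body><h1>Hello from CGI</h1></body></html>")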

Your templates end up having to include some JavaScript, I guess, but even then: JS is an open standard, and these days it's more of a compilation target than a language people write by hand.


Good points.

Developers need to make a living. In Microsoft's heyday, Microsoft could abuse its monopoly to make developers care more about making a living than about what they liked (OSS).

Now, this simply isn't true anymore.

My insults lobbed at Bill Gates were merely jokes. I obviously have a ton of respect for the man. Monopolies aside, he was a leading entrepreneur of his day, alongside Jobs.

What were my false statements?


Minor question: is it just me, or is the "Results of Interview Simulations by Mean Score" chart a bit difficult to parse? I understand that observing any single cohort means reading the endpoints of the cohort's curve along the horizontal line y = n, where n is the number of simulations you wish to observe: the right endpoint at y = n is the P(fail) of the worst performer in the cohort after n simulations, and the left endpoint is the P(fail) of the best performer. That's why the gap between endpoints within a cohort shrinks as n increases. But it seems counterintuitive to read any other kind of trend off it. Shouldn't the information be graphed with P(fail) as a function of # of simulations, rather than the other way around, seeing as the latter is the independent variable?
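
For what it's worth, here is a sketch of the re-plot being suggested, with # of simulations on the x-axis and P(fail) on the y-axis. The curves are made-up numbers purely to show the layout, not data from the article:

    import numpy as np
    import matplotlib.pyplot as plt

    n = np.arange(1, 21)               # number of simulations (independent variable)
    best = 0.6 * np.exp(-0.25 * n)     # hypothetical best performer in the cohort
    worst = 0.9 * np.exp(-0.10 * n)    # hypothetical worst performer in the cohort

    plt.fill_between(n, best, worst, alpha=0.3, label='cohort spread')
    plt.plot(n, (best + worst) / 2, label='cohort mean')
    plt.xlabel('# of interview simulations')
    plt.ylabel('P(fail)')
    plt.legend()
    plt.show()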

