
Any chance for a Samsung Gear VR version?

Cardboard is a great ecosystem, but the tracking and latency on Gear VR are just leaps and bounds above what Cardboard can do (the GVR unit has its own low-latency sensors).


I don't have the hardware, but the models of the brain structures are in the github repo in Wavefront .obj format in case someone else wants to play with them.


I really wish AMD would promote this type of thing more heavily in general (having advanced features that Nvidia restricts to their high-cost cards). I seem to remember a similar thing with AMD being fantastic for Bitcoin mining; from memory, that was because Nvidia gimped certain features on their cards while AMD did not.

(On that note, I'm constantly annoyed that despite it being Nvidia-specific, everyone supports CUDA for everything cool.)

I understand companies have to do this to segment their products, and that it's impractical to have separate fab lines within a chip family for this kind of difference. It's still something I think gives AMD a competitive advantage, though, and they really need to push it (unless they just don't really want to, so they can keep selling their own high-end professional cards).

A worse example of this is Nvidia cards not running in systems that have AMD graphics cards in them. That means, for example, you can't buy a low/mid-range Nvidia card to sit in your system as a dedicated PhysX/CUDA card. I'd consider such a setup for my system, and I imagine there are probably others like me out there.


Cryptocurrency mining demand for Radeon GPUs between 2010 and 2012 improved AMD's revenue stream quite a bit back then. http://www.fool.com/investing/general/2015/02/14/why-the-par...

AMD GPU mining rigs and some other hardware pics. http://www.buttcoinfoundation.org/mining-rig-megapost#more-3...


The bitcoin mining edge that AMD cards had over Nvidia cards came down to the AMD ISA having a single-cycle hardware rotate instruction, which the Nvidia cards lacked. The AMD philosophy of many dumb cores vs. Nvidia's philosophy of fewer, smarter cores also made a significant difference.


Not true - AMD GCN also has bitselect, which does

dst = (A & B) | (~A & C).

This is equivalent to the SHA-256 Ch "Choose" function.

It also can be used to compose the SHA-256 Maj "Majority" function with an additional XOR.
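
To make that concrete, here's a minimal C sketch of how a bitselect-style operation maps onto Ch and Maj (the helper name bitselect32 is hypothetical; on GCN this is a hardware instruction, not a C function):

    #include <stdint.h>

    /* Bitselect: for each bit, pick from b where the corresponding bit of a
       is 1, otherwise pick from c. */
    static inline uint32_t bitselect32(uint32_t a, uint32_t b, uint32_t c) {
        return (a & b) | (~a & c);
    }

    /* SHA-256 Ch(x,y,z) = (x & y) ^ (~x & z); the two terms never overlap,
       so OR and XOR agree and Ch is a single bitselect. */
    static inline uint32_t ch(uint32_t x, uint32_t y, uint32_t z) {
        return bitselect32(x, y, z);
    }

    /* SHA-256 Maj(x,y,z) = (x & y) ^ (x & z) ^ (y & z) composes as a
       bitselect plus one extra XOR: Maj(x,y,z) == bitselect(x ^ y, z, y). */
    static inline uint32_t maj(uint32_t x, uint32_t y, uint32_t z) {
        return bitselect32(x ^ y, z, y);
    }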

Additionally, the AMD ISA doesn't have a "rotate" instruction per se; BFI_INT is "bitfield insert", which can be used to compose arbitrary-length rotates but is not strictly a rotate in and of itself.

AMD ISA also has instructions for computing integer add with carry, which is great for crushing elliptic curves.

NVidia really, really sucks for integer compute. Friends don't let friends buy NVidia!


AMD similarly tends to avoid playing bullshit games with things like ECC memory and IOMMUs for consumer-level systems, unlike Intel.


AMD didn't expose the necessary features for good SHA-256 performance on their GPUs either; for a while you had to hack their SDK to get them.


IIRC, it was the BFI_INT instruction.


VGI (VDI for gaming) is the future. We shouldn't have to run on bare metal; we should be able to have a single-VM host (which theoretically gives full hardware performance) with full hardware passthrough.

This gives all the flexibility of VHDs and virtualization without losing the bare-metal advantages. Installing a new OS shouldn't mean the current OS has to know about it.


I somehow doubt that VDI of any kind is the future for pure home use.

Game streaming is gathering steam now; don't let the OnLive failure fool you. Services like Play-Cast have been running in Korea and other Asian countries for some time now.

They've opted to service their streaming through the carrier (cable companies) instead of over the internet which allowed them to pretty much bypass the latency and bandwidth limitations of open internet routing.

EA is now setting up a similar service with Comcast, and many will follow.

NVIDIA has set up its "gaming cloud" service for its Tegra-based consoles, which also seems to work pretty damn well, and localized streaming on NVIDIA cards has been working very well for several years now.

So sorry, while VDI/VGI might be the future, like any commercial product, if it ever materializes it will be abstracted to the point at which you might not call it VDI/VGI any more, and no, you won't get to tinker with it at that point.


You can get Nvidia and AMD cards to live together in the same machine: https://www.youtube.com/watch?v=YqkI7bOfRkA


A bit younger (I was in my last year of primary school when I graduated from the C64 to the Amiga 2000). I fondly remember reading through the back of the Amiga 2000 reference manual (the ring-bound one) on Christmas Day. It had schematics and everything! (Back at that age, schematics just LOOKED cool - I couldn't really read them as such.)

I remember those $5 shareware disks you used to be able to get at the markets.

By the end though, several years later.... Oh man.... Games weren't as smooth as on the Amiga, but OH THOSE SIERRA VGA GAMES ON THE PC!!!! Especially compared to their horrible Amiga ports (which were also a sign of Sierra abandoning the platform.... the port of King's Quest VI that was done by Revolution Software shows how much better they could have been).

Awesome memories!


They had shareware disks bundled with magazines. I knew someone who worked at the magazine shop, and we would slide some out to copy them. It's amazing that now I can get any software so easily.


Absolutely. When you listen to a violin live, you're of course not just getting the violin, but you're getting all the reverb of the environment, the spatial cues created by sound bouncing off your body and around your ears, etc.

So taking a violin sound and then playing it out of a speaker, you're playing into a totally different environment and it's going to sound different. If you've also captured the sound of the environment in which the violin was played, you're then also playing THOSE sounds into a different environment. The reverb still gets affected by the environment your speakers are in.

I think the closest you can get is binaural recordings done with mics worn on your own ears and then played back using suitable headphones.

Even that still doesn't cover the tactile dimension of sound (think about the feeling you get when bass goes through large stage monitors). There are products that try to reproduce this - I haven't tried any of them but would be eager to, as I primarily listen to music with headphones.

Nor does it cover spatialization properly either - without some form of processing, the sound source won't stay in position when you move your head.

A bit off-topic, but the more I think along these lines, the more I imagine the ideal music delivery medium being dry multitrack recordings with mixing and reverb supplied as metadata that's then applied with real-time audio processing. That'd be pretty wild!


> A bit off-topic, but the more I think along these lines, the more I imagine the ideal music delivery medium being dry multitrack recordings with mixing and reverb supplied as metadata that's then applied with real-time audio processing. That'd be pretty wild!

That sounds a bit like MIDI. =)


You can also tweak the sound with impulse sampling of speakers in your room and use DSP to correct for the room response. I went this route and couldn't be happier with the result.
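
In case it helps picture the idea: the playback-side correction usually amounts to convolving the signal with a filter derived from the measured room impulse response. A minimal C sketch (hypothetical function name; real tools use FFT-based convolution and proper inverse-filter design):

    #include <stddef.h>

    /* Apply a room-correction FIR filter (e.g. an inverse response derived
       from an impulse measurement) to an audio buffer by direct convolution:
       y[i] = sum_k fir[k] * x[i-k]. */
    void apply_correction(const float *in, size_t n,
                          const float *fir, size_t taps, float *out) {
        for (size_t i = 0; i < n; i++) {
            float acc = 0.0f;
            for (size_t k = 0; k < taps && k <= i; k++)
                acc += fir[k] * in[i - k];
            out[i] = acc;
        }
    }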


Interesting!

What did you use to do this?


EDIT: Just a bit more context. Neil Young has been trying to sell an 'audiophile'-grade audio player. This post seems like nothing more than an attempt to stealthily promote the supposed virtues of his own product.

I'm sorry but what an utter absolute load of crap. Especially given this:

http://gizmodo.com/dont-buy-what-neil-young-is-selling-16784...

Read some of the comments down on his Facebook also. His fans are not happy (rightfully so).


Indeed. The part I find most ridiculous is that he is pushing a portable music player on the strength of its high bitrate digital source.

The key thing to realize is that there are a lot of pieces in this signal path (bits to brain). The best you can do is start with the highest quality source possible, and then focus on minimizing the reduction in quality which happens at each step past that.

His intentions are good with choosing a high quality source. However, the idea that a portable music player is going to have a good enough DAC, good enough analog amplification / filtering, that the user will select good enough headphones, and will be listening in a low enough noise-floor environment (sitting absolutely still in a dead room with the A/C turned off) to be able to come anywhere close to hearing the difference made by that high quality source is laughable.

Is it possible to detect the difference between 320kbps streaming and 192kHz/24bit lossless? Sure, you'd see it on an oscilloscope. But could you hear it through $50 ear buds while walking down the street?

One way to reason about sound quality is to mentally model it as two sounds mixed together: a loud, perfect signal, and a much quieter distortion signal. For humans, loud sounds mask quieter sounds, and if the amplitude difference is great enough, you simply can't perceive the quieter sound at all.

Now, take that one step further: model the sound as a loud, perfect signal mixed with five or six small distortion signals (A, B, C, D, etc., each representing a step in the path from bits to brain). Neil's player reduces distortion signal A by a tiny fraction. Great! But that reduction is only perceptible if it isn't swamped by distortion signals B, C, D, E, and the noise floor created by whatever environment you happen to be sitting in (ultimately, "is this reduction in distortion swamped by the noise floor created by the sound of blood rushing through the veins in my ears?").
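
To put rough numbers on the swamping argument: independent noise/distortion sources add in power, so improving the quietest one barely moves the combined floor. A quick C illustration (the dB levels are invented for the example, not measurements):

    #include <math.h>
    #include <stdio.h>

    static double db_to_power(double db) { return pow(10.0, db / 10.0); }
    static double power_to_db(double p)  { return 10.0 * log10(p); }

    int main(void) {
        /* Made-up levels for source encoding, DAC, amp, and street noise. */
        double levels[] = { -110.0, -95.0, -80.0, -40.0 };
        double total = 0.0;
        for (int i = 0; i < 4; i++) total += db_to_power(levels[i]);
        printf("combined floor: %.3f dB\n", power_to_db(total));    /* about -40.00 */

        /* Improve the quietest component (the digital source) by 20 dB... */
        levels[0] -= 20.0;
        total = 0.0;
        for (int i = 0; i < 4; i++) total += db_to_power(levels[i]);
        printf("after improvement: %.3f dB\n", power_to_db(total)); /* still about -40.00 */
        /* ...and the combined floor barely changes: it's dominated by the
           loudest remaining term (here, the environment). */
        return 0;
    }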


A friend pointed out to me that his stance is even funnier when you consider this:

http://consequenceofsound.net/2014/03/neil-young-confirms-ne...

The sad part is that his PR has probably achieved what they wanted to. I mean, I'm on here writing a comment about the Pono Player and talking about it with friends. So yeah....


No, I don't think that has anything in particular to do with the fact that one is iOS and one is Android; Google's OWN YouTube apps are inconsistent.

The Android TV YouTube app, for example, has no way to subscribe to a channel. There's also no immediate 'go back to the video I was watching'; you have to wait some amount of time until your YouTube history gets updated. This is Google's own YouTube app on their own platform.


It's interesting that you mention YouTube, because the thing that finally convinced me that a 'web app' could be indistinguishable from a native app was YouTube's Leanback interface (YouTube TV). Check it out:

https://www.youtube.com/tv

Video player functionality aside (I understand the complaints), that thing full-screened just feels like a native app.


For me, it was the Netflix web client. Feels 100% as smooth as a good native app.


>For me, it was the Netflix web client. Feels 100% as smooth as a good native app.

You mean when they were using Silverlight, or now that they are using HTML5?


I never knew about this URL, thanks very much.


> YouTube on TV is not supported on this device?

Is it only for mobile? I didn't see any "mob" or something in the URL. Anyway, with regard to being indistinguishable from native, I guess nothing reminds you that you are "native" quite like an app complaining that you are on the wrong native device. :-)


Works in Chromium on Linux for me.

In Firefox (my default browser) the message is: "Youtube on TV is not supported on this device, for more info go to: www.youtube.com/devicepartners", which is unclickable and unselectable. Modern web...


Might require MSE support (Firefox Nightly) or something.


Works fine on Ubuntu desktop, vivid, [EDIT:] chrome.


Just not on mine. :-) (with Firefox)


It gets more complicated when you ask the question "How much money is the software engineer making for the company vs. what the company pays them?"

For argument's sake, let's say you're paid $80K per year. On the one hand, you may think that is high (it always depends what your frame of reference is). If the code you produce directly makes the company millions of dollars though, does that change your assessment of the 'reasonableness' of that salary?

This is the argument I've heard as to why sports stars get paid so much (because you have to look at what they get paid vs. the amount of money they're making for the companies paying them).

It's a real can of worms.


I see the value in your point, and I generally agree with it. However, in this hypothetical situation, I think you'd largely be ignoring the hypothetical product manager, any marketing involved, servers involved, code written before the software engineer that paid enough for the development (R&D?) time, etc. Also, the company probably wouldn't want to pay that person unless the value that person produces is higher than their wage. That being said, as a software developer I wouldn't argue with a more "reasonable" wage. :o)


Google has $1 million in revenue per employee. They could pay everyone $500,000/year and still be extremely profitable.

Edit: $1,259,520 (March 31, 2015) http://csimarket.com/stocks/singleEfficiencyet.php?code=GOOG

In 2014 Apple had a pretax income of $53.48B and 92,000 employees. It could give them all a $200,000/year pay bump and still make $35 billion in profits.

PS: Not that any company thinks this way, but it's not hard to argue that top programmers are underpaid.


But companies have expenses other than payroll. Google has to buy machines, run datacenters, pay partners, etc. If you look at income per employee, it takes all of this into account. For Google, this is $270K: http://csimarket.com/stocks/singleEfficiencyeit.php?code=GOO...

So they could presumably pay everyone $250K more than they do now and still be marginally profitable.
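
For what it's worth, the back-of-envelope arithmetic in this exchange checks out; a quick sketch using the figures quoted above (taken from the comments, not independently verified):

    #include <stdio.h>

    int main(void) {
        /* Apple, 2014 figures as quoted above. */
        double pretax_income = 53.48e9;   /* USD */
        double employees     = 92000.0;
        double pay_bump      = 200000.0;  /* per employee, per year */
        printf("Apple profit after bump: %.2fB\n",
               (pretax_income - employees * pay_bump) / 1e9);   /* ~35.08B */

        /* Google: income (not revenue) per employee, as quoted above. */
        double income_per_employee = 270000.0;
        double raise               = 250000.0;
        printf("Google margin left per employee: %.0f\n",
               income_per_employee - raise);                    /* ~20,000 */
        return 0;
    }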


Contract programmers.

I was one in 1989. So were many of my coworkers. Someone told me that Apple employed so many contractors so that it could inflate the ratio of revenue to employees.

Hiring more contractors doesn't make a company any more money but it does help inflate the stock price in a purely artificial way.


I totally agree with this.

The main point I want to put across is that it's unfortunate that as software engineers, we have somewhat of a tendency to feel guilty for what we do earn and as a whole profession we can be prone to underselling our own value. I find the parent comment a perfect reflection of this tendency.


Going to sound a bit cynical here, but really isn't $30 million just spare change to a VC?

We see $30 million and think "that's a ton of money!! How could he get that funding?", but to the VC it's like "Here's a bit of spare change; maybe you'll make something out of this, maybe not."

From that viewpoint alone I could understand why a VC would fund this. I mean, could you guarantee that it's NOT going to be some breakout hit? What if you did pass on that $30 million investment and the company did actually turn out to be valuable? It's easy to say post hoc, once the company has done something stupid, that the VCs ought to have known about it, but it looks different if you think back to what the VCs must have known when they were approached.


Relevant talk by 'Uncle Bob' about how Scrum became what it is now, as opposed to what it was intended to be:

https://www.youtube.com/watch?v=hG4LH6P8Syk

