Justin from Zencoder here... I've been doing a bit of similar testing and seeing similar results. So they're obviously controlling the settings per frame (or per segment) during the encode, meaning the values in that encoder line are not very useful in analyzing what they're doing.
I'm definitely seeing a loss in quality on the "mini" encodes relative to the source -- mostly a loss of grain and fine detail (in hair, etc.).
However, if they're analyzing the motion within the video and using perceptual algorithms to determine the focal points, then it's totally fair to throw away detail in peripheral sections; likewise, removing grain from a blurred-out portion of the screen that's panning makes sense if your eye wouldn't perceive it anyway. Essentially, this seems to be a much more aggressive version of x264's "psy" optimizations.
So the resulting video might look very similar in quality to someone watching the movie, but not to someone analyzing the frames and details. There's a hard line to draw on how that's marketed -- it's not actually "without losing quality", but it might be "without looking worse"?
Based on the comparisons I've done on the second clip so far, they seem to be doing absolutely nothing special -- at approximately the same bitrate and settings, the videos are practically identical quality-wise (in fact, the most recent x264 seems to fare a bit better).
I'm going to do two sets of comparisons for each clip: Beamr's video compared to x264 with similar settings and bitrate, and then Beamr-like settings versus high-quality settings, compared at a much more realistic bitrate -- one you'd actually see in use for digital video on the internet.
I just came back to add that, encoding 2-pass to the same bitrate that theirs results at, I have a very hard time telling the difference. I look forward to your comparison sets.
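Matching "the same bitrate that theirs results at" for a 2-pass encode just means deriving the average bitrate from the reference file's size and duration. A minimal sketch (the function name and the 450 MB / 25 min figures are illustrative, not Beamr's numbers):

```python
def target_bitrate_kbps(file_size_bytes: int, duration_s: float) -> int:
    """Average bitrate in kbps, suitable for x264's --bitrate flag."""
    return round(file_size_bytes * 8 / duration_s / 1000)

# Hypothetical example: a 450 MB encode running 25 minutes
print(target_bitrate_kbps(450 * 1000 * 1000, 25 * 60))  # → 2400
```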
I agree. Intrusive LCD panels at the table, removal from human interaction, horrid restaurant-chosen hierarchies of categories to wade through, etc...
With menus, everybody gets one and can peruse as they see fit, rather than a little shared computer. With a waiter/waitress you can talk through special options or dealing with split bills and anything like that.
And don't get me started on social and tracking -- I can't think of anything that would more quickly destroy my interest in a restaurant than having to log in with twitter or facebook. (And believe me, if the restaurant posted/tweeted on my friend's behalf, they would lose my business.)
If I'm going on a date, I want a chef. If I'm going to a cheap izakaya in Tokyo with 4-12 friends for cheap drinks and snacks, I'm perfectly happy with color LCD ordering touchscreens, which they've had here since 2005.
There's a time and place for both. Being able to sit and eat with a group of friends and not have a waiter interrupt your conversation can be a plus.
>> "Being able to sit and eat with a group of friends and not have a waiter interrupt your conversation can be a plus"
But that's exactly my point. Without useful methods of communication with waiters, they will likely interrupt, and/or you will be kept waiting.
I'm flabbergasted by the number of people who seem to treat efficient communication as mutually exclusive with a high-end restaurant or good customer service.
They aren't mutually exclusive at all; in fact, they are in pursuit of exactly the same goal.
Here's a more concrete example of how efficient communication would improve your dining experience. (Disclaimer: The startup I founded, Cloud Dine Systems, does this, so take this with a bit of salt since I'm a bit biased.)
You want a refill of your coffee during the breakfast rush. You can either try to flag down one of the wait staff or send a text to the restaurant with "#Table12 can you refill my coffee?" The wait staff gets the text and delivers your refill. Notice that you send off the asynchronous message and keep talking with your friends instead of pausing the conversation to flag down staff. It translates to better service and a better dining experience because the timing matches your needs. Similar examples follow for anything you would flag a waiter down for.
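On the implementation side, routing such a text is mostly a matter of pulling the table tag out of the message. A minimal sketch assuming the "#Table12 ..." format from the example above (the function name and message format are illustrative, not Cloud Dine Systems' actual protocol):

```python
import re

def parse_table_request(sms: str):
    """Split an inbound text like '#Table12 can you refill my coffee?'
    into (table_number, request). Returns None if the tag is missing."""
    m = re.match(r"#Table(\d+)\s+(.+)", sms)
    if m is None:
        return None
    return int(m.group(1)), m.group(2)

print(parse_table_request("#Table12 can you refill my coffee?"))
# → (12, 'can you refill my coffee?')
```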
IMHO, one of the more pressing problems is efficient communication within the restaurant. Here's where the gains from better communication are the greatest. But that's a whole different conversation.
It sounds like EC2 and GCE are using different terminology here, so Google is giving you half as many cores as you think but thanks to Sandy Bridge they're crazy fast.
We went back and forth on naming the machine types. I'm sure you can imagine those discussions. In the end we opted for naming them based on the number of virtual CPUs from inside the VM. This is easier to remember than arbitrary sizes (small, medium, large) and is always going to be an integer. Hopefully this naming scheme can hold up and make sense over time as new machine types are introduced.
The cases may be different crimes, and in different areas, but it's just illustrative of how out of balance the justice system is. When the victim is high-profile, the enforcement is swift and overbearing. When the victim is unknown or powerless, the enforcement is often nonexistent.
Seizing personal computers, cameras, cell phones, servers, and paperwork, for a case where the property has already been returned, might be legal but is also a completely inappropriate response. There's already a very clear case here, and lives are not in danger. It doesn't matter if he's a journalist or not, this is just not the level of action that should have been taken. We should be able to expect fairly consistent responses to legal situations, based on the severity of the crime and the impact, rather than based on the identity of the victim or the media attention focused on the case.