Mars 360-degree panoramic view from Curiosity (wsj.com)
168 points by rdamico on Aug 10, 2012 | 99 comments


Curiosity should totally become the first extraterrestrial Street View car on mars.google.com!


Have a look in Google Earth. It has street view from generations of rovers ;)


Can someone explain how/why a $2.6 billion piece of equipment appears to have cameras that can't match a $100 digital camera? I know it takes a long time for data to travel from there to here, but I am sure a lot of people wouldn't mind waiting for 1080p, full-color video footage of the red planet. Instead, we get these 1-megapixel, black-and-white shots that need to be stitched together to create a half decent panorama image. Sorry, there is probably a good technical reason for this, but I am just ignorant of it and I would appreciate an explanation.


Repost from another thread:

These are still from the navigation cameras. The main camera, not yet deployed, can take 1600x1200 full-color pictures and HD video at 10fps. There are actually two of them, so they can also make 3D images and video. Other features include wide and telephoto lenses, panorama stitching, and 8GB of flash storage. Before complaining about resolution, consider that this is like having a GoPro 3D on Mars; it has plenty of detail.

One of the articles mentions that the cameras are provided by the same manufacturer as the Viking missions, so they must certainly have improved. Expect some awesome images in the coming months!

More info: http://mars.jpl.nasa.gov/msl/mission/instruments/cameras/mas..., http://msl-scicorner.jpl.nasa.gov/Instruments/Mastcam/, http://www.nasa.gov/mission_pages/msl/news/msl20110531.html


3D images are not possible since the two cameras have two different (fixed) focal lengths (for wide and tele shots). There were plans to add zoom lenses (which would make 3D images possible) but they couldn’t make it happen (mostly because of technical issues, it seems).

(Hm, well actually, technically 3D images are possible, just not because there are two cameras. Since rocks tend to not move, Curiosity could just snap one image, drive a bit and snap a second picture.)


They've actually posted a few anaglyph 3D images. Perhaps they manipulate the images to approximate the same focal length (cropping, maybe).

Ah - they used the hazard avoidance cameras: http://photojournal.jpl.nasa.gov/catalog/PIA16002


I remember reading the zoom lenses weren't possible because of having to keep a lubricating fluid wet. This posed challenges (heating against Mars's extreme cold, etc.), so it was scrapped for the time being.


See this discussion and article from earlier today: https://hackernews.hn/item?id=4360502


Thanks! The reason seems to be: it's the way it was planned 10 years ago, and you don't go back on the plan on a project like this one.


It's more than just choosing the camera 10 years ago; it's having 10 years of testing of that particular camera under their belt and knowing every in and out of it. 10 years is plenty of time for any quirky behavior to emerge and be dealt with.


This topic was also discussed here (regarding some previous missions): https://hackernews.hn/item?id=4356878

They use a black-and-white CMOS to accurately measure the actual light hitting the lens (which may include the non-visible spectrum). To get a colour image they need to use filters and take multiple photos, whereas consumer cameras have a fixed grid (Bayer pattern) of one filter per pixel. That probably won't give you as accurate a picture. Also, when they're using filters, they're not interested in reproducing true colour but rather in highlighting differences in materials.

The bandwidth constraints are no doubt the reason they didn't change the sensor. Besides, when they combine three different images taken with filters to create one colour image, it will essentially be 6MP anyway.
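A minimal numpy sketch of that last point (my own illustration, not NASA's actual pipeline): three grayscale exposures taken through R, G and B filters stack into one colour frame, and each channel keeps the full sensor resolution, which is the sense in which the result carries roughly three frames' worth of raw data.

    import numpy as np

    def combine_filtered_exposures(red, green, blue):
        # Stack three grayscale exposures (taken through R, G and B filters)
        # into one colour frame. Each input is a full-resolution 2-D array,
        # so no colour interpolation is needed, unlike a Bayer sensor.
        assert red.shape == green.shape == blue.shape
        return np.dstack([red, green, blue])

    # Toy example with random 1200x1600 "exposures" standing in for real frames.
    rng = np.random.default_rng(0)
    r, g, b = (rng.integers(0, 256, (1200, 1600), dtype=np.uint8) for _ in range(3))
    colour = combine_filtered_exposures(r, g, b)
    print(colour.shape)  # (1200, 1600, 3)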


More like A) We aren't getting the pictures from the best cameras yet and B) We didn't send the latest, greatest cameras because we didn't want them to fail after 10 minutes of use.

We send up cameras tested and hardened against temperature changes, dust, vibration, and radiation. Also cameras that require low power.

Furthermore, there is limited bandwidth, so we probably want to send back mediocre pictures most of the time, and if we see something we really want in high quality, we take a bunch of pictures of it and put them together.

As a final note, the quality of the sensor and lens probably matters a lot more than the number of megapixels.
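On the "take a bunch of pictures of it and put them together" step: mosaicking is routine on the ground. A rough sketch using OpenCV's high-level stitcher (my own example; the file names are placeholders, not real downlinked products):

    import cv2

    # Hypothetical overlapping frames downlinked from the rover.
    paths = ["frame_01.png", "frame_02.png", "frame_03.png"]
    images = [cv2.imread(p) for p in paths]

    # The high-level stitcher detects features, matches the overlaps and
    # blends the frames into a single panorama.
    stitcher = cv2.Stitcher_create()
    status, panorama = stitcher.stitch(images)

    if status == cv2.Stitcher_OK:
        cv2.imwrite("panorama.png", panorama)
    else:
        print("Stitching failed with status", status)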


Several articles have mentioned bandwidth limitations of about 250 megabits per day, which also has to be used to send instrument data.

But I haven't seen an explanation why bandwidth is so low -- is it lack of spectrum? Interference issues?


The bandwidth to the orbiters is up to 2 Mbps, but because they're in low orbit they're only in sight 8 minutes per day.

http://mars.jpl.nasa.gov/msl/mission/communicationwithearth/...
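A quick back-of-envelope on those figures (assuming the 2 Mbps and ~8 minute numbers above, and ignoring protocol overhead and scheduling):

    # Rough daily downlink volume via the orbiters, using the figures
    # quoted above (a best case; overhead and scheduling eat into this).
    rate_bps = 2_000_000           # up to 2 Mbps on the rover-to-orbiter link
    window_s = 8 * 60              # ~8 minutes of orbiter visibility per day

    bits_per_day = rate_bps * window_s
    print(bits_per_day / 1e6, "megabits")       # 960.0 megabits
    print(bits_per_day / 8 / 1e6, "megabytes")  # 120.0 megabytes
    # So on the order of a hundred megabytes a day at best, shared between
    # images and all the other instrument data.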


I think it may be time to put a dedicated communications satellite around Mars. These rovers do seem to last a long time so it probably would be worth it.


We would need at least three for constant contact, Arthur C. Clarke style.


There is, that's the role MSO is performing, no?

What you'd need is a whole series of them to increase coverage.


A bit of clarification, I think the orbiters are in sight of Earth for most of the day.

It is Curiosity that is not in sight of the Orbiters, again because of their low orbit.


I'm not sure I understand. Gale Crater is pretty near the equator, so any equatorial satellite would pass quickly over it, but many many times per day.


You answered your own question: both satellites are in polar orbits. They pass the equator many times a day, but at a different longitude each time.


> But I haven't seen an explanation why bandwidth is so low -- is it lack of spectrum? Interference issues?

Transmit power.


According to the Wikipedia article, the power source is designed to produce 125 watts of electrical power from 2000 watts of thermal power. (It might look inefficient, but they use the heat in all parts of the rover.)

For comparison, that's roughly how much your graphics card might be consuming while you read this article.

I guess that also means it is going to move very slowly.


It's not like they have extra batteries and can just add a power strip for the better camera. The change you suggest would involve re-engineering, which would increase the very cost of this mission that you're lamenting.


Maybe Instagram built the camera?

But seriously, I guess it's for the same reason Curiosity runs on old PowerPC processors: they are just well tested and less likely to fail. If you want a high-res image, you can always take 10 images and glue them together; the result is the same.


Not quite - it's still the same res, but bigger.


I remember somebody who worked for NASA telling me it cost $750K to test the fluorescent lightbulbs for the spaceships. The crappy technology (by Earth standards) is mostly due to testing constraints, from what I understand.

Just think of age testing: they probably want the camera to last at least 5 years. If the camera you want to send up was developed in the last year, how do you really test that? In a lab, reliability testing of new components typically means temperature-cycling the equipment, up and down every couple of hours, to simulate time passing and compress the time frame required to test the aging rate, but we all know it doesn't really achieve what we want it to all that well.
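For what it's worth, temperature-accelerated ageing is usually reasoned about with an Arrhenius-type acceleration factor. A sketch of that standard model (my own illustration, not NASA's actual test plan; the activation energy is a typical assumed value):

    import math

    K_BOLTZMANN_EV = 8.617e-5  # Boltzmann constant in eV/K

    def arrhenius_acceleration(t_use_c, t_stress_c, ea_ev=0.7):
        # How much faster a part ages at the stress temperature than at the
        # normal use temperature, under the Arrhenius model. Temperatures in
        # Celsius; ea_ev is the assumed activation energy of the failure mode.
        t_use = t_use_c + 273.15
        t_stress = t_stress_c + 273.15
        return math.exp((ea_ev / K_BOLTZMANN_EV) * (1.0 / t_use - 1.0 / t_stress))

    # e.g. cycling at 85 C versus a nominal 25 C operating point:
    print(f"{arrhenius_acceleration(25, 85):.0f}x")  # ~96x under these assumptions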


" it takes a long time for data to travel from there to here"

It doesn't take that long for the data to travel; rather, the data transmission rate is not very fast, and Curiosity doesn't have constant access to the Mars-orbiting satellites that relay the data to Earth.

And besides that, its primary job is not taking pretty pictures.


If there were a satellite orbiting between Earth and Mars, parallel to the orbit of Mars, there could be better transmission.


The main thing is that the rover only has about 8 minutes a day to transmit to the orbiting satellites.

The optimal thing would be a satellite in areostationary orbit over Mars' equator such that it is always reachable by the lander.

Unfortunately such a satellite wouldn't be well-placed for many other potential landing sites.


I believe the reason NASA sent the probe was not to take "1080p, full-color video footage", but rather prove something scientifically about life on Mars with other more suitable instruments. They couldn't care less about video footage. A shame really, but that is the main reason.


If they expect taxpayers to foot a $2.6B bill, they need to give serious consideration to things which cause excitement and emotional resonance in those same taxpayers. Having cameras with state-of-the-art resolution (and an audio microphone, in my opinion) in order to capture and transmit back a "you are there" experience should be a priority. If it were a totally privately-funded operation, then they could do whatever they want. I understand the desire to have rock-solid confidence in the overall system and its components, thus the testing-fest, the analysis-fest, the ultra-conservative instincts. But I'm sure there are ways to, say, isolate a "higher-risk" component like a modern camera/microphone and design for redundancy. And if you're going to spend $2.6B, you might as well cough up an additional $100M, for example, if that's all it would take to put better, human emo-optimized sensors on there. Precisely because it takes so long and is so expensive anyway. So once it lands there and deploys, you can yield maximum ROI, including emotional/psychological ROI.

Anyway, if a NASA operation never gets around to doing such a thing, I bet if SpaceX puts a craft on Mars they'll have the philosophy where they could justify doing it.

And this is only a minor criticism on my part. I have massive appreciation for the NASA engineers and team, and what they just accomplished with the MSL landing. I think this issue is just one little blind spot that's easier for somebody outside their org to see and point out. They clearly nailed all the hardest bits of the mission, and "stuck" their EDL phase, even though it was so seemingly complex. Between SpaceX, the NASA Mars teams, Planetary Resources, Armadillo Aerospace, etc., I feel these folks are taking us back to the historical trajectory we all thought we were on as kids. They're taking us back into the Space Age again, leaving the static doldrums of the Shuttle/ISS era.


When the design was finalized these cameras WERE pretty state of the art. Once something is finalized for a space mission it is very very hard to change it because everything else is designed around it.


"... Can someone explain how/why a $2.6 billion piece of equipment appears to have cameras that can't match a $100 digital camera? ..."

Why is this marked down? It's a great question because the explanations are not obvious. The camera design relates specifically to the mission requirements & technology available ~ https://en.wikipedia.org/wiki/Space_probe#Probe_imagers & https://en.wikipedia.org/wiki/List_of_probes_by_operational_...


IIRC one of the cameras can take 720p frames at 10 FPS, which is adequate since it's not going to be a very action-packed video. They'll get to it eventually, though. The slow uplink is an issue, but having local storage available in orbit around Mars really does help a lot when they can use it. It's also important that the camera works in that environment, though, so they can't pick up a nice camera from Amazon and duct-tape it on. Just about everything they use is either custom or a hold-over from previous missions.


If they had used a $100 camera, it would already be broken, never mind lasting 2 more years.


Yeah. I'm not even sure the mechanics of a $100 camera could survive a vacuum, much less deep space and entry into Mars's atmosphere.


This release suggests that the image is only 1/8 of the full resolution capability of the camera system:

http://www.jpl.nasa.gov/news/news.cfm?release=2012-237#5

It also links some other images (and full image files, rather than a flash viewer).

Edit: Also, the tif I just downloaded has more detail than the WSJ viewer makes available.

So if you are judging based on the quality of the image in the linked story, maybe give it some more time.


When the $100 camera fails on Mars, you can't just replace it with another one. You can't take it to the repair shop either. You can't try to "shake it a bit". Hell, you can't even bang it on your knee:

http://www.pechorin.com/m/2006/06/13/SONY_Cybershot_E6100_ER...


You know, I didn't claim to understand the reasons behind the technical limitations, but the reactions of some of you are puzzling to me.

We can go on and on about familiarity considerations, and planning considerations, and instrumentation considerations, and scientific concerns, etc. It all sounds like excuses after a bit.

Let's reformulate the question: Why not keep the same enclosure, the same lens assembly, the same mechanical parts and simply swap the 2MP sensor for a 20MP sensor?

Further, let's say you are really not sure about that spanking new 20MP sensor, because it is new, you are not familiar with it, and it was not part of the plan drafted 10 years ago, why not add 1 extra camera with a 20MP sensor? If it blows up during the trip or during landing, too bad, it's lost. If it makes it down there safe and sound, you can collect a 10-minute HD sequence that you can stream back to earth over the next 2 years, during down-times, as a very low priority task.

Potential benefit: You offer the world the first and a truly spectacular HD film shot on Mars, in all its splendor, and you inspire a new generation of world explorers, who will ultimately drive man to take another small step, but a giant one for mankind.

Clearly, if that's the best we can do for something as trivial as shooting a video up there, I cannot fathom a man setting foot on Mars in my lifetime. That may be normal, but it saddens me somehow.


Say you were developing a website, and once you get the first real life visitor, you can't SSH in any more; it's set in stone forever. Would you swap out a JSON parser on your backend just before you ship? They're functionally equivalent, and should be interchangeable, right? Your unit tests say it's fine, so what's the problem?

Any kind of change is adding risk to a project that costs billions of dollars.

The rover is a one-of-a-kind project, and any issues are set in stone before any kind of real-world testing can be done.

If anything, the team should be praised for being conservative enough that they managed to pull it off with success.

You're right that dropping an extra 20 megapixel camera in there would be less risky than changing the main lens; but it's still an unspecced addition to the plan. I don't blame them for choosing not to.

In the future when launch costs are cheaper, people with a hacker approach are going to make rovers on their own. Lots of experimental designs, safety in numbers, and they'll do amazing things.

But when you've got one shot, your risk profile makes you conservative in the extreme.


> We can go on and on about familiarity considerations, and planning considerations, and instrumentation considerations, and scientific concerns, etc. It all sounds like excuses after a bit.

It sounds like excuses? By what standard? That NASA operations don't move at the same speed as consumer technology? By what qualifications do you even have the balls to make a flippant comment like this?

> Further, let's say you are really not sure about that spanking new 20MP sensor, because it is new, you are not familiar with it, and it was not part of the plan drafted 10 years ago, why not add 1 extra camera with a 20MP sensor?

You don't even sound remotely technical when you blurt out some nonsense like this. Ever heard of the phrase: "Fast, good, cheap: pick two"? Well when you're sending shit to other planets it ain't gonna be fast and it ain't gonna be cheap, so it better damn well be good. And not good the way an iphone is good where it dazzles you and your hipster friends to wait in line so they can sell a hundred million of them. Good as in, you build one of them, you get one shot at it, and if you fuck up some small detail hundreds of millions of dollars and years of people lives are utterly wasted.


> Let's reformulate the question: Why not keep the same enclosure, the same lens assembly, the same mechanical parts and simply swap the 2MP sensor for a 20MP sensor?

You can't "simply" swap the sensor. What about the processor behind it, that interpolates the sensor data and produces a viewable image? Then you need more power for both of those units, and if you're moving to a 20MP sensor, probably size too.


Let's reformulate the question: Why not keep the same enclosure, the same lens assembly, the same mechanical parts and simply swap the 2MP sensor for a 20MP sensor?

sigh

I expected more from the people of HN than to be stuck in some stupid megapixel race like they're comparing cameras at Best Buy

There, I said it

The first Canon digital EOS cameras had a 3MP sensor. I'd bet they'd wipe the floor with most of today's compact cameras

In a camera, the sensor is much less important than the optics. In a 20MP sensor, the noise in a high-radiation environment would probably be much worse as well


  > Why not keep the same enclosure, the same lens assembly,
  > the same mechanical parts and simply swap the 2MP sensor for a 20MP sensor?
Because:

  > The other advantage of the Truesense Imaging chips was the team's familiarity
  > with their behavior. 'We've built-up decades of cumulative experience of working
  > with Kodak and now Truesense interline sensors. We know how to clock them and
  > drive them - they're a very easy CCD to drive,' says Ravine. A similar level of
  > confidence was needed for the cameras’ memory, he says: 'the flash we ended up
  > using was because we had a lot of radiation test data for it.'
http://www.dpreview.com/news/2012/08/08/Curiosity-interview-...


> Let's reformulate the question: Why not keep the same enclosure, the same lens assembly, the same mechanical parts and simply swap the 2MP sensor for a 20MP sensor?

You do know that a 20MP sensor is significantly (~10 times) larger than a 2MP sensor, right? So the mechanics would probably simply not fit.


Not necessarily, and this is important

If they are the same size, the pixel area will be 10 times smaller for 10x more pixels (or a reduction of ~3.2x in each linear dimension)

Now, noise is bigger if the pixels are smaller. This is one of the ways to radiation harden components: make their features larger.

I'm sure a small sensor like those in a compact camera would be subject to a lot of noise. Bigger sensors like Four Thirds or full-frame are usually better (but they're probably using something else)

So yeah, maybe a 20MP sensor with the appropriate feature size would be huge
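To put rough numbers on that, here is a quick pixel-pitch calculation for the same (illustrative, not the actual Mastcam) die size at 2 MP versus 20 MP:

    import math

    def pixel_pitch_um(width_mm, height_mm, megapixels):
        # Approximate pixel side length in microns for a sensor of the given
        # physical size and pixel count, assuming square pixels.
        area_um2 = (width_mm * 1000) * (height_mm * 1000)
        return math.sqrt(area_um2 / (megapixels * 1e6))

    for mp in (2, 20):
        print(mp, "MP ->", round(pixel_pitch_um(10.0, 7.5, mp), 2), "um pitch")
    # 2 MP  -> 6.12 um pitch
    # 20 MP -> 1.94 um pitch  (sqrt(10) ~ 3.2x smaller side, ~10x less area
    #                          per pixel, hence noisier pixels)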


Obviously I was assuming same pixel size. Sorry if that wasn't clear. You're correct about the noise, of course. Pixel size is at least as important as the number of pixels. That's why a 4 megapixel cell phone sensor ($15) is vastly inferior in image quality to a 2 megapixel industrial sensor that's ten times larger in area ($200).

I have a 24 megapixel sensor lying on my desk. Its pixels are 6 microns wide, which is large (but not huge), and the thing is about 4 cm by 3 cm in size. I don't know much about cameras, but I think that's about the biggest you can fit in a typical consumer camera body.


Huh, no, not at all. The last 10 years have been spent increasing photosite density. Today's sensors can have 10-20 times the resolution, and be the same physical size or smaller. Today's sensors are also more reliable, less sensitive to dust and noise than sensors made 10 years ago.


Sure it is, when we're talking about the same sensor and pixel design, which is implied when you say "simply swap the sensor". Otherwise it's a completely different camera setup.

Also, while obviously technology has improved, the industry doesn't necessarily move towards smaller pixels as a rule. To illustrate that: I work for a company that designs and produces image sensors, and I can tell you that we have one sensor product that has a resolution of just over one megapixel, but it's larger in area than our seventy megapixel product.

It all depends on the specifics of the project, and I can imagine that operating in space has certain constraints that prohibit just swapping in another sensor. These cameras are scientific instruments, and that means different rules apply.


I think there's also the factor that there's no real advantage to the mission in a 20MP camera.


It's so plebeian to complain about the image quality, especially when we haven't seen any full images from the rover's main camera yet.


Well, we already know that the main camera has only twice the megapixels, so we are back to stitching images to increase the quality. Fortunately nothing moves much on Mars over the timescales that the camera operates, so it's perfectly reasonable to get the quality up that way, just as we have seen with Spirit and (the still working) Opportunity.

But as much as we all want to accept that taking pretty pictures is not Curiosity's main mission, it is a very important part of the mission, because it's pictures, not data, that get the public's attention and, more importantly, make future missions to Mars a priority for politicians who see a voting public willing to support them.


It may even be "plebeian" to ask questions about the technical features of a data collection instrument named "Curiosity". I should have been an aristocrat instead of an engineer; I could marvel at everything wrapped in the bliss of my ignorance.

And for the record, I think it is amazing and wonderful to receive these images. I am so thankful for that. But, for a data collection instrument that cost $2.6 billion to build, I cannot conceal my very personal feeling that an opportunity was missed.


The $2.6 billion price tag is not just for the rover, it includes R&D for the rover itself, employee salaries, Sky Crane & parachute R&D as well as the rocket to launch it, which cost over $100 million.

Also included in the budget is funding needed for all of the science teams to do what they do over the next few years while the rover does experiments.


The rover has been there for only a few days. They probably haven't even turned most of its instruments on!


I'm still trying to wrap my head around the fact that this piece of metal traveled 350 million miles through space and then landed on Mars.


Me too. I mean, look at this. The place looks like a desert on Earth, and yet it's millions of miles away in space, on another planet. There's a certain feeling I experience when I look at those photos, like there is a vast, cold void inside my chest. I felt something similar when I touched a meteorite for the first time. That rock came here from god knows where, probably it was floating around in space for many, many years, and now here I am holding it.


That rock came here from god knows where, probably it was floating around in space for many many years

Don't sell yourself short, you yourself were floating around in space for about ten billion years. Or at least, your constituent atoms were. :)

"We, who embody the local eyes and ears and thoughts & feelings of the cosmos, we’ve begun - at last - to wonder about our origins. Star stuff, contemplating the stars, organized collections of ten billion, billion, billion atoms, contemplating the evolution of matter, tracing that long path by which it arrived at consciousness here on the planet Earth and perhaps - throughout the cosmos." ~ Carl Sagan


Don't forget the part where it rode on top of a giant rocket to escape an unfathomably deep gravity well.


And was eased to a landing by a rocket-powered sky crane. Yeah, it's cool that it rode a rocket, but man has been doing that for some time. But sky cranes? Completely automated? And no crashing?!? Holy crap...


... using first aerobraking, then a giant parachute, and then a sky crane with thrusters.


Yeah, I just watched the launch again last night...it was nine months ago.


Well, it doesn't look like path planning is going to be an issue for the first couple of km :-)


Very cool! Does anyone else think the interactivity adds nothing and just makes looking at a big image more difficult?


Yes, I agree.


Why is it all black-and-white? Is it because color cameras are too heavy/expensive to take or because color photography is for some reason impossible under those conditions?


All camera sensors are actually black-and-white. To obtain color information, the sensor has a color filter. This filter is optional/interchangeable in this camera. To obtain a color photo, you take three shots with three different filters (R, G, B). This way you can also get UV and IR pictures, and more.


To clarify that a bit more: consumer cameras use a Bayer filter, which is an alternating R/G/G/B filter over the pixel array. So with a consumer camera you take just one picture, and then software assembles that into a color picture.

For space missions you would definitely prefer a camera with interchangeable filters, because then you can also have filters for other spectral ranges.
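A tiny sketch of that difference (my own illustration): a Bayer sensor records only one colour per pixel in a repeating 2x2 pattern and has to interpolate the rest, whereas a filter-wheel camera records every pixel at full resolution for each filter in turn.

    import numpy as np

    def bayer_mosaic(rgb):
        # Simulate an RGGB Bayer sensor: each pixel keeps only one of the
        # three colour channels, following the repeating 2x2 pattern, so a
        # full-colour image later has to be interpolated (demosaiced).
        h, w, _ = rgb.shape
        mosaic = np.zeros((h, w), dtype=rgb.dtype)
        mosaic[0::2, 0::2] = rgb[0::2, 0::2, 0]  # R
        mosaic[0::2, 1::2] = rgb[0::2, 1::2, 1]  # G
        mosaic[1::2, 0::2] = rgb[1::2, 0::2, 1]  # G
        mosaic[1::2, 1::2] = rgb[1::2, 1::2, 2]  # B
        return mosaic

    scene = np.random.default_rng(1).integers(0, 256, (480, 640, 3), dtype=np.uint8)
    bayer = bayer_mosaic(scene)        # one colour sample per pixel
    red_filter_shot = scene[:, :, 0]   # a filter-wheel "red" exposure: full res
    print(bayer.shape, red_filter_shot.shape)  # (480, 640) (480, 640)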



Most (all?) of the pictures so far are from the hazard-cameras, which are used by Curiosity to make sure it doesn't fall off a cliff. (Or, less dramatically, to make sure it doesn't get stuck in some gravel.) The fact that the images are beautiful to us is just gravy.


I am surprised by how many good-sized pebbles are on the top surfaces of Curiosity. I guess I expected dust, but those are pretty big. Kicked up during the landing?


I had assumed that they were blown there by the wind. I'm sure it's just as likely they settled after being kicked up during the landing.

[EDIT] From the nasa website [1]:

Today's Sol 3 morning and afternoon passes by NASA's Mars Odyssey and Mars Reconnaissance Orbiter spacecraft provided a plethora of new data, including more high-resolution black-and-white 360-degree and deck panorama images from her Navigation Camera, or Navcam, which revealed some small pebbles deposited on the deck during landing, which should pose no problems for mission operations. Curiosity also returned 130 low-resolution thumbnail images from the color Mast Camera, or Mastcam, providing scientists and engineers with their first color panorama glimpse of Gale Crater.

[1] http://mars.jpl.nasa.gov/msl/news/whatsnew/index.cfm?FuseAct...



Here are some more pictures (including color) http://photojournal.jpl.nasa.gov/targetFamily/Mars

I wonder what the sky of Mars looks like. Can we see Earth?


> Can we see earth?

At night, as a pretty bright star, I'd guess. That would be an awesome picture, especially if they expose it long enough.


I found this http://www.msss.com/mars_images/moc/2003/05/22/

IIRC, Malin is the guy in charge of the cameras on this mission, too, so that's pretty much from the biggest expert on this topic.

EDIT Make sure you click on the full image -- that is amazingly beautiful!


There it is then, going around and taking pictures. A bit lonely out there though, I must say.

Space programs are always, by nature, unbelievable stunts. There are so many things that can go wrong and then they don't, and you end up landing a tonne of sensitive equipment safely on another planet and there it is then, sending vacation pictures to Earth.

If they can do that, what could we do or what could I do that I don't?


Also, I think the choice of the low-res camera is related to it being better shielded against space radiation: http://www.sciencedirect.com/science/article/pii/S1350448710...

It's still somehow unbelievable that I'm actually seeing photos from another planet almost in real time.


Intellectually stimulating, but after the first few pictures... well, there's nothing to see there, just sand and rocks. The most visually interesting thing on this panorama image is the rover itself, so I don't mind the relatively low quality. Now I'm waiting for some non-visual data. Curiosity has a lot of sensors besides the cameras.


The Cosmos magazine's Mars issue on the iPad has a Mars panorama which utilizes the gyroscope.

http://www.youtube.com/watch?v=xy3r8k8hX-8

Sadly it's not the panorama Curiosity took but the one from the previous Mars rover.

Still, it's pretty awesome.


Can someone explain what the marking on Curiosity next to the NASA logo represents (middle part, white circle)? It is present all over the robot.


I think it indicates moving parts.


Any thoughts on why the top area has been blacked out from view? Protecting devices? Limitations of the equipment?


I suspect they haven't taken those photos yet.


Anyone know what the circular symbols (similar to the BMW logo) all over the rover mean?


They look like calibration targets to me.


I asked the same question. My guess is these are movement markings as seen on crash dummies.


Why is it all just barren desert? :D

Too bad it's not enough for a complete 360° panorama :(


Full screen is tight. After a few minutes of looking around at the landscape, I find myself more interested in a cool 360° view of Curiosity itself, not the view from it.


amazing...


A quick take on the project.

Software Architect: "Hello sir! As you know, it has taken millions to billions of dollars and years of time to develop your project. We are ready to launch! As you know, it will take a year to actually deploy it in production, during which time a bug or misconfiguration of the software during the transfer, installation, or activation stages would cause it to fail. If a failure happens, we will need millions of dollars and lots of time to rebuild the project, as well as to figure out what went wrong and to try again. However, if we do succeed in installing the project as we hope, we expect it will be able to run for ten years without updates. If you do need an update, or if an error is encountered or the hardware deteriorates (such as because of a memory error, hard-drive crash, etc.), then parts of the application will never work again. Also, as a reminder, the user will have to wait many hours after submitting input before receiving acknowledgement from the system, and many days to receive even partial results."

Customer: "Thank you for your report. It sounds great. I have one suggestion. Let's make sure that when we ask the system for an image, we use the .BMP format to increase the file size. This way, we can please the media with numbers about how much raw data we have obtained."

Software Architect: "This will take additional time and millions of dollars and not be much of a benefit."

Customer: "Ok, fine let's skip that."


Hacker News went offline before I could finish editing the introductory sentence. The point was that a software project with the same time and cost requirements would also have interesting constraints: once it's launched, it can't be patched, and has to run for ten years. Would you as a manager really risk a failure?

Don't forget that the software can fail at any time during the year it takes for the software to install, too. Any failure before installation or after would require the project to be rebuilt for millions of dollars and months of delays.

Given that receiving a confirmation that your input was accepted already takes hours, and that returning output as textual data will serve the project's purpose for the next ten years, the company should probably not delay the launch of the project even further just to be able to return .BMP screenshots of the text files, just to impress the media with how much raw data was found.


What's the relevance of this? Since they're bandwidth-limited, using a less efficient compression scheme wouldn't gain them any extra transmitted bytes, only less actual data.


My point was that delaying an expensive and years-long software project that's already prone to failure, just to replace a perfectly adequate feature with another whose only benefit is to impress the media, would probably be considered stupid. In my example, it would be to receive images as BMP files just to be able to brag to the media how much data was received. (Maybe I shouldn't have used images in my example software project since it confused people, but I blame Hacker News for going offline before I could make improvements.)


Is that BMP thing based on something that actually happened?



You are clueless; NASA uses lossless and lossy compression on its image data.



What are you talking about?



