How can I practically verify 2TB of a life's worth of files while guaranteeing I won't have data loss due to some edge cases and race conditions that delete my data?
Every time I've created my own backup script, I've realized that knowing what to delete and when is not easy. IMO the practical solution to this is to just pay for more storage (within reason).
How can you guarantee you'll have access to your 2TB Google Drive when they ban your Google account for breaching terms or accidentally tripping a circuit breaker across one of their offerings?
A backup is not something I fear losing access to, because by definition it’s a copy.
However, I am more afraid of my data being exfiltrated, and IMO there is more risk of that with a “vibe-coded, one-person, week-old” app than with any of the major providers.
This is unironically why I do not depend on Google for products this important. I do have premium Google Drive since I needed barely over 15GB, but my main cloud storage is Dropbox. A YT comment I made 10 years ago can't break Dropbox's TOS, and since premium storage is their whole business, they take the product more seriously.
I also have a 14TB RAID 5 NAS at home, and my desktop PC has 6TB of RAID 5 (I had that first; it's mostly used for video games these days).
> How can I practically verify 2TB of a life's worth of files while guaranteeing I won't have data loss due to some edge cases and race conditions that delete my data?
Same with literally every other piece of backup software. Have two, and test restorations regularly. It's not easy, but nothing that's worth having when you need it ever is.
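To make "test restorations regularly" a bit more concrete, here's a minimal sketch of the kind of verification pass I mean: hash every file in the source tree and compare it against the copy in the backup. The paths and the choice of SHA-256 are just placeholder assumptions, and this does nothing about race conditions while files are still being written; it only tells you whether the two trees currently match.

```python
# Minimal sketch: compare a source tree against a backup tree by content hash.
# The paths below are placeholders; point them at your own source and backup.
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Hash a file in 1 MiB chunks so large files don't blow up memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(source: Path, backup: Path) -> list[str]:
    """Return relative paths that are missing from, or differ in, the backup."""
    problems = []
    for src_file in source.rglob("*"):
        if not src_file.is_file():
            continue
        rel = src_file.relative_to(source)
        dst_file = backup / rel
        if not dst_file.is_file():
            problems.append(f"missing: {rel}")
        elif sha256_of(src_file) != sha256_of(dst_file):
            problems.append(f"differs: {rel}")
    return problems

if __name__ == "__main__":
    for issue in verify(Path("/data/photos"), Path("/mnt/backup/photos")):
        print(issue)
```

Run something like this against both backups on a schedule and you'll at least know when a copy has silently drifted, even if it can't prove the script that produced the copy was race-free.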
I will never use OneDrive even if they paid me. I use Dropbox + a homelab NAS with RAID 5 + an old desktop PC with a RAID 5 array. I have a lot of RAW photos to keep.
I'll never forget when my grandpa died 20 years ago: the first thing my dad did - even before telling us - was look for photos. His siblings did the same, and they came up with a collage of around 30 photos I had never seen before that gave me a small glimpse of the highlights of his life.
My other grandpa controversially spent a big chunk of their wedding money on a good camera. They traveled the world and lived abroad for several years right before and after my mom and aunt were born. Because of this, we are all able to see a fascinating and meticulous glimpse into their lives. Each photo tells a story, even if the story is boring, and I really appreciate the small details. Even random pictures of cars my grandpa thought were cool. Or the mean guard dog they had in Taiwan while it was still a puppy. Or my mom on the Trans-Siberian Railway in the middle of the Cold War.
These stories and my own appreciation of photography have made me realize how valuable every photo I have is, and I'm willing to put in effort to save them. When I'm old and dying of dementia, I'll be able to look back at my life in incredible detail one last time. Even the dumb memes I decided to save will tell a story.
I still have a deep appreciation for living in the moment and knowing not everything should be captured, but we live in an era where I have a really good camera in my pocket at all times, and the ability to store all those photos forever cheaply.
In my opinion there are two main groups at the ends of the "vibe coding" spectrum: the non-technical users who love it but don't understand software engineering well enough to know what it takes to make a production-grade product, and on the opposite end, the AI haters who used ChatGPT 3.5 and decided LLM code is garbage.
Both of these camps are the loudest voices on the internet, but there is a quiet yet extremely productive camp somewhere in the middle with enough optimism and open-mindedness, along with years of engineering experience, to push Claude Code to its limits.
I read somewhere that the difference between vibe coding and "agentic engineering" is whether you actually understand what the code does. Developing a complex website with Claude Code is not very different, in terms of risk, from managing a team of offshore developers.
Unless you are writing software for medical devices, banking, fighter jets, etc., you are doing your career a disservice by actively avoiding LLMs as a tool for developing software.
I have used around $2500 in Claude Code credits (measured with `bunx ccusage`) over the last 6 months, and 95% of what was written will never run on someone else's computer, yet I have gotten ridiculous value out of it.
I agree with the other comment that measuring productivity is pointless, as there has never been a good way to do this.
But the closest answer I can give you (without detailed examples of work projects) is that I can prototype things faster than my pre-AI, pre-Covid team of 5 devs + 1 BA + 1 manager could. The speed isn't just from faster code generation, but from a fundamental paradigm shift away from the commonly accepted project management philosophies. Agile and Scrum are (in my experience) meant to protect developers from "wasted work" or "throwaway code" and to placate the non-technical stakeholder fantasy that they know product best and can micromanage their way to a predictable timeline.
I have effectively been working as a team of 1, and I have been able to prototype things in days or weeks that would have taken months before. 95% of the code generated by Claude is throwaway, but the goal is to discover the real requirements faster. In the old model, every step and possible risk needs to survive three meetings, and if the story points are arbitrarily high, we have to split the tasks into more tasks.
Ironically, the obsession with quantifying productivity is what killed productivity. People who live through spreadsheets would rather have 10 units of measurable productivity than 50 units of unmeasurable productivity.
These kinds of comments are so spectacularly useless. It was almost impossible to measure productivity gains from _computers_ for nearly two decades after they started being deployed to offices in the 1980s.
There were articles as late as the late 1990s that suggested that investing in IT was a waste of money and had not improved productivity.
You will not see obvious productivity gains until the current generation of senior engineers retires and you have a generation of developers who have only ever coded with AI, starting from when they were in school.
It was not impossible to measure them. It is just that you don't like the result of the measurement - early adopters often overpaid and ended up with less efficient processes for more money.
Eventually companies figured out how to use them effectively, and useful software was created. But at the start of the whole thing, there was a lot of waste.
Quite a lot of people are now paying a lot for AI that makes them produce less, at lower quality, because it feels good and novel.
Purchases that wouldn't go through if they didn't reduce competition shouldn't happen anyway. Banning those kinds of restrictions would help with that.
Regardless, one of the conditions surely is giving them permission to sell this to Starlink as well as everyone else. So whether the information is the same is probably irrelevant; how they are using it is what matters.
Probably, because you are now associating your internet browsing with your personal information. (I don't know if they have the sophistication to actually do this, but it is very possible.)
Calling everything a logical fallacy is also a logical fallacy.
We have already seen the federal government use facial recognition data to create an app that tells ICE goons who's legal. We should not tolerate the government forcing more data tracking and privacy violations just because you are not "sliding" today.
This is also why I think we will enter a world without Jrs. The time it takes for a Sr to review a Jr's AI code costs more than the Sr producing their own AI code from scratch. Factor in the lack of meetings on a Sr-only team, and the productivity gains will appear to be massive.
Whether or not these productivity gains are realized is another question, but spreadsheet based decision makers are going to try.
The business leaders do not care about this yet. I think a lot of people think we already have more Seniors than we will need in the next 5-10 years.
Also - the definition of Senior will change, and a lot of current Seniors will not transition, while plenty of Juniors that put in a lot of time using code agents will transition.
>while plenty of Juniors that put in a lot of time using code agents will transition.
But will they? I'm not at all convinced that babysitting an AI churning out volumes of code you don't understand will help you acquire the knowledge to understand and debug it.
The bet from various industry leaders appears to be that the current generation of engineers will be the last who will ever need to think about complex systems and engineering, as the AI will just get good enough to do all of that by the time they retire.
I think it’s deeper than that because it’s affected more industries than software and already started pre AI.
American corporate culture has decided that training costs are someone else’s problem. Since every corporation acts this way, all training costs have been pushed onto the labor market. Combine that with the past few decades of “oops, looks like you picked the wrong career that took years of learning and/or 10 to 100s of thousands of dollars to acquire, but we’ve obsoleted that field,” and new entrants into the labor market are simply choosing not to join those fields.
Take trucking for example. For the past decade I’ve heard logistics companies bemoan the lack of CDL holders, while simultaneously gleefully talk about how the moment self driving is figured out they are going to replace all of them.
We’re going to be outpaced by countries like China at some point because we’re doing the industrial equivalent of eating our seed corn and there is seemingly no will to slow that trend down, much less reverse it.
If you look at the Luddite rebellion, they weren't actually against industrial technology like looms. They were against being told they weren't needed anymore and being thrown to the wolves because of the machines.
The rich have forgotten they are made of meat and/or are planning on returning to feudalism à la the politics of Yarvin, Thiel, Musk, and co.
Apprenticeship. You will have to prove to the company that keeping you on at a minimal wage is still beneficial. Or we can take it even further: you will have to pay the company to get the necessary experience. Maybe you sign a 5-year contract with a big cancellation fee. It is not unheard of. I remember some navy schools having something like this: you study for 5 years for free (room and board are paid by the school), and then you have to work for the navy for at least 5 years or pay a very big fine if you refuse.
The highest-volume OTR shipping lane is LA to Phoenix, which is already the perfect place for self-driving vehicles.
I've been saying for years that trucks should drive autonomously from one mega parking lot outside a city to another at night, and humans should handle the last mile during the 7-to-3 shift.
My biggest frustrations with it aren't even related to the look of things; it's the all-around disregard for user experience. The new screenshot UX on iOS is an insanely bad downgrade.
I think it makes sense. They refocused it on sharing or extracting information from screenshots, which is what people want more than saving them to the camera roll. Being able to copy text or translate the text in a screenshot is super useful.