The EU AI Act takes effect this year, and facial recognition is on the restricted list. You don't want to give auditors ammunition before it goes live: the top fine would cost FB around $4B, and it wouldn't be a one-time fine.
Even if only law enforcement can use it, having that feature is highly regulated.
[edit] I see this is from years ago. I should read the articles first. :)
I've found Claude works so much better if you build a CLAUDE.md and tell it you want an interactive design process.
It helps formalise your plan, then creates some code; you review it, discuss what you think is wrong, ask why it took a particular approach, or tell it to take the approach you want.
The end result is a world of difference, and I feel I have a better grasp of what is going on in the whole application.
2017 GPT could generate text that looked factual and well written but was total garbage. Compare that to 2023.
The technology is accelerating. Hard projects from early last year are now trivial for me. Even AI-related tools we use internally are being made redundant by open source models and new frameworks (e.g. OpenClaw).
It feels like we are in the AI version of "Don't Look Up". Everyone is on borrowed time; you should be looking at how to position yourself in an AI world before everyone else realises.
I was in the same mindset until I actually took the Claude Code course they offer. I was doing so much wrong.
The two main takeaways: create a CLAUDE.md file that defines everything about the project, and have Claude feed its mistakes and their fixes back into that file.
Now it creates well structured code and production level applications. I still double check everything of course, but the level of errors is much lower.
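To give a rough idea of what I mean, a made-up sketch of such a CLAUDE.md (the project, section names, and rules here are placeholders I invented for illustration, not a prescribed format):

```markdown
# Project: invoice-importer (hypothetical example)

## Architecture
- Python 3.12, FastAPI backend, Postgres via SQLAlchemy.
- Business logic lives in `app/services/`; routers stay thin.

## Conventions
- Type hints everywhere; run `ruff` and `mypy` before declaring work done.
- New features need a failing test first, then the implementation.

## Workflow
- Propose a short design and wait for my approval before writing code.
- After review feedback, update this file if a convention changed.

## Mistakes and fixes (append here whenever something goes wrong)
- Used a raw SQL string in a router; fix: move queries into the service
  layer and use parameterised statements.
```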
An example application it created from a CLAUDE.md I wrote: it reads multiple PDFs, finds the key stakeholders and related data, then generates a network graph across those files and renders it as an explorable graph in Godot.
That took 3 hours to make and test. It also supports OpenAI (LM Studio), Claude, and Ollama for its LLM callouts.
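To give a sense of the shape of the pipeline (this is a made-up sketch, not the actual code; it assumes pypdf and networkx, and stubs out the part where the real app calls an LLM):

```python
# Rough sketch of the PDF -> stakeholders -> network graph pipeline described
# above; not the real application. Assumes pypdf and networkx are installed.
import re
import networkx as nx
from pypdf import PdfReader

def extract_text(path: str) -> str:
    return "\n".join(page.extract_text() or "" for page in PdfReader(path).pages)

def find_stakeholders(text: str) -> list[str]:
    # Placeholder: the real app would send the text to Claude/OpenAI/Ollama
    # and ask for a structured list of people and organisations.
    return sorted(set(re.findall(r"\b[A-Z][a-z]+ [A-Z][a-z]+\b", text)))

def build_graph(paths: list[str]) -> nx.Graph:
    graph = nx.Graph()
    for path in paths:
        names = find_stakeholders(extract_text(path))
        # Link stakeholders that appear in the same document.
        for i, a in enumerate(names):
            for b in names[i + 1:]:
                graph.add_edge(a, b, source=path)
    return graph  # export as JSON/GraphML for the Godot front end to render
```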
One issue I can see happening is the duplication of assets at work. Instead of finding an asset someone already built, people have been creating their own.
Instead, have Claude know when to offload work to local models and which model is best suited for the job. It will shape the prompt for that model, and then Claude reviews the results. Massive reduction in costs.
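A very rough sketch of the plumbing side of that (it assumes LM Studio or Ollama exposing an OpenAI-compatible endpoint; the base_url, model names, and routing rule are placeholders, not a recommendation):

```python
# Minimal sketch of the "offload to a local model, then review" pattern.
# Assumes a local OpenAI-compatible server (LM Studio or `ollama serve`);
# endpoint, model names, and the routing rule are placeholders.
from openai import OpenAI

local = OpenAI(base_url="http://localhost:11434/v1", api_key="not-needed")

def offload(task: str, prompt: str) -> str:
    # Cheap, simple work goes to a small fast model; heavier reasoning goes
    # to a larger local model. Anything beyond that stays with Claude itself.
    model = "granite4" if task == "summarise" else "gpt-oss:20b"
    reply = local.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return reply.choices[0].message.content
```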
btw, at least on MacBooks you can run good models with just an M1 and 32GB of memory.
I don't suppose you could point to any resources on where I could get started? I have an M2 with 64GB of unified memory and it'd be nice to put it to work rather than burning GitHub credits.
You can then get Claude to create the MCP server to talk to either. Then write a CLAUDE.md that tells it to read the models you have downloaded, determine what each is good for, and decide when to offload. Claude will make all of that for you as well.
Mainly gpt-oss-20b, as its thinking mode is really good. I occasionally use granite4 as it is a very fast model. But any ~4GB model should do the job.
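For a sense of what that MCP server can look like, here's a stripped-down sketch (using the official `mcp` Python SDK's FastMCP helper as I understand it; the tool name, model, and endpoint are placeholders for whatever you run locally):

```python
# Sketch of an MCP server exposing a local model as a tool Claude can call.
# Tool name, model name, and endpoint are placeholders, not a spec.
import requests
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("local-models")

@mcp.tool()
def ask_local_model(prompt: str, model: str = "gpt-oss:20b") -> str:
    """Send a prompt to a locally running Ollama model and return its reply."""
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=300,
    )
    resp.raise_for_status()
    return resp.json()["response"]

if __name__ == "__main__":
    mcp.run()  # speaks MCP over stdio so Claude Code can register it
```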
That just moves the question to "why is this one being shared" then. I don't think "because the authors didn't know better than to avoid sharing it like 'most of us'" is a particularly good answer.
> just to have the exact same output, exact same time, but now "nicer".
The majority of code work is maintaining someone else's code. That's the reason it is "nicer".
There is also the matter of performance and reducing redundancy.
Two recent pull requests I saw that were AI generated did neither. Both attempted to recreate things from scratch rather than using industry-tested modules. One was using csv instead of polars for the intensive work.
So while they worked, they became an unmaintainable mess.
> maybe the tests aren't the best designs given there is no way I could review that many tests in 3 hours,
If you haven't reviewed and signed off then you have to assume that the stuff is garbage.
This is the crux of using AI to create anything, and it has been a core rule of development for many years: you don't use wizards unless you understand what they are doing.
I used a static analysis code coverage tool to guarantee it was checking the logic, but I did not verify the logic checking myself. The biggest risk is that I have no way of knowing that I codified actual bugs with tests, but if that's true those bugs were already there anyways.
I'd say for what I'm trying to do, which is upgrading a very old version of PHP to something that is supported, this is completely acceptable. These are basically acting as smoke tests.
You need to be a bit careful here. A test that runs your function and then asserts something useless like 'typeof response == object' will also meet those code coverage numbers.
In reality, modern LLMs write tests that are more meaningful than that, but it's still worth testing the assumption and thinking up your own edge cases.
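For example (an invented illustration, not from the project being discussed): both of these tests execute every line of the function, so a coverage tool scores them identically, but only the second would catch a real regression.

```python
# Illustration: both tests give 100% line coverage of parse_price,
# but only the second actually pins down its behaviour.
def parse_price(raw: str) -> float:
    """Turn a string like '$1,234.50' into a float."""
    return float(raw.replace("$", "").replace(",", ""))

def test_useless_but_full_coverage():
    # Executes the function, asserts almost nothing about it.
    assert isinstance(parse_price("$1,234.50"), float)

def test_meaningful():
    # Pins the actual value and an edge case a refactor could silently break.
    assert parse_price("$1,234.50") == 1234.50
    assert parse_price("0") == 0.0
```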
Thanks. Do you mean the Totally Under Control documentary from 2020?
One question: have you ever considered that the opposite of what you're saying might be true, or looked for the evidence for that? I ask because I've heard of and looked at both opinions you've expressed in your comment, and heard and seen evidence for them being true. I also did the opposite. And the opposite seemed more true when looked at objectively, with emotions and popular biases like authority bias and other harmful ones removed.
"If you can't see anything wrong with the side you agree with, and you can't see anything right with the side you disagree with, you've been manipulated." Very wise quote.
Authority bias is the cognitive bias where we give disproportionate weight to the opinion of someone perceived as an authority figure, even outside their domain of expertise.
What a lot of people outside the scientific field fail to realise is that when a claim is made by an expert in their field, it is peer reviewed and then challenged or accepted.
The whole point of science is that you can challenge wrong ideas and change your perspective.
Like I said though: if you are just going to the internet to find something that aligns with what you believe, that isn't peer review.
> Authority bias is where an expert in field X claims to have knowledge in field Y when they don't.
Did you not share this above? To me, this contradicts your definition here, which is just copying what I shared and stating it as though it's now your opinion. It comes off like gaslighting, by the way.