Speaking generally and without having access to QC procedures or records, I believe that sampling is involved.
For example, 500,000 actuations (i.e. ‘the population’) might get 5,000 test actuations (i.e. ‘the sample’). Then if, say, 100 of those tests (2%) fall outside limits, the margin of error at a 95% confidence level is 0.39%, meaning that over the full 500,000 the failure rate would be somewhere between 1.61% and 2.39%. Whether that is acceptable to QC or not, I have no idea!
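(For anyone who wants to check that arithmetic, here is a quick Python sketch of the standard margin-of-error calculation for a sample proportion; the figures are the illustrative ones above, not real QC data.)

```python
import math

# Illustrative sampling arithmetic only - not real manufacturer QC data.
sample = 5_000          # test actuations (the sample)
failures = 100          # tests that fell outside limits

p = failures / sample                          # observed failure rate: 0.02 (2%)
z = 1.96                                       # z-score for a 95% confidence level
margin = z * math.sqrt(p * (1 - p) / sample)   # margin of error for a proportion

print(f"failure rate: {p:.2%} +/- {margin:.2%}")
# -> failure rate: 2.00% +/- 0.39%, i.e. roughly 1.61%..2.39% across all 500,000
```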
A “failure” does not always mean that the shutter fails to click. For example, the manufacturer might set limits of ±10% on shutter period (about ±0.14 EV) in the factory, but if after purchase the shutter time drifted out by 20%, that would be about 0.26 EV, still well within most photographers’ tolerance.
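(The conversion from a timing error to EV is just a base-2 logarithm; a minimal sketch:)

```python
import math

def ev_error(relative_error):
    """EV difference for a shutter time off by the given fraction (0.10 = +10%)."""
    return math.log2(1 + relative_error)

print(f"+10% -> {ev_error(0.10):.2f} EV")   # about 0.14 EV
print(f"+20% -> {ev_error(0.20):.2f} EV")   # about 0.26 EV
```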
Those able to look at raw metadata are often surprised by the difference between what showed in the viewfinder and what the actual shutter time was. Well, certainly on my early Sigma DSLR …
Now, “SHUTTER” might be just a value from a floating-point geometric progression used to drive the shutter, while “SH_DESC” is of course text, i.e. what you see in the viewfinder and on the review screen on the monitor.
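Purely as an illustration (the actual encoding of these fields is camera-specific, and this mapping is my assumption, not documented Sigma behaviour), the displayed speed and the commanded speed need not be the same number:

```python
# Hypothetical illustration: nominal speeds shown to the user (SH_DESC-style text)
# versus values from an exact power-of-two progression (SHUTTER-style numbers).
nominal = {"1/125": -7, "1/250": -8, "1/500": -9}   # stops relative to 1 second

for desc, stop in nominal.items():
    actual = 2.0 ** stop                            # e.g. 1/128 s rather than 1/125 s
    print(f"{desc:>6} displayed -> {actual:.6f} s commanded")
```

Many shutter-speed series are implemented as exact powers of two like this, so a small mismatch between the displayed text and the recorded value is expected even before any mechanical drift.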
@ChristopherPerez I don’t remember the specifics, but it can be done through certain methods that may or may not brick the camera. It certainly depends on the manufacturer and model.
Another possibility is when a camera has been powered down for a long time and has a dead backup battery.
Possibly, for a camera used by professionals. But an enthusiast who does not use a camera daily may not take more than 10k–20k images a year, so you see plenty of cameras with shutter counts below 40k. For such a person, a camera rated for 400k shutter actuations will be good for many years.
Also, note that the video you linked talks about DSLRs, which are a completely different beast: MILCs do not flip a mirror, so the count matters much less. MPB even claims that it is unreliable and totally irrelevant for MILCs; treating it as critical is probably a holdover from the DSLR era.
Free and public models trained on free and public data seem fair, and that is the best outcome we can expect out of this. The cat is out of the bag; the best we can do is make the technology as free and open to as many people as possible, contrary to what OpenAI is doing.
I believe she’s saying this for Meta’s own gain, but the idea itself is not bad.
That said, I believe any reasonably “old” cultural item (40/50+ years, or less if the creator has already benefited well financially from it) should be open to everyone, regardless of whether it is used to train an AI or not.