

Law enforcement AI is a terrible idea and it doesn’t matter whether you feed it “false facts” or not. There’s enough bias in law enforcement that the data is essentially always poisoned.
Maybe in some cases. But Google support asked me to provide a video for a very simple, clear-cut issue we were having. We have a contract with them, and we personally raised the issue with a Google employee during a call. There was no concern about AI-generated bullshit, but they still wouldn't respond without a video. So maybe there's more to this trend than what you're theorizing.
but they’re a different kind of hassle
Can you elaborate on this? I thought they would be straight up better to work with and I was thinking of buying one in the future. Is it just about the drying up issue you mentioned or are there other drawbacks?
“Gender” means nothing without context. By a MAGA's definition of gender, this policy doesn't protect trans people, for example. We don't know how this rule will be interpreted in practice. Even setting aside the intent behind making this change, it is objectively a weaker guarantee of protection than what we had with “gender identity and expression”.