11/12/2025
We don’t get to abandon our ethics because we are told something is inevitable.
As we’ve discussed the (many) problems with the new AI dog training app (environmental degradation, plagiarism, data security, etc.), I’ve noticed a theme:
AI is here to stay; it’s either get on board or be left behind.
First of all, citation needed.
We don’t know that.
We do know that a lot of extremely wealthy people have sunk ungodly amounts of money into AI, and need it to work out.
We know that there is a huge hype and normalization campaign around it.
We don’t know what the future holds, and we all have an active role in creating it.
And secondly, then leave me behind.
We don’t get to just abandon our ethics because some tech bro (or wannabe tech bro) creates a new tool.
I think plagiarism is unethical. I think the current way AI affects the environment is unethical. I think undermining small businesses and new trainers is unethical. I think offering people subpar support (and this WILL be subpar) just because it’s cheap is unethical.
So I cannot support this, regardless of how inevitable it may (or may not) be.
AI is not ontologically evil.
Tech bros are, though, and I don’t understand why some animal behaviour professionals are trying to emulate them.