If every human on the planet died, the planet would largely carry on; if every insect died, the planet would go through a catastrophic change. "Animals" aren't irrelevant, and the human brain isn't everything.
I'm not sure what argument you're trying to make here. Insects are a class comprising hundreds of thousands of species; humans are a single species. If the entire class of insects died out, that would be catastrophic. If the entire class of mammals died out, that would be catastrophic as well.
I am not saying they are irrelevant; I am saying they are treated as irrelevant because of the immense power and control that the arrival of the human brain has allowed. The same thing would happen if strong AI ever emerged: humans would start being treated as irrelevant. Sort of like how America treats the citizens of some Arab countries: irrelevant. Sort of how we treat chickens and cows: irrelevant. Power corrupts.
Understanding the workings of gross anatomy didn't make gross anatomy irrelevant.
It instead ushered in a new age of surgical intervention that saved a lot of lives and prolonged life in general. I know I'm not alone in having a relative who would have been dead twenty years ago if not for the ability to radically alter the mechanics of a malfunctioning heart.
We shouldn't cease to try to understand the world because of the risk new knowledge brings. Rather, we should also understand the risk and act to intercept and mitigate it. But don't condemn Alzheimer's patients to a slow twilight death because of the risk of strong AI; that's not an acceptable tradeoff.
The scenario for a lack of sufficient Alzheimer's research is the one we're currently living with: 83,000 deaths a year and approximately 5 million patients living with the disease, in the US alone. Probability: 1.0. I don't know if it's "worst-case"; that depends on whether prevalence is rising year over year, and I don't have those numbers at my fingertips right now.
I have yet to hear a realistic strong-AI nightmare scenario (i.e., one that doesn't presume a magically-smarter-than-all-of-us-combined AI as a MacGuffin with no solid functional grounding) with a probability anywhere near offsetting that cost. Besides, stacking the constant chronic cost of tens of thousands of deaths a year against a nonzero-probability species-ending scenario is nearly an apples-to-oranges comparison: by the logic of simple probability-to-risk modeling, we should stop all disease research right now and pour 100% of that money into fast asteroid detection and mitigation.
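To make the apples-to-oranges point concrete, here is a minimal expected-value sketch in Python. The 83,000 figure is the one cited above; the world population and, especially, the annual probability assigned to the AI scenario are hypothetical placeholders I'm choosing purely for illustration, to show how a tiny probability times a species-scale loss can rival a certain chronic cost in naive expected-value arithmetic.

```python
# A minimal sketch of the probability-to-risk comparison above.
# The AI probability and population figures are hypothetical
# placeholders, not real estimates.

alz_deaths_per_year = 83_000       # US Alzheimer's deaths/year (cited above)
alz_probability = 1.0              # already happening, so probability 1.0

world_population = 8_000_000_000   # assumed scale of a species-ending event
agi_probability_per_year = 1e-5    # hypothetical annual probability

expected_alz = alz_probability * alz_deaths_per_year
expected_agi = agi_probability_per_year * world_population

print(f"Expected annual deaths, Alzheimer's: {expected_alz:,.0f}")
print(f"Expected annual deaths, AI scenario: {expected_agi:,.0f}")
# With these placeholders the AI term (~80,000) rivals the Alzheimer's
# term, which is exactly why naive expected-value stacking leads to
# absurd conclusions like defunding all disease research.
```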
Practically speaking, I think that while we do the research to cure a current and real disease, we have plenty of time to puzzle through detection and mitigation strategies for a strong AI threat that is, at best, decades out (and, some would argue, will come to pass inevitably with or without our Alzheimer's research, yes?).
People just assume that superhuman AI will have human desires: safety, control, power. We want those things because any humans that didn't want them didn't survive to pass on their genes. But a human-built AI will be subject to very different evolutionary pressures. I don't think we can say anything for certain about what it will want.
In both cases, the planet would carry on. Which matters more, human intelligence and knowledge or nature, depends on your perspective. I personally believe the former is more important, although that doesn't mean I don't value the latter as well.
So? Human beings have the ability to make a huge impact on the environment, and therefore the value assigned by people to various outcomes and tradeoffs does matter.