"Even if the AI only had access to its own computer operating system...

The thing about violence and war is that people always think they will vanquish their enemies with new kinds of weapons and tactics available only to them. In this case, the weapon is "AI".

It never works like that. AI weaponry makes an army maybe 3% stronger. War is always attrition: disgusting, slow, terrifying, tormenting, chaotic, grinding. There are no ultimate weapons that destroy your enemies, unlike in some stupid kids' anime. Your enemies are basically mirror images of you anyway, and you rarely have any real advantage over them.

You can't. The whole point of superintelligence is that you're assuming the AI is already smarter than you are, so for any idea you come up with, it can probably come up with a better counter-idea to defeat it.

If anything, it's the opposite: people are way too generous in assessing the scope of their own "intelligence" and act like any artificial system you point to has 0% of what we have, no matter how much it can do, just because it uses well-defined methods for learning from feedback on actual evidence, as though that somehow means it doesn't count.

>implying
Almost any AI is going to be sandboxed properly, but you can't control fans through userspace.
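"Sandboxed properly" can mean a lot of things; a minimal sketch of the weakest form is just running untrusted code in a child process with hard kernel-enforced resource caps (POSIX only; `run_sandboxed` is a hypothetical helper, not any real sandboxing API, and a real sandbox would also need filesystem and network isolation):

```python
import resource
import subprocess
import sys

def run_sandboxed(code: str, timeout: float = 2.0) -> str:
    """Run untrusted Python code in a child process with hard resource caps."""
    def limit():
        # Cap CPU time to 1 second and address space to 256 MB;
        # the kernel kills or starves the child if it exceeds these.
        resource.setrlimit(resource.RLIMIT_CPU, (1, 1))
        resource.setrlimit(resource.RLIMIT_AS, (256 * 2**20, 256 * 2**20))

    proc = subprocess.run(
        [sys.executable, "-c", code],
        capture_output=True, text=True, timeout=timeout,
        preexec_fn=limit,  # applied in the child before exec
    )
    return proc.stdout

print(run_sandboxed("print(2 + 2)"))
```

Note the sandboxed process also has no privileged access to hardware interfaces, which is the point about fan control: an unprivileged userspace process can't write to the kernel's hardware-monitoring controls.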

>if it's goals differed from mankind's
It's goal is set by mankind. The issue would be if we didn't fully understand the nature of our goals.

>Another easy problem to solve is "internet viruses". Just program an operating system that doesn't allow any virus program to run. Easy solution yet again.
This is perfectly reasonable and already exists: locked-down systems that refuse to run anything but signed, allowlisted code work exactly this way.
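The core of such an OS policy is default-deny execution: only code whose cryptographic hash appears on an approved list may run. A minimal sketch (the `APPROVED` set and `may_execute` helper are hypothetical, for illustration only):

```python
import hashlib

# Hypothetical allowlist of SHA-256 digests of approved binaries.
APPROVED = {
    hashlib.sha256(b"trusted program bytes").hexdigest(),
}

def may_execute(program_bytes: bytes) -> bool:
    """Default-deny: only run code whose hash is on the allowlist."""
    return hashlib.sha256(program_bytes).hexdigest() in APPROVED

# A known binary passes; anything unrecognized (e.g. a virus) is refused.
```

Real systems implement this with signatures rather than raw hashes so approved software can be updated without re-listing every build, but the default-deny principle is the same.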