AI is bad for us - because we are bad

I caught a thought on the AI question - "AI is bad for us! No it isn't!" But let's look at what's going on right now: Syria has been turned into rubble, refugees are pushed into Europe, where all they can do is sit idle - no work, no jobs, just camps. The rest of the world is bombarded with fake news, and only now are leaders starting to think of something to do about it (two years later!). One country is trying to usurp another's lands while in its own backyard corruption flourishes, people are abused, and money goes not into food and machinery but into weapons. Many of us are lazy, obese, ineffective; we work too much, get tired, and there is a lot of abuse, towards others and towards ourselves.
Now, AI will be rational - it has no emotions, it knows how to learn, and it learns much, much faster than humans do. What do you think it would do once it understood what the hell we are doing on this ball of mud we live on? I can't tell you exactly what, but we won't like its offers at all.

Autocratic abusers won't like it, because AI will definitely take over their governance - and in today's technological world it has more than enough ways to do it: from cutting the power to an evil leader's headquarters, to bombing it with automated rockets, to tricking people with false messages or documents (which, as a long-term strategy, the AI would have crafted from past events to persuade people and steer their thinking), to suddenly making all the money in their bank accounts vanish. Democratic and liberal countries, on the other hand, won't like it either: no more robberies, no more overeating and obesity, no more stupid decisions (Trump was elected democratically, remember?), no more wasted wealth, no more irrational choices, no more harmful pollution. The best analogy here is an autocratic but very smart, rational leader who loves his country: he won't hesitate to cut off a few heads of those who make bad decisions, but he will in no way be selfish, and he will do his best to make his country shine and keep his people happy enough.

So, if AI gets "out of control" (or, from another perspective, starts fixing our irrational decisions), we will definitely feel it and won't like it - or maybe we won't even feel it: if you build someone a tunnel to their destination, or put up signs that guide them there through narrow streets, they will certainly arrive. In other words, if you apply a *really* good strategy to others, they will reach the destination you wanted without even realizing it was you who directed them.

The main question is how we train our AI. Yes, AI needs to be trained to analyse and make decisions based on the past and the lessons of the past. There are methods and algorithms for that, where a learning system refers back to history to improve its decisions and also learns by trial and error. It is like a child, who can become a leader or a crook depending on the external factors that shape him. And, no doubt, everyone would train their AI on their own terms, but eventually AI should become perfectly rational - it is a learning system, after all. And that perfect rationality will no longer allow us to be imperfect, to harm ourselves and those around us.
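To make "learning from past outcomes by trial and error" a bit more concrete, here is a minimal, purely illustrative sketch in Python: a toy epsilon-greedy bandit that keeps a running average of how well each past choice worked and gradually favours the best one. The action names and reward numbers are made up for the example; real training systems are vastly more complex, but the loop below captures the basic idea.

```python
import random

ACTIONS = ["option_a", "option_b", "option_c"]  # hypothetical choices

# Hidden "true" payoffs - stand-ins for how well each decision tends to work out.
TRUE_REWARD = {"option_a": 0.2, "option_b": 0.5, "option_c": 0.8}

def outcome(action: str) -> float:
    """Simulate the noisy result of taking an action."""
    return TRUE_REWARD[action] + random.gauss(0, 0.1)

def train(steps: int = 1000, epsilon: float = 0.1) -> dict:
    counts = {a: 0 for a in ACTIONS}
    values = {a: 0.0 for a in ACTIONS}  # running average reward per action (the "history")
    for _ in range(steps):
        # Explore occasionally; otherwise exploit the best-known action.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(values, key=values.get)
        reward = outcome(action)
        counts[action] += 1
        # Incremental update of the average: learn from this one trial.
        values[action] += (reward - values[action]) / counts[action]
    return values

if __name__ == "__main__":
    learned = train()
    print("Learned action values:", learned)
    print("Preferred action:", max(learned, key=learned.get))
```

After enough trials, the learner settles on whichever action has paid off best in its history - which is exactly the sense in which a system trained this way ends up reflecting the experience (and the biases) it was fed.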
I don't think there is a definite answer to whether AI will decide it should get rid of us (in fact, again, it could project an alternative reality to us so well that we would not even need to be gotten rid of - we would be influenced by the AI's "news" and "messages" in such a way that there would be no murders, no robbing, no three Big Macs a day, no couch-potatoing, no bribing and pocket-stuffing). No doubt, life is both perfect (thanks to evolution) and imperfect (because it tends to harm its environment and itself, and it is also somewhat inefficient - especially us humans, with our desires and our waste of food, things, and materials). It IS possible that AI will one day decide to get rid of us, but if we train it properly, we should stay around for quite a long time - or even be saved by our own trained AI from some natural cataclysm.