Ethics, not morals, but yes: this is a core part of "alignment," i.e., making sure the machine wants the same things we do. It turns out alignment is really hard because ethics is really hard. The classic AI doomsday story is an AI that takes utilitarianism as its highest end goal and follows it to the extreme ("the best way to save humanity is to destroy it"); that's an ethical framework being used to justify genocide.
So the shorter answer is: "Yes, but ethics is hard." I really like Robert Miles' videos on this topic; here's one to get you started: https://www.youtube.com/watch?v=ZeecOKBus3Q
And here's another closely related one to watch afterward: https://www.youtube.com/watch?v=hEUO6pjwFOo