This question has been exercising the minds of science fiction writers for nearly seventy years!
Most people will point to Isaac Asimov and his Three Laws of Robotics. (Asimov himself claimed that there were enough loopholes in the Three Laws to keep him profitably selling stories for the next forty years…)

The Will Smith film based on Asimov’s robot stories, I, Robot, posed a perfect example of the sort of conflict you have in mind: something unethical happening for ethical reasons. The Will Smith character had a major hangup about robots because he had been involved in an accident in which two cars ended up sinking in a river. A robot went into the river to save the humans in the cars, as the First Law required. But the robot weighed up the likelihood of saving both humans and decided that it was not possible, so it prioritised one human based on best chance of survival and the saved human’s utility to society. It saved the Will Smith character, a policeman, in preference to the other human, a child, despite Smith ordering it to save the child (the Second Law trumped by the First).
Interestingly, the film developed the robots’ motivations to the point where they were prepared to restrict human freedom because humans do things to themselves that are harmful. This echoed the work of another classic science fiction writer, Jack Williamson, whose “Humanoids” had only one directive: “To serve and obey, and guard men from harm”. Taken to its logical conclusion, that directive led the Humanoids to keep the entire population under chemical lockdown for their own good.
You ask, “Should we really expect AIs to have any ethics at all?” Given that for thirty or forty years we have been told that the role of business is wealth creation, and that ethics has no role, or at best a secondary one, in maximising shareholder value, the answer is sadly clear.