Social robots may offer a solution to various societal challenges (e.g. the aging society, unhealthy lifestyles, sustainability). In this contribution, we argue that a crucial aspect of the interaction between social robots and humans is that social robots are always created, to some extent, to influence the human: persuasive robots may (very powerfully) persuade human agents to behave in specific ways by giving information, providing feedback, and taking over actions. In doing so, they may serve social values (e.g. sustainability) or goals of the user (e.g. therapy adherence), but they may also serve the goals of their owners (e.g. selling products). The success of persuasive robots depends on the integration of sound technology, effective persuasive principles, and careful attention to ethical considerations. The current chapter brings together psychological and ethical expertise to investigate how persuasive robots can influence human behaviour and thinking in a way that is (1) morally acceptable (focusing on user autonomy, with deontological theories as the starting point for ethical evaluation) and (2) psychologically effective (focusing on the effectiveness of persuasive strategies). These insights are combined in a case study analysing the moral acceptability of persuasive strategies that a persuasive robot might employ while serving as a clothing store clerk.