Self‐driving cars hold out the promise of being much safer than conventional cars. Yet they cannot be 100% safe. Accordingly, we need to think about who should be held responsible when self‐driving cars crash and people are injured or killed. We also need to examine what new ethical obligations the safety potential of self‐driving cars might create for car users. The article first considers what lessons might be learned from the growing legal literature on responsibility for crashes involving self‐driving cars. Next, worries about responsibility gaps and retribution gaps from the philosophical literature are introduced. This leads to a discussion of whether self‐driving cars are a kind of agent that acts independently of human agents. It is suggested that their apparent agency is better analyzed in terms of human–robot collaborations, within which humans play the most important roles. The next topic is the idea that the safety potential of self‐driving cars might create a duty either to switch to self‐driving cars or to seek means of making conventional cars safer. Lastly, there is a short discussion of ethical issues related to safe human–robot coordination within mixed traffic featuring both self‐driving cars and conventional cars.