John Harris discusses the problem of other minds, not as it relates to other human minds, but as it relates to artificial intelligences. He also discusses what might be called bilateral mind-reading: humans trying to read the minds of artificial intelligences, and artificial intelligences trying to read the minds of humans. Lastly, Harris considers whether superintelligent AI, if it could be created, should be afforded moral consideration, and how we might convince a superintelligent AI that we ourselves deserve to be treated with moral consideration. In this commentary, I take up the issues Harris raises. I focus specifically on robots (rather than AI in general), and I set aside future superintelligent AI in favor of more limited forms of AI. I argue that the human tendency to attribute minds even to robots with very limited AI, and the question of whether such robots should be given moral consideration, are more pressing issues than those Harris discusses, though I agree with him that the potential for superintelligent AI is a fascinating topic for speculation.