In this paper we raise the question of whether technological artifacts can, properly speaking, be trusted or said to be trustworthy. First, we set out some prevalent accounts of trust and trustworthiness and explain how they compare with the engineer’s notion of reliability. We distinguish between pure rational-choice accounts of trust, which do not differ in principle from mere judgments of reliability, and what we call “motivation-attributing” accounts of trust, which attribute specific motivations to trustworthy entities. We then consider some examples of technological entities that are, at first glance, best suited to serve as objects of trust: intelligent systems that interact with users, and complex socio-technical systems. We conclude that the motivation-attributing concept of trustworthiness cannot be straightforwardly applied to these entities: any applicable notion of trustworthy technology would have to depart significantly from the full-blown notion of trustworthiness associated with interpersonal trust.