Trust in AI healthcare technologies is often treated as an attainable end-state enforceable by regulation, in which developers can claim their product to be 'trusted'. This article shows the limits of that approach, arguing instead for a processual understanding in which trust is dynamic and forever a state 'to come'. The argument is developed by considering several types of trust relations amongst key stakeholders in AI healthcare, including cases where developers distrust users. Drawing on political theory and Coactive Design, the article argues that trust relations, understood as a negotiation, are integral to a well-functioning design process, one that supports not only the moral acceptability of AI healthcare technologies but also their innovation and efficacy.