As the risks of artificial intelligence (AI) attract growing public attention, policymakers turn to academia for guidance on which problems to prioritize. While some argue they should first address the prospect of a global catastrophe (AI safety), others believe AI's current social impact is more urgent (AI ethics). This has led some to worry that one cause "diverts the public's attention" from the other. In this article, I sketch the psychological landscape in which such concerns arise as a logical reaction, but argue that in the case of AI risks they are misplaced. I ran a survey in which students were asked about their attitudes toward the issues commonly labeled "AI ethics" and "AI safety." The results suggest, first, that when the salience of AI safety is experimentally increased, respondents report higher support for solving problems related to AI ethics; second, that levels of concern for AI safety and AI ethics correlate positively. In terms of public-facing communication, then, AI safety and AI ethics appear to be memetic allies rather than rivals.