Transparency policies that disclose when advice is AI-generated do not effectively mitigate the ethical risks of people following unethical AI recommendations, according to a study. Participants who were told that advice came from an AI did not adjust their behavior compared with those unaware of its source. This suggests that transparency alone may not be sufficient to counteract unethical behavior prompted by AI guidance.