Reinforcement learning (RL) has been widely adopted for intelligent decision-making in embodied agents due to its effective trial-and-error learning capabilities. However, most RL methods overlook the causal relationships among states and actions during policy exploration and lack the human-like ability to distinguish signal from noise and reason over important abstractions, resulting in poor sample efficiency. To address this issue, we propose causal action empowerment (CAE), a novel method designed to improve sample efficiency in policy learning for embodied agents. CAE identifies and leverages causal relationships among states, actions, and rewards to extract controllable state variables and reweight actions, prioritizing high-impact behaviors. Moreover, by integrating a causality-aware empowerment term, CAE encourages the agent to execute causally informed behaviors, boosting controllability and thereby enabling more efficient exploration in complex embodied environments. Together, these two improvements allow CAE to bridge the gap between local causal discovery and global causal empowerment. To comprehensively evaluate the effectiveness of CAE, we conduct extensive experiments across 25 tasks in 5 diverse embodied environments, encompassing both locomotion and manipulation skill learning under dense and sparse reward settings. Experimental results demonstrate that CAE consistently outperforms existing methods across this wide range of scenarios, offering a promising avenue for improving sample efficiency in RL.
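To make the two mechanisms named above concrete, the following is a minimal, illustrative sketch of (1) reweighting action dimensions by estimated causal influence and (2) adding a causality-aware empowerment bonus to the task reward. All function names, the softmax weighting, the variance-based empowerment proxy, and the `beta` coefficient are assumptions chosen for illustration; the paper's actual estimators and objective may differ.

```python
# Hypothetical sketch of CAE's two ingredients; not the paper's implementation.
import numpy as np

def causal_action_weights(influence_scores, temperature=1.0):
    """Map per-action-dimension causal influence scores to weights.

    influence_scores[i] is assumed to measure how strongly action
    dimension i causally affects the controllable state variables
    (e.g., an output of a local causal-discovery step).
    """
    scores = np.asarray(influence_scores, dtype=float)
    # Softmax over dimensions: higher-impact dimensions receive larger weights.
    z = np.exp(scores / temperature)
    return z / z.sum()

def empowerment_bonus(next_state_samples, controllable_mask):
    """Crude empowerment proxy: variance of the controllable state variables
    across next-state samples induced by different candidate actions.
    Higher variance ~ the agent's actions exert more control.
    """
    controllable = next_state_samples[:, controllable_mask]
    return float(controllable.var(axis=0).sum())

def shaped_reward(env_reward, bonus, beta=0.1):
    """Total reward = task reward + beta * causality-aware empowerment bonus."""
    return env_reward + beta * bonus

# Toy usage: 3-dim actions, 4-dim states, state dims 0 and 2 are controllable.
rng = np.random.default_rng(0)
weights = causal_action_weights([2.0, 0.1, 1.2])      # prioritize dims 0 and 2
action = weights * rng.uniform(-1, 1, size=3)          # causally reweighted action
next_states = rng.normal(size=(8, 4))                  # samples from a dynamics model
mask = np.array([True, False, True, False])
r = shaped_reward(env_reward=1.0, bonus=empowerment_bonus(next_states, mask))
print(weights, r)
```

Under these assumptions, the weights bias exploration toward action dimensions with high causal impact, while the bonus rewards visiting states where the agent's actions meaningfully change the controllable part of the state.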