Reflection superposition, arising from the entanglement of transmission and reflection layers in a single image, remains a long-standing challenge in computational photography and computer vision due to its inherently ill-posed nature. This work presents a comprehensive framework that advances both the theoretical understanding and practical effectiveness of single-image reflection separation. We begin by surveying decades of existing methods and organizing them into a systematic taxonomy based on input modalities, prior constraints, and architectural paradigms. Motivated by the limitations of existing single- and dual-branch models, we introduce a \emph{generalized dual-stream interactive architecture} that enables multi-scale, high-dimensional feature exchange between the transmission and reflection pathways. This design unifies activation-based, gate-based, and attention-based interaction mechanisms and is compatible with both CNN and Transformer backbones. Furthermore, we challenge the conventional linear composition model in the sRGB space and introduce a learnable nonlinear interaction term that more accurately captures real-world layer blending and substantially improves decomposition fidelity. Extensive experiments on multiple real-world benchmarks demonstrate that our method achieves state-of-the-art performance and strong generalization.
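To make the gate-based variant of the dual-stream interaction concrete, the following is a minimal PyTorch-style sketch. It is an illustrative assumption, not the exact architecture: the class name \texttt{GatedInteraction}, the two-gate layout, and the tensor names \texttt{feat\_t}/\texttt{feat\_r} are placeholders introduced here for exposition.
\begin{verbatim}
import torch
import torch.nn as nn

class GatedInteraction(nn.Module):
    """Illustrative gate-based exchange between transmission (T) and
    reflection (R) feature streams; names and design are assumptions."""
    def __init__(self, channels):
        super().__init__()
        # Each gate inspects both streams and decides how much of the
        # other stream's features to inject into its own stream.
        self.gate_t = nn.Sequential(nn.Conv2d(2 * channels, channels, 1),
                                    nn.Sigmoid())
        self.gate_r = nn.Sequential(nn.Conv2d(2 * channels, channels, 1),
                                    nn.Sigmoid())

    def forward(self, feat_t, feat_r):
        fused = torch.cat([feat_t, feat_r], dim=1)
        g_t, g_r = self.gate_t(fused), self.gate_r(fused)
        # Residual cross-stream injection, modulated by the learned gates.
        out_t = feat_t + g_t * feat_r
        out_r = feat_r + g_r * feat_t
        return out_t, out_r
\end{verbatim}
Analogous blocks can be built by replacing the sigmoid gates with activation maps or cross-attention, which is the sense in which the architecture unifies the three interaction mechanisms.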
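As an illustrative sketch of the composition-model change, the conventional linear model and its nonlinear generalization can be contrasted as
\[
  I = T + R
  \quad\longrightarrow\quad
  I = T + R + \Phi_\theta(T, R),
\]
where $I$ is the observed sRGB mixture, $T$ and $R$ are the transmission and reflection layers, and $\Phi_\theta$ stands for the learnable nonlinear interaction term; the symbol $\Phi_\theta$ is a placeholder used here, not necessarily the paper's notation.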