The key step in haze image synthesis with adversarial training is disentangling the feature involved only in haze synthesis, i.e., the style feature, from the feature representing the invariant semantic content, i.e., the content feature. Previous methods introduce a binary classifier during training to prevent domain membership from being inferred from the learned content feature, thereby separating style information from the content feature. However, we find that these methods cannot achieve complete content-style disentanglement: the entanglement of the flawed style feature with content information inevitably leads to inferior rendering of haze images. To address this issue, we propose a self-supervised style regression model with stochastic linear interpolation that suppresses content information in the style feature. Ablation experiments demonstrate the completeness of the disentanglement and its superiority in density-aware haze image synthesis. Moreover, the synthesized haze data are applied to test the generalization ability of vehicle detectors. A further study of the relation between haze density and detection performance shows that haze has a clear impact on the generalization ability of vehicle detectors and that the degree of performance degradation is linearly correlated with haze density, which in turn validates the effectiveness of the proposed method.
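To make the stochastic-linear-interpolation idea concrete, the following is a minimal sketch of how interpolating two style codes with a random coefficient yields a free self-supervised regression target. All names are illustrative assumptions, and the closed-form least-squares recovery stands in for a learned style regressor; this is not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def interpolate_styles(s_a, s_b, alpha):
    """Stochastic linear interpolation between two style codes.

    The randomly drawn coefficient alpha itself is the self-supervised
    target: a regressor trained to recover alpha from the mixed code is
    penalized whenever content information leaks into the style feature.
    """
    return alpha * s_a + (1.0 - alpha) * s_b

def recover_alpha(s_mix, s_a, s_b):
    """Closed-form stand-in for the learned style regressor: project
    (s_mix - s_b) onto the direction (s_a - s_b) to recover alpha."""
    d = s_a - s_b
    return float(np.dot(s_mix - s_b, d) / np.dot(d, d))

# Hypothetical style codes of a dense-haze and a light-haze image.
s_a = rng.standard_normal(8)
s_b = rng.standard_normal(8)
alpha = rng.uniform()                 # random mixing coefficient in [0, 1)
s_mix = interpolate_styles(s_a, s_b, alpha)
assert abs(recover_alpha(s_mix, s_a, s_b) - alpha) < 1e-9
```

In the adversarial training setting, the regression loss on alpha would be backpropagated into the style encoder, so that the style code only carries information (e.g., haze density) that varies smoothly along the interpolation path.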