There are a few methods for doing this. Most of them rely on taking the stars from the broadband data (typically the red channel) and putting them into the Ha data. I particularly like this method. Josh's idea of doing starless tone mapping will probably get you the best results, though, since you can play with the stars separately.
I'll add some more comments on the method in the link I provided, since that page is lacking many details on how to actually do everything.

I didn't find it necessary to make multiple passes of continuum maps and pseudo-red images if I got a good pass of noise reduction on the continuum map. For that I used the Ha channel, permanently stretched with the STF auto-stretch settings, as local support in TGVDenoise (with the default settings for shadows, highlights, and midtones).

When generating the continuum map you also need to turn on the rescale option in PixelMath, otherwise you'll get clipped data. It's either that or manually scale the data: for example, C = B/N becomes C = m*B/N + b, and the inverse B = N*C becomes B = N*(C - b)/m. You need to pick values for m and b such that no pixels are clipped, which you can check with the Statistics process. For the last data set I worked on I chose m = 0.1 and b = 0.1. The nice thing about the manual scaling is that the resulting image has the same intensity profile as the original, whereas if you just use the rescale option you'll have to redo some kind of linear fit.
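To make the manual scaling concrete, here's a minimal numpy sketch of the forward and inverse expressions above. The function names and the round-trip check are my own illustration, not part of the PixInsight workflow; B is the (linear-fitted) broadband channel, N the Ha channel, and m, b the scaling constants from the post.

```python
import numpy as np

def continuum_map(B, N, m=0.1, b=0.1):
    """Scaled continuum map: C = m*B/N + b, chosen so C stays unclipped."""
    return m * B / N + b

def recover_broadband(C, N, m=0.1, b=0.1):
    """Invert the scaling: B = N*(C - b)/m recovers the original intensities."""
    return N * (C - b) / m

# Round-trip check: recovering B from C reproduces the original exactly,
# which is why no linear fit needs to be redone afterwards.
rng = np.random.default_rng(0)
B = rng.uniform(0.1, 0.9, size=(8, 8))
N = rng.uniform(0.1, 0.9, size=(8, 8))
C = continuum_map(B, N)
print(np.allclose(recover_broadband(C, N), B))  # → True

# In PixInsight you'd check min/max with the Statistics process instead:
print(float(C.min()), float(C.max()))
```

The m = 0.1, b = 0.1 values are just the ones that happened to work for one data set; you'd inspect the min/max of C and adjust them until nothing falls outside [0, 1].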
In the past I had used something like this: a*max(Ha, R) + ~a*((Ha + R)/2), where a is the blending factor between the max and average functions and ~a is PixelMath shorthand for 1 - a (this assumes a LinearFit was run beforehand so that the intensities of the two images match up). It works reasonably well, but some of the noise from the red channel shows up in weird ways due to the max function.
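That older blend can be sketched in numpy as follows. The function name is mine; Ha and R are assumed to be already LinearFit-matched arrays, and ~a from the PixelMath expression is written out as (1 - a).

```python
import numpy as np

def blend_ha_red(Ha, R, a=0.5):
    """a*max(Ha, R) + (1-a)*((Ha+R)/2): blend between per-pixel max and average."""
    return a * np.maximum(Ha, R) + (1.0 - a) * (Ha + R) / 2.0

# a=1 is the pure max (strongest Ha signal, but red-channel noise leaks in
# wherever R happens to exceed Ha); a=0 is a plain average.
Ha = np.array([0.2, 0.6, 0.4])
R  = np.array([0.3, 0.1, 0.4])
print(blend_ha_red(Ha, R, a=1.0).tolist())  # → [0.3, 0.6, 0.4]
```

The noise issue the post mentions falls out of the a=1 case: max() switches between sources pixel by pixel, so isolated noisy red pixels that beat the Ha value get copied through verbatim, while the average term smooths them.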
Edited by David Ault, 17 June 2015 - 10:29 AM.