I want to do an imaging project capturing over multiple nights, and I need to tear down my setup each night.
How closely does the camera orientation need to match from one session to the next for the integration step to align the images correctly?
I was planning on making a small tape mark and eyeballing the orientation against it. Do I need ultra-high precision here, or is lining up a tape mark good enough?
PixInsight will register the frames and, if you check "Distortion correction", even correct for different lenses being used. The main problem is that you'll have to crop away the areas not covered by every frame in the stack (throwing away data and field of view), because those regions are stacked from fewer frames and end up noisier than the rest.
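To get a feel for how much frame you'd lose to that crop, here's a rough sketch (not from PixInsight; the frame dimensions are made up, and it ignores any translation offset) that estimates the surviving fraction of a frame when the field is rotated by a small angle between sessions, by grid-sampling the frame and checking which points still land inside it after rotation:

```python
import numpy as np

def overlap_fraction(width, height, angle_deg, n=400):
    """Fraction of a width x height frame that still overlaps itself
    after the field rotates by angle_deg about the frame center."""
    xs = np.linspace(-width / 2, width / 2, n)
    ys = np.linspace(-height / 2, height / 2, n)
    X, Y = np.meshgrid(xs, ys)
    t = np.radians(angle_deg)
    # Rotate each sample point about the center, then test whether it
    # is still inside the original frame boundary.
    Xr = X * np.cos(t) - Y * np.sin(t)
    Yr = X * np.sin(t) + Y * np.cos(t)
    inside = (np.abs(Xr) <= width / 2) & (np.abs(Yr) <= height / 2)
    return inside.mean()

# Hypothetical 6000x4000 px sensor, half a degree of rotation error:
print(overlap_fraction(6000, 4000, 0.5))
```

For rotation errors of a degree or less the lost slivers are only on the order of a percent of the frame, which is why eyeballing a mark is workable as long as you accept a modest crop.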
I am in the same boat, as I have to set up and tear down my equipment for every session. I often shoot two different subjects on the same night, depending on the time available and/or when the first object disappears behind trees or houses. I frame every subject to my liking, so camera rotation is something I do often. The mechanical (and manual) camera rotator at the end of my telescope focuser has no graduated scale, so I have no quick way of judging how far I am from the rotation of the previous session.
My workaround is plate solving.
The suite I use (KStars/EKOS) has a command in the plate solver called "Load & Slew" (other programs have similar features — I know for sure AstroPhotography Tool has one, just under a different name). You load an image from the previous session, the software plate solves it, calculates the RA/Dec coordinates of its center, and slews you onto target within a chosen tolerance (I use 10 arcseconds).
The beauty of it is that it also reports the camera's rotation angle (in my case, expressed in degrees East of North). I note that angle immediately after the "Load & Slew" image is solved, before it gets overwritten by a newly solved image from the camera in the current session (otherwise the reported angle will be the new one). Then I solve a fresh frame, compare its angle to the noted one, manually rotate the camera, and keep re-solving the field of view until the new angle matches the old one as closely as I can manage.
With this method, I can get the image center within 10 arcseconds from session to session, and the rotation angle usually to within half a degree.
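For the curious: the rotation angle the solver reports comes from the WCS solution written into the solved image's FITS header. As a rough sketch of where the number comes from — assuming the standard FITS CD-matrix convention, and noting that sign conventions vary with image parity, so for matching two sessions only the *difference* between the two angles really matters — the position angle can be derived like this (the `cd_matrix` helper is hypothetical, just for building test inputs):

```python
import math

def rotation_deg_east_of_north(cd11, cd12, cd21, cd22):
    # Position angle of the detector's +Y axis, in degrees East of
    # North, from the CD matrix (CD1_1..CD2_2) of a plate-solved
    # FITS header. Parity flips can change the sign convention, but
    # session-to-session *differences* are unaffected.
    return math.degrees(math.atan2(cd12, cd22))

def cd_matrix(scale_deg, theta_deg):
    # Hypothetical helper: build a CD matrix for a given pixel scale
    # (degrees/pixel) and rotation angle, with RA mirrored as many
    # solvers report it.
    t = math.radians(theta_deg)
    return (-scale_deg * math.cos(t), scale_deg * math.sin(t),
            scale_deg * math.sin(t), scale_deg * math.cos(t))

# Two made-up sessions, rotated 12.0 and 12.5 degrees:
a1 = rotation_deg_east_of_north(*cd_matrix(3e-4, 12.0))
a2 = rotation_deg_east_of_north(*cd_matrix(3e-4, 12.5))
print(a2 - a1)  # rotation mismatch between the sessions, in degrees
```

In practice you never compute this yourself — EKOS (or APT, etc.) shows the angle directly — but it explains why the reported value is exactly reproducible from one session to the next.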