Tony Hallas on Using a DSLR

This topic has been archived. This means that you cannot reply to this topic.
63 replies to this topic

#26 Ivo Jager

Ivo Jager

    Vendor ( Star Tools )

  • *****
  • Vendors
  • Posts: 400
  • Joined: 19 Mar 2011

Posted 17 November 2014 - 08:59 PM

 

Dithering is only effective at reducing fixed pattern noise; his color mottle is simply random shot noise transformed by the debayering. The 'Color Mottle' is never in the same place, so dithering isn't required to fix that. Tony's solution of using dithering is just a very roundabout way of fixing up his initial error of debayering his frames with Adobe Camera RAW. It's really just a less effective substitute for Bayer Drizzling the RAW frames (scroll down the full page); the latter procedure doesn't introduce the mottling in the first place.

 

Hi Ivo, great post, thanks for that.

I don't know if it's entirely accurate though to say that color mottle isn't in the same place.  As I understand it, color mottle is largely the result of dark thermal signal, which is a property of the sensor.  Sure, additional color mottle can be created by debayering prior to stacking, but most people avoid this.

Essentially, Tony was right to advocate for dithering, but it was overblown in his examples because of the debayering problem that he introduced in his workflow (using Adobe Camera RAW).  Dithering is still very useful in getting rid of color mottle regardless.

 

I'd happily stand corrected on this, but I don't think there is any reason why warm and/or hot pixels would be correlated; they should not result in mottling by themselves (mottling being defined as spots and blotches made up of multiple co-located pixels with roughly the same color). This 'mottling' only comes into existence after applying some types of filtering and debayering.

 

Here are two GIFs demonstrating that the mottling is *not* a fixed pattern:

 

First, GIF 1: notice that the hot pixels stay in the same place.

 

http://startools.org...lts/mottle1.gif

 

Now GIF 2; a median filter was applied, removing high frequency detail/noise and leaving low frequency detail and noise ('blotches' aka mottling). It is now abundantly clear that any perceived blotches appear in random locations. Dithering is not going to help one bit with those.

 

http://startools.org...lts/mottle2.gif

 

The sources are two CR2s (300s each at ISO 800) taken in sequence with a Canon T3i and converted with dcraw using the parameters -T -6 (TIFF, 16-bit).

 

Also notice that in GIF 1, the hot pixels are never single pixels (particularly not the blue and red hot pixels). This is due to dcraw's AHD debayering algorithm which (necessarily) bleeds noise into neighbouring pixels through the interpolation.
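For anyone who wants to run a similar check on their own frames, here is a rough sketch in Python. It stands in synthetic Poisson noise for the actual dcraw-converted subs, so it only illustrates the principle, not the exact frames behind the GIFs:

```python
# Sketch only: median-filter two independent noise-only frames and check
# whether the resulting low-frequency 'blotches' line up. With real data you
# would load two dcraw-converted TIFFs (dcraw -T -6) instead of generating
# synthetic Poisson noise.
import numpy as np
from scipy.ndimage import median_filter

rng = np.random.default_rng(0)

def blotch_mask(frame, size=7, sigma_cut=2.0):
    """Keep only the low-frequency structure, then flag the brighter blotches."""
    low = median_filter(frame, size=size)
    return low > low.mean() + sigma_cut * low.std()

# Two independent frames of pure shot noise (stand-ins for two 300s subs).
frame1 = rng.poisson(lam=100, size=(512, 512)).astype(float)
frame2 = rng.poisson(lam=100, size=(512, 512)).astype(float)

m1, m2 = blotch_mask(frame1), blotch_mask(frame2)
overlap = (m1 & m2).sum() / max(m1.sum(), 1)
print(f"blotch pixels: {m1.sum()} vs {m2.sum()}, overlap: {overlap:.1%}")
# A fixed pattern would overlap near 100%; random shot noise stays near chance.
```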


Edited by Ivo Jager, 17 November 2014 - 09:10 PM.


#27 Agnotio

Agnotio

    Ranger 4

  • -----
  • Posts: 387
  • Joined: 29 Aug 2008

Posted 18 November 2014 - 12:26 AM

Ivo, I think your example images above are good examples of shot (or photon) noise.  I agree that dithering will not help to reduce this noise.  However, dithering does do something.  Since it doesn't reduce random shot noise, it must be non-random pattern noise that it reduces.

 

The question, I guess, is whether pattern noise (e.g. from thermal signal) also looks like color mottling.


Edited by Agnotio, 18 November 2014 - 12:35 AM.


#28 Ivo Jager

Ivo Jager

    Vendor ( Star Tools )

  • *****
  • Vendors
  • Posts: 400
  • Joined: 19 Mar 2011

Posted 18 November 2014 - 05:25 PM



 

dithering does do something.  Since it doesn't reduce random shot noise, it must be non-random pattern noise that it reduces. 

Absolutely! And dithering between frames is one of two things I always recommend people do to improve their data. The other is taking flats.

 

 

The question is I guess whether pattern noise (e.g. from thermal signal), also looks like color mottling.

 

Pattern noise only comes into existence after stacking, where it is turned into a pattern if you don't dither (i.e. you won't be able to see it by looking at a single frame). Before that (i.e. in a single frame) it's simply a warm/hot pixel. As I showed in the example GIFs, the contributed 'mottling' due to warm/hot pixels is negligible *per frame* (at least at those exposure times and for that camera).

 

Regardless, what I'm trying to say is that 'mottling' is an artifact and not inherent to data acquisition, sensors, physics, or otherwise, and that Tony is 'solving' a problem of his own making (yes, you're right that we can make hot pixels, in the absence of shot noise, look like mottling by applying the right filter). I'm not disputing that dithering works - it works very well indeed!
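To make the distinction concrete, here is a toy sketch (synthetic numbers, no particular camera or stacking package) of how a fixed hot pixel survives an undithered average but is rejected once the frames are dithered and re-registered:

```python
# Toy model: a hot pixel at a fixed sensor coordinate in every sub. Averaged
# without dithering it survives as a fixed artifact; with dithering (random
# pointing offsets, frames re-registered before combining) it lands on a
# different sky pixel each time and a median combine rejects it.
import numpy as np

rng = np.random.default_rng(1)
n_subs, size, hot = 16, 64, (20, 30)          # hot pixel at sensor coords (20, 30)

def sub_frame():
    frame = rng.normal(100.0, 5.0, (size, size))   # sky background + random noise
    frame[hot] += 500.0                            # fixed-pattern defect (hot pixel)
    return frame

# No dithering: the defect sits on the same sky pixel in every sub.
undithered = np.mean([sub_frame() for _ in range(n_subs)], axis=0)

# Dithering: random offsets at capture, frames shifted back into register, so
# the defect moves around relative to the sky and the median throws it out.
registered = [np.roll(sub_frame(), tuple(rng.integers(-5, 6, 2)), axis=(0, 1))
              for _ in range(n_subs)]
dithered = np.median(registered, axis=0)

print("hot-pixel residual, no dither:", round(undithered.max() - 100.0, 1))  # ~500
print("hot-pixel residual, dithered :", round(dithered.max() - 100.0, 1))    # a few ADU
```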


Edited by Ivo Jager, 18 November 2014 - 05:39 PM.


#29 G. Hatfield

G. Hatfield

    Surveyor 1

  • *****
  • Posts: 1,572
  • Joined: 31 Jan 2009

Posted 19 November 2014 - 10:09 PM

Well if no one else is going to speak up, I will.  Ivo, it is rare to experience your level of arrogance.  We all know there is not just one way to process images.  I have been using Tony's methods for my Hutech modified Canon 6D for about a year now and I like the results.  I don't use darks; I don't use flats; I don't do a calibration; I use Camera Raw just as Tony described; I use Registar for registration and combination; I do a DDP in CCDStack and run deconvolution just as Adam Block describes in his LRGB method.  Then I go into Photoshop and PI and continue to do everything wrong in your, not so humble, opinion.  And it works.  It may not be perfect, but I'm proud of the result.  Sometimes a few compromises along the way are not so bad if you end up with a good result.  I'm not in this for perfection.  So Ivo, feel bad for me!  I have fallen into bad company!  But I like where I am....

 

George



#30 Ivo Jager

Ivo Jager

    Vendor ( Star Tools )

  • *****
  • Vendors
  • Posts: 400
  • Joined: 19 Mar 2011

Posted 19 November 2014 - 10:29 PM

Well if no one else is going to speak up, I will.  Ivo it is rare to experience your level of arrogance.  We all know there is not just one way to process images.  I have been using Tony's methods for my Hutech modified Canon 6D for about a year now and I like the results.  I don't use darks; I don't use flats; I don't do a calibration; I use Camera Raw just as Tony described; I use Registar for registration and combination; I do a DDP in CCDStack and run deconvolution just as Adam Block describes in his LRGB method.  Then I go into Photoshop and PI and continue to do everything wrong in your, no so humble, opinion.  And it works.   It may not be perfect, but I'm proud of the result.  Sometimes a few compromises along the way is not so bad if you end up with a good result.  I'm not in this for perfection.  So Ivo, feel bad for me!  I have fallen into bad company!  But I like where I am....

 

George

George, you misunderstand the nature of my (and others') objections to this 'method'. This is not a matter of aesthetics and not a matter of opinion. It's a matter of cold hard mathematics. Deconvolution makes no mathematical sense on non-linear data, nor does stacking by averaging of data that has been stretched (pure median stacking may work however, but loses its meaning again if you noise reduce first). Call me arrogant for calling Tony, you or anyone out on this if you must.

 

I'm happy for you that you like where you are. But for those who are not and want more from their hard-won signal (or those just starting out in AP), this misinformation is doing a lot of harm - it's a dead end.


Edited by Ivo Jager, 19 November 2014 - 10:31 PM.


#31 gdd

gdd

    Mercury-Atlas

  • -----
  • Posts: 2,550
  • Joined: 23 Nov 2005

Posted 20 November 2014 - 12:43 AM

Some mathematical operations can be done in any order and you get the same result in the end, maybe some of that is going on - I don't really know. But noise reduction actually destroys data so that must limit what can be done by a method that employs it early on.

 

Gale



#32 SKYGZR

SKYGZR

    Vanguard

  • *****
  • Posts: 2,016
  • Joined: 13 Aug 2009

Posted 20 November 2014 - 12:53 AM

To each their own...try it to find out if "it works for you"..if not, go back to the way you were doing it before, or perhaps combine attempts..processing data seems to be the most sought after "wisdom" when doing AP.

Personally..I'm kinda "stuck in a rut"..yet working my way out (I hope)...



#33 G. Hatfield

G. Hatfield

    Surveyor 1

  • *****
  • Posts: 1,572
  • Joined: 31 Jan 2009

Posted 20 November 2014 - 01:57 AM

Dead end?  Let's be clear: in my imaging efforts I am not doing science; it is more like art.  I may be wrong, but I think this applies to most of us on this forum.  I am trying to create an image that represents my mind's eye view of the galaxy or nebula or whatever I'm imaging.  When I am finished processing an image, I'm done with it.  I don't care if I'm at a dead end..... in fact, usually I am, since I can't go further.  One more application of curves or whatever makes the image look worse.

Even my mind's eye view of the object is pure fantasy in most cases.  For example, no matter how close you get to a typical emission nebula, it is not going to look like the images we create to represent it.  I once thought that I was trying to create a picture of the object as it would be seen flying nearby in some "starship."  But as was pointed out on this forum, that is a misconception.  The closer you get, the more diffuse the nebula, so you will never see the red glow of an emission nebula no matter how close you get.

My only criterion in using an image processing tool is... does it make the image look better?  Noise reduction in Camera Raw is a good example.  It works to create a nice smooth image with minimal loss of detail.  From my experience none of the noise reduction tools in PI can produce a better end result with such minimal effort.  I'm not looking for "100%" so to speak.  "90%" is fine in many cases if I can accomplish my goal (i.e., a nice looking image) with minimal effort.  Like Tony.... I am lazy.  Art is not rigorous, at least for me.

 

George



#34 Ivo Jager

Ivo Jager

    Vendor ( Star Tools )

  • *****
  • Vendors
  • Posts: 400
  • Joined: 19 Mar 2011

Posted 20 November 2014 - 02:33 AM

Some mathematical operations can be done in any order and you get the same result in the end, maybe some of that is going on - I don't really know. But noise reduction actually destroys data so that must limit what can be done by a method that employs it early on.

 

Gale

 

Gale, you hit the nail on the head.

 

From Wikipedia (Commutative property):

 

Commutative operations in everyday life:

    Putting on socks resembles a commutative operation, since which sock is put on first is unimportant. Either way, the result (having both socks on) is the same.
    The commutativity of addition is observed when paying for an item with cash. Regardless of the order the bills are handed over in, they always give the same total.

 

Non-commutative operations in everyday life:

    Washing and drying clothes resembles a noncommutative operation; washing and then drying produces a markedly different result to drying and then washing.
    Rotating a book 90° around a vertical axis then 90° around a horizontal axis produces a different orientation than when the rotations are performed in the opposite order.

 

Surely we can agree that pointing out that someone's clothes are still wet, because they dried them before they washed them, is not an act of 'arrogance', if the whole purpose of this forum is to have the cleanest, driest clothes possible.
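Applied to imaging, the same non-commutativity is easy to demonstrate with a toy example (synthetic data only): averaging and a non-linear stretch give different answers depending on which is done first.

```python
# Toy demonstration (synthetic data): a non-linear stretch and averaging do
# not commute, so stretching the individual subs and then stacking gives a
# different result than stacking the linear subs and stretching once.
import numpy as np

rng = np.random.default_rng(2)

def stretch(x):
    return np.power(x, 0.4)   # a simple non-linear (gamma-like) stretch

subs = rng.poisson(lam=50, size=(32, 128, 128)).astype(float) / 100.0  # 32 noisy subs

stack_then_stretch = stretch(subs.mean(axis=0))   # combine linearly, stretch once
stretch_then_stack = stretch(subs).mean(axis=0)   # stretch each sub first, then combine

diff = np.abs(stack_then_stretch - stretch_then_stack).mean()
print(f"mean absolute difference between the two orders: {diff:.4f}")  # clearly non-zero
```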

 

G. Hatfield, on 20 Nov 2014 - 5:57 PM, said:

    Dead end?  Lets be clear, in my imaging efforts I am not doing science; it is more like art.  I may be wrong, but I think this applies to most of us on this forum.  I am trying to create an image that represents my mind's eye view of the galaxy or nebula or whatever, I'm imaging.  When I am finished processing an image, I'm done with it.  I don't care if I'm at a dead end..... in fact, usually I am since I can't go further.  One more application of curves or whatever makes the image look worse.  Even my mind's eye view of the object is pure fantasy in most cases.  For example, no matter how close you get to a typical emission nebula, it is not going to look like the images we create to represent it.  I once thought that I was trying to create a picture of the object as it would be seen flying near by it in some "starship."   But as was pointed out on this forum, that is a misconception.  The closer you get, the more diffuse the nebula, so you will never see the red glow of an emission nebula no matter how close you get.  My only criteria in using an image processing tool is...does it make the image look better? Noise reduction in Camera Raw is a good example.  It works to create a nice smooth image with minimal loss of detail.  From my experience none of the noise reduction tools in PI can produce a better end result with such a minimal effort.  I'm not looking for "100%" so to speak.  "90%" is fine in many cases if I can accomplish my goal (i.e., a nice looking image) with minimal effort.   Like Tony.... I am lazy.  Art is not rigorous, at least for me.

    George

 

George, I'm not arguing with you about art, as I totally agree with you! I have defended exactly that position many times here on this forum - no one has a monopoly on what looks right. This is, however, about teaching someone how to create art on their own in the first place (I'm talking about the tools, not the envisioned end result which is, as you point out, totally subjective). That is why most of us are here; to get to grips with the tools and improve ourselves. Tony is teaching a 'method' that instils bad practices, grossly misuses the tools we have at our disposal and (if one should adopt this way of working) prohibits people from getting the most out of their material. All without realizing (or at least telling us) that these shortcuts are suboptimal at best, for reasons that are mathematical and objective in nature. Yes, I consider that damaging and a dead end; they're practices that need to be 'unlearned' before improvements - if desired - can be made again.

 

If you're happy with the results that is great! But you, and anyone else, deserve to know that some of the things Tony does, says and advocates in that video are demonstrably, objectively and mathematically incorrect. I'm not even complaining about the lack of reasoning for using a tool - it's a short video and those interested in knowing 'why' can learn that later. I'm simply complaining about incorrect use of tools and nonsensical sequences that damage data and set people up for frustration and disaster (not everyone can afford a modified 6D to compensate for the destruction displayed! Someone with a 1000D would be tearing his/her hair out!).

 

We're having a hard time as it is, teaching newbies the ropes and helping them use the tools at their disposal more effectively, and this is one video that we could've really done without. :(


Edited by Ivo Jager, 20 November 2014 - 02:34 AM.


#35 guyroch

guyroch

    Vendor (BackyardEOS)

  • *****
  • Vendors
  • Posts: 3,682
  • Joined: 22 Jan 2008

Posted 20 November 2014 - 09:11 AM

There is nothing arrogant in what Ivo said IMO.  I for one welcome it when good arguments are provided on both sides of a conversation.

 

Ivo is coming from a pure science/mathematics perspective and he does so with a solid understanding too; StarTools is a testament to that.

 

Tony is coming at it from an aesthetic perspective with an "I'm lazy" twist to it, and his images speak for themselves.

 

These 2 camps usually don't mix because the desired outcome is different given the path one takes.  In the end it is a personal preference and whatever floats your boat is perfectly fine regardless of your process.

 

For me it is a little bit of both.  By that I mean I want a nice image to look at and share with my friends and family.  Why?  Because they do not care about the process/steps that I use; they look at the image for a few seconds and then base an opinion on the aesthetics of it.  However, I'm personally satisfied if and only if I know I have preserved as much real data signal as possible, and this is where the science/mathematics approach comes into play, for me at least.

 

To each their own, and that is fine in my book.

 

Guylain



#36 G. Hatfield

G. Hatfield

    Surveyor 1

  • *****
  • Posts: 1,572
  • Joined: 31 Jan 2009

Posted 20 November 2014 - 09:11 AM

Ivo we obviously disagree on the appropriateness of Tony's method.  All I can say is, it works for me.  For those that are interested, give it a try.  Unlike many approaches, it is easy to learn and does not take a lot of time and effort.  It may well be a compromise, but it could well lead to good outcomes for those starting out.  Nothing like a nice looking image to stimulate more interest in this often challenging hobby.  And just because one uses Tony's methods does not mean one is stuck in that pathway.  Why look at me.... I'm learning PixInsight.  

 

Some of the best images I have taken were processed using his method.  But, the phrase "your results may vary" should be kept in mind.  I have only used his method with images shot at a dark site at over 6000 feet above an Arizona desert (Kitt Peak).  I have only used a modified Canon 6D with good optics (Tak FSQ 106N or TEC 140).  So I started with good data which for me is more important than any processing method.  Those images can be seen here.  

 

http://www.geoandpat...images2014.html

 

These were all relatively short exposures.... the details of each are provided on this page.

 

It should also be said that Tony's presentation was only a brief review of the material he has on some of his instructional videos. But I do think he covered the main points pretty well.  

 

I think the Canon 6D is an indication of where we are headed in DSLR astrophotography.... cameras with much lower noise and better sensitivity.  I really do think that calibration will become a thing of the past, except where flats are needed to remove severe optical problems.

 

George



#37 G. Hatfield

G. Hatfield

    Surveyor 1

  • *****
  • Posts: 1,572
  • Joined: 31 Jan 2009

Posted 20 November 2014 - 09:18 AM

Guylain...  I agree with most of your points.  I guess what I am objecting to is the position that Tony's method is totally without merit and is somehow dangerous to use for those starting out in image processing. 

 

George



#38 nodalpoint

nodalpoint

    Apollo

  • *****
  • Posts: 1,332
  • Joined: 03 Jun 2013

Posted 20 November 2014 - 12:39 PM

If someone has the software, it seems like it would be a fun experiment to give the technique a try. It would be easy to set up a way to do comparisons and judge if there are any benefits and/or drawbacks.

 

Regarding the "lazy" comment, it's something I understand and don't understand. If anyone has worked as a professional photographer and had to produce large numbers of photos, do something repetitive or been on a deadline, any way to make that easier isn't what I call lazy. Even if you love what you're doing, work is still work and you're often being pushed to do more so saving time is a priority. With astrophotography that's rarely if ever the case and in fact most people are looking to maximize what they have and, since it is a hobby or voluntary activity, taking time to do it should be part of the enjoyment. Shooting flats and bias takes almost no time at all, darks take time but not much attention. Stacking those together seems an equal effort to the method of using Camera Raw described in the video. If the results are equal, without losing quality, saving time is great. If there is a loss of quality, and you need to purchase software you don't have, what's to be gained by saving a few hours? Especially when people are shooting the same things over and over?



#39 Synon

Synon

    Mariner 2

  • *****
  • Posts: 261
  • Joined: 02 Jun 2012

Posted 20 November 2014 - 01:11 PM

I surely don't speak for all new imagers, but as one I really appreciate the points brought up about how Tony's methods affect your images from an objective standpoint. Collecting data is hard work, and I don't see how Tony's method really saves much time by reorganizing the steps; all I can see now is that it destroys data before stacking occurs, which he did not disclose and I never would have known. I don't think his method is without merit, but I think it's important for new folks like myself to understand how their data is being affected by these steps, something I didn't feel like Tony was doing.



#40 entilza

entilza

    Soyuz

  • *****
  • Posts: 3,794
  • Joined: 06 Oct 2014

Posted 20 November 2014 - 01:22 PM

Can someone clarify for someone new: the key step in his method is to dither, so there are two ways to dither? Manually between each frame (or every few frames), or automatically through PHD if guiding. But from what I read, using the PHD method can potentially take an extra long time to re-establish guiding. So I assume typically most people don't use dithering, and just set their guiding once and their intervalometer and do something else while their run is in progress.

However, he mentioned needing roughly only 9 images in varied dither positions for the most effect. Again, for someone who has never used guiding, wouldn't this be very time consuming to adjust manually and reset the guiding every few frames, plus having to remember what pattern you used to keep it in a "dice 6" pattern around your object?

If someone would do the comparison experiment with 9 frames of each method would that be a good enough example?

Thanks!
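For reference, here is a minimal sketch of the kind of 9-position pattern being described, assuming it is simply a 3x3 grid centred on the target; the step size is an arbitrary placeholder, not a figure from the video:

```python
# Rough sketch of a 9-position ('3x3 grid') manual dither plan around the
# target. The 10-pixel step is an arbitrary placeholder, not a value from
# Tony's video; scale it to your image scale and how coarse your mottle is.
from itertools import product

step_px = 10
offsets = [(dx * step_px, dy * step_px) for dx, dy in product((-1, 0, 1), repeat=2)]

for i, (dx, dy) in enumerate(offsets, 1):
    print(f"frame {i}: offset RA {dx:+d} px, DEC {dy:+d} px")
```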

#41 Footbag

Footbag

    Cosmos

  • *****
  • Posts: 9,115
  • Joined: 13 Apr 2009

Posted 20 November 2014 - 01:38 PM

I'm guessing this video offers similar advice as his DVD.  I looked at his "lazy" statement as a catch all.  He was saying there are better ways, but the gains are too time consuming for HIM.  The gains may not be too time consuming for others.  

 

Honestly, he lost me at "I don't take flats."  I've seen with my own eyes what they do.  All calibration frames for that matter.  

 

For me, it just comes down to the warning he gives you.  This isn't the only way, this isn't the best way, but it can work. 



#42 gdd

gdd

    Mercury-Atlas

  • -----
  • Posts: 2,550
  • Joined: 23 Nov 2005

Posted 20 November 2014 - 01:54 PM

I think most people who dither also autoguide, which requires either a laptop and software or a standalone autoguider that supports dithering. I had never thought about the time it takes the autoguider to reestablish guiding, so, as was pointed out before, that makes intervalometers less effective.

 

People who do unguided imaging often unintentionally dither to some extent because of imperfect polar alignment. Usually there is one pixel of drift in my images from one subexposure to the next, most of it in the vertical direction. I get a very small amount of vertical drift caused by my framing not being perfectly aligned with the RA direction. There could also be some DEC drift going on. The dithering won't be perfect because it is not randomized, but you can use the intervalometer because you have no guide star to worry about.

 

If you want to do unguided dithering correctly you should use DitherMaster; it allows but does not require the use of an autoguider. You would need to use the laptop anyway for DitherMaster, so there is probably no point in using the intervalometer.

 

An interesting thing about the dithering Tony was doing is that the color mottling he was trying to cancel out was on a much larger scale (10x10 pixels or so) than the bayer pattern (2x2 pixels). This was a side effect of his method. With normal processing you only need to randomize the 2x2 pixel bayer pattern.

 

Gale



#43 gdd

gdd

    Mercury-Atlas

  • -----
  • Posts: 2,550
  • Joined: 23 Nov 2005

Posted 20 November 2014 - 02:04 PM

I'm guessing this video offers similar advice as his DVD.  I looked at his "lazy" statement as a catch all.  He was saying there are better ways, but the gains are too time consuming for HIM.  The gains may not be too time consuming for others.  

 

Honestly, he lost me at "I don't take flats."  I've seen with my own eyes what they do.  All calibration frames for that matter.  

 

For me, it just comes down to the warning he gives you.  This isn't the only way, this isn't the best way, but it can work. 

My understanding is flats do 2 things:

1. Even out the uneven illumination (vignetting) caused by the optical design.

2. Even out the illumination defects caused by dust on the sensor.

 

The vignetting can also be removed if the software knows the lens design or provides enough parameters that the user can tweak until the illumination looks even.

 

The dust problem can be eliminated by maintaining a dust-free environment. You can also use software to remove obvious dust spots, but that would be time-consuming.
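For context, the correction a flat provides is just a per-pixel division after the additive signal is removed; a simplified sketch (variable names are placeholders, and the flat's own calibration is glossed over):

```python
# Simplified sketch of what flat correction does (names are placeholders; the
# flat is assumed to be already bias/dark-subtracted).
import numpy as np

def calibrate(light, master_dark, master_flat):
    """Subtract the additive signal, then divide out the uneven illumination
    (vignetting) and dust shadows that the flat records."""
    flat_norm = master_flat / np.median(master_flat)   # normalise flat to ~1.0
    return (light - master_dark) / flat_norm
```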

 

Gale



#44 Footbag

Footbag

    Cosmos

  • *****
  • Posts: 9,115
  • Joined: 13 Apr 2009

Posted 20 November 2014 - 02:12 PM

 

I'm guessing this video offers similar advice as his DVD.  I looked at his "lazy" statement as a catch all.  He was saying there are better ways, but the gains are too time consuming for HIM.  The gains may not be too time consuming for others.  

 

Honestly, he lost me at "I don't take flats."  I've seen with my own eyes what they do.  All calibration frames for that matter.  

 

For me, it just comes down to the warning he gives you.  This isn't the only way, this isn't the best way, but it can work. 

My understanding is flats do 2 things:

1. Even out the illumination caused vignetting caused by the optical design.

2. Even out the illumination defects caused by dust on the sensor.

 

The vignetting can also be removed if the software knows the lens design or provides enough parameters the user can tweek until the illumination looks even.

 

The dust problem can be eliminated by maintaining a dust free environment. You can also use software to remove obvious dust spots, but that would be a time consumer.

 

Gale

 

 

Or if you give it a flat to figure out the unevenness of the illumination. :grin:

 

Doing it purely in software seems like it would be tough to figure out what the correct illumination would be.  Especially if there was LP.  Seems too seat of the pants to me.  Plus, how does it scale the signal?  



#45 gdd

gdd

    Mercury-Atlas

  • -----
  • Posts: 2,550
  • Joined: 23 Nov 2005

Posted 20 November 2014 - 02:18 PM

I do not see how flats will correct for LP which changes as you track across the sky. The software should know how the illumination spreads out if it knows the lens, telescope, obstructions, etc.

 

Gale



#46 Footbag

Footbag

    Cosmos

  • *****
  • Posts: 9,115
  • Joined: 13 Apr 2009

Posted 20 November 2014 - 02:30 PM

I do not see how flats will correct for LP which changes as you track across the sky. The software should know how the illumination spreads out if it knows the lens, telescope, obstructions, etc.

 

Gale

I'm not saying that they will correct for LP.  

 

Before I took flats, I tried using a few methods to fix vignetting.  I'd put down sample points where I knew there was background and create a reverse gradient, then try to apply that.  But the gradient from LP would mess that up.  LP and lens illumination must be removed using different methods.  One is additive and one is multiplicative (or divisive).  That's why I mentioned LP.

 

As for the lens profiles, they can only be so good.  With SCTs I'm thinking a lot may come into play.  Collimation of reflectors, for example, changes illumination; flats fix that.  But that goes to my question...  Is LR dividing the illumination out of the signal?  Maybe since it's Adobe, they are doing it correctly?  Still, I cannot understand not using flats.  They are easy and give great results.

 

I've noticed my CCD is so sensitive that it easily records dust motes.  I never really dealt with them with my DSLR, but I took flats for vignetting.  I wouldn't dare eliminate them from my CCD workflow.


Edited by Footbag, 20 November 2014 - 02:31 PM.


#47 Tonk

Tonk

    Cosmos

  • *****
  • Posts: 9,334
  • Joined: 19 Aug 2004

Posted 20 November 2014 - 02:36 PM

Honestly, he lost me at "I don't take flats."  I've seen with my own eyes what they do.  All calibration frames for that matter.

 

I agree with you here!

 

After 10 years of DSLR astro imaging I have at least educated myself to know that good calibration BEFORE de-Bayering gets you much, much further with downstream processing. Ivo is right in his "dead end" statement - the Hallas method limits the type of processing you can do without noise and artefact trouble hitting you, when it could so easily be avoided (OK for the lazy maybe??). A good example: if you want to do the best "star freeze" comet processing, then clean, accurate calibration before de-Bayering is essential.



#48 Tonk

Tonk

    Cosmos

  • *****
  • Posts: 9,334
  • Joined: 19 Aug 2004

Posted 20 November 2014 - 02:44 PM

I do not see how flats will correct for LP which changes as you track across the sky.

 

It doesn't!   But if you flat calibrate first then gradient removal is much easier and far more accurate.

 

 

Using a subtractive method to fix a gradient with vignetting still present introduces an incorrect result - your nebula will end up brighter/darker than it should be in various places. You may not care! - however, if you are building mosaics, you will find you have messed up when you try to join the images.

 

Flat calibration is highly recommended - mosaics and comet imaging certainly benefit
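A toy sketch of why the order matters (synthetic numbers, not anyone's actual data): the vignetting is multiplicative while the sky gradient is additive, so the flat has to be divided out before the gradient is subtracted.

```python
# Toy numbers showing why the order matters: the recorded frame is roughly
# (object + sky gradient) * vignetting, so divide by the flat first and the
# remaining gradient is purely additive; subtract a fitted gradient with the
# vignetting still present and the object ends up wrongly scaled across the field.
import numpy as np

size = 200
y, x = np.mgrid[0:size, 0:size] / size
vignette = 1.0 - 0.8 * ((x - 0.5) ** 2 + (y - 0.5) ** 2)   # ~0.6 in the corners
gradient = 50.0 + 100.0 * x                                # light-pollution gradient
obj = np.zeros((size, size))
obj[30:70, 130:170] = 200.0                                # an off-centre 'nebula'

recorded = (obj + gradient) * vignette   # what the camera records
flat = vignette                          # a perfect flat (normalised to 1 at centre)

good = recorded / flat - gradient        # flat first (divide), then gradient (subtract)
bad = recorded - gradient                # gradient subtracted with vignetting still present

print("max error, flat then gradient:", round(float(np.abs(good - obj).max()), 2))  # ~0
print("max error, gradient only     :", round(float(np.abs(bad - obj).max()), 2))   # tens of ADU
```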



#49 Ivo Jager

Ivo Jager

    Vendor ( Star Tools )

  • *****
  • Vendors
  • Posts: 400
  • Joined: 19 Mar 2011

Posted 20 November 2014 - 05:07 PM

 

Ivo we obviously disagree on the appropriateness of Tony's method.  All I can say is, it works for me.  For those that are interested, give it a try.  Unlike many approaches, it is easy to learn and does not take a lot of time and effort.  It may well be a compromise, but it could well lead to good outcomes for those starting out.  Nothing like a nice looking image to stimulate more interest in this often challenging hobby.  And just because one uses Tony's methods does not mean one is stuck in that pathway.  Why look at me.... I'm learning PixInsight.

Again George, I'm happy that you're happy and even think your 6D images are really nice (though wondering if more would have been possible with your gear and location at the quoted exposure lengths), but let's not conflate this with appropriateness of the method.

 

"I leave my car in reverse gear (because I'm 'lazy') with a brick on the accelerator (because I'm 'lazy') so I just need to operate the clutch while I negotiate the footpaths (because I'm 'lazy') to get to the grocery store. See, I got a modified Hummer 6D which just rolls over any obstacles in my local neighborhood so I don't have to think about them. Hey, it gets me from A to B just like everyone else so 'it works for me'. Sure it might be a compromise, but this could well lead to good outcomes for those starting out. :undecided: "

 

Is this really a viable way to teach someone starting out *anything* about operating a motor vehicle effectively, efficiently and safely?

 

If you've really started learning PixInsight you will have found by now that you had to unlearn most of what you've been doing with Tony's method. If you thought I was bad, the folks at Pleiades are absolute sticklers for scientific rigour (which is even too much for me, as I'm with you regarding your astute remarks about 'art'!) and will flat out tell you they can't help you if you want to do stuff like selectively process a part of the image or add some diffraction spikes (to each their own, I say). You will have found out that there is a big difference between linear and non-linear processing, and you will have found out that calibration and stacking does not color calibrate your image. You will have found out there is no way to apply any sort of noise reduction to your frames prior to calibration/stacking, and you will have found out that there is no way of sharpening your frames prior to calibration/stacking. In fact, you will have found out that virtually nothing that happens in Tony's video is possible in PI, exactly because it's nonsensical and destructive (and certainly not because the fine folks and community at Pleiades didn't think of it!).

 

I really do think that calibration will become a thing of the past, except where flats are needed to remove severe optical problems. 

 

 

I don't think you understand what calibration does or the issues it counteracts. It is still very much necessary to take bias and/or dark frames and it still improves your signal, whether you're dithering or not. On the fainter objects it can make the difference between a usable data set and a wasted night. As for flats, there is not (and never can be) a simple selectable profile for all optical trains to get rid of vignetting (if any) - the amount of possible optical train configurations is infinite. If you call a small dust speck 'a severe optical problem' (because a flat is required) then many, many astrophotographers willingly and knowingly routinely image with such 'severe optical problems' in place (and don't care about it because it is adequately addressed by their flats).


Edited by Ivo Jager, 20 November 2014 - 05:22 PM.


#50 G. Hatfield

G. Hatfield

    Surveyor 1

  • *****
  • Posts: 1,572
  • Joined: 31 Jan 2009

Posted 20 November 2014 - 06:10 PM

Ivo... thanks for looking at my images.  Imaging on Kitt Peak has its limitations.  No guiding for one thing.  Most of those images were taken for clients that rent a scope for the night at the Visitor Center.  I ran the scope and camera for them.  Even though they have the scope all night, they expect to see 30 or more objects and image at least half of those.  So there is no time to set up guiding with the equipment we had available.  That is why most exposures are around 2 minutes and there are often not more than 10 frames.  Some of the client images were from single 2-minute exposures..... no stacking.  See some examples here:  http://www.geoandpat...o_canon_6D.html   Maybe that is what the future holds!  No calibration and no stacking.

 

George



