  • So I shot the Bubble Nebula in true color last year, but I decided to shoot it again this past month in false color. It really helps to show the extended nebulosity, and gives me an excuse to compare my image to Hubble's. This false-color image uses the SHO palette, where the sulfur-ii wavelength is mapped to red, hydrogen-alpha to green, and oxygen-iii to blue. I'm really happy with how the colors turned out on this one. There are also a number of other nebulae and a star cluster in frame. Captured over 14 nights in Jan/Feb 2024 from a Bortle 9 zone (I could only get a couple of hours max per night on it).

    Places where I host my other images:

    Flickr | Instagram


    Equipment:

    • TPO 6" F/4 Imaging Newtonian

    • Orion Sirius EQ-G

    • ZWO ASI1600MM-Pro

    • Skywatcher Quattro Coma Corrector

    • ZWO EFW 8x1.25"/31mm

    • Astronomik LRGB+CLS Filters- 31mm

    • Astrodon 31mm Ha 5nm, Oiii 3nm, Sii 5nm

    • Agena 50mm Deluxe Straight-Through Guide Scope

    • ZWO ASI-290mc for guiding

    • Moonlite Autofocuser

    Acquisition: 37 hours 36 minutes (camera at -15°C, unity gain)

    • Ha - 95x360"

    • Oiii - 140x360"

    • Sii - 141x360"

    • Darks- 30

    • Flats- 30 per filter
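    For the curious, the subexposure counts above really do add up to the stated integration time. A quick sanity check in Python (illustrative only, not part of the imaging workflow):

    ```python
    # Subexposure counts from the acquisition list; each sub is 360 s.
    subs = {"Ha": 95, "Oiii": 140, "Sii": 141}

    total_seconds = sum(subs.values()) * 360      # 376 subs x 360 s
    hours, rem = divmod(total_seconds, 3600)
    minutes = rem // 60

    print(f"{hours}h {minutes:02d}m")             # -> 37h 36m
    ```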

    Capture Software:

    • Captured using N.I.N.A. and PHD2 for guiding and dithering.

    PixInsight Preprocessing:

    • BatchPreProcessing

    • StarAlignment

    • Blink

    • ImageIntegration per channel

    • DrizzleIntegration (2x, Var β=1.5)

    • Dynamic Crop

    • DynamicBackgroundExtraction

    Duplicated each image and removed the stars with StarXTerminator, then ran DBE with a very dense grid of points on the starless copy to generate a background model. The model is then divided out of the original image using the following PixelMath (math courtesy of /u/jimmythechicken1):

    $T * med(model) / model
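    Dividing by the model (rather than subtracting it) treats the gradient as a multiplicative effect, and scaling by the model's median keeps the overall brightness of the frame unchanged. A minimal NumPy sketch of the same idea (my own illustration, not the PixInsight implementation):

    ```python
    import numpy as np

    def flatten_background(image: np.ndarray, model: np.ndarray) -> np.ndarray:
        """Divide out a smooth background model, rescaled by its median so
        overall brightness is preserved (the PixelMath `$T * med(model) / model`)."""
        return image * np.median(model) / model

    # Toy example: a flat 0.2 frame with a linear gradient as the "model".
    img = np.full((4, 4), 0.2)
    model = np.linspace(0.1, 0.3, 16).reshape(4, 4)
    flat = flatten_background(img, model)
    ```

    One nice property: feeding the model itself through the correction yields a perfectly flat frame at the model's median level.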

    Narrowband Linear:

    • BlurXTerminator and NoiseXTerminator

    • Duplicated the images before stretching to be used for separate stars-only processing

    • Slight stretch using HistogramTransformation

    • iHDR 2.0 script to stretch each channel the rest of the way.

    This is a great new PixInsight script from Sketch on the Discord. Here's the link to the repo if you want to add it to your own PI install.

    Stars Only Processing:

    • PixelMath to combine star images (SHO palette)

    • SpectroPhotometricColorCalibration (narrowband working mode)

    • StarXTerminator to make a stars-only image from each channel

    • SCNR > invert > SCNR > invert to remove greens and magentas

    • ArcsinhStretch + HT to stretch to nonlinear, to be combined later with the starless image
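    The SCNR > invert > SCNR > invert step works because inverting an RGB image swaps green casts with magenta casts, so a second SCNR pass on the inverted image kills magenta too. A toy sketch of SCNR's average-neutral protocol (my own simplified version, not PixInsight's code):

    ```python
    import numpy as np

    def scnr_green(rgb: np.ndarray) -> np.ndarray:
        """Average-neutral SCNR: clamp green to the mean of red and blue."""
        out = rgb.copy()
        out[..., 1] = np.minimum(out[..., 1], (out[..., 0] + out[..., 2]) / 2)
        return out

    def remove_green_and_magenta(rgb: np.ndarray) -> np.ndarray:
        """SCNR > invert > SCNR > invert on data normalized to [0, 1]."""
        cleaned = scnr_green(rgb)         # remove the green cast
        inverted = 1.0 - cleaned          # magenta cast becomes green
        inverted = scnr_green(inverted)   # remove it there too
        return 1.0 - inverted             # invert back
    ```

    A strongly green pixel like `[0.3, 0.8, 0.3]` gets its green channel clamped to 0.3, while a magenta pixel like `[0.8, 0.3, 0.8]` is neutralized by the inverted pass.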

    Nonlinear:

    • PixelMath to combine stretched Ha, Oiii, and Sii images into color image (SHO palette)

    • StarXTerminator to remove stars

    • HistogramTransformations to tone back the greens and apply a more aggressive stretch to red and blue channels

    • Lots of CurvesTransformation rounds to adjust lightness, hue, contrast, saturation, etc.

    • LRGBCombination with stretched Ha as luminance

    • DeepSNR

    • more curves

    • ColorSaturation to bring up the blues in the bubble

    • LocalHistogramEqualization

    • even more curves

    • MLT for chrominance noise reduction

    • PixelMath to add in the stretched stars-only image from earlier

    This basically re-linearizes the two images, adds them together, and then stretches the result back to where it was

    (Jimmy is a processing wizard when it comes to writing up this independent starless processing stuff)

    mtf(.005,
        mtf(.995, Stars) +
        mtf(.995, Starless))
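    The trick relies on the midtones transfer function being its own inverse at complementary midtone values: mtf(1 - m, mtf(m, x)) = x. So mtf(.995, ...) pulls each stretched image back toward linear, the pixel values add sensibly, and mtf(.005, ...) restores the stretch. A toy Python sketch, assuming the standard PixInsight MTF formula (the clamp to 1.0 is my own addition to keep the sum in range):

    ```python
    def mtf(m: float, x: float) -> float:
        """PixInsight's midtones transfer function on [0, 1].
        Key property: mtf(1 - m, .) is the inverse of mtf(m, .)."""
        if x in (0.0, 1.0):
            return x
        return (m - 1.0) * x / ((2.0 * m - 1.0) * x - m)

    def combine_stars(stars: float, starless: float) -> float:
        """mtf(.005, mtf(.995, Stars) + mtf(.995, Starless)):
        de-stretch both images, add them, then re-stretch."""
        total = min(mtf(0.995, stars) + mtf(0.995, starless), 1.0)
        return mtf(0.005, total)
    ```

    Combining an image with pure black leaves it untouched, which is exactly what you want from an additive star recombination.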

    • A round of NoiseXTerminator for good measure

    • Resample to 60%

    • Annotation

  • Wow... I will probably never be able to do something like this, but it's fucking awesome and interesting to see the amount of effort you've put into this one picture 😊

    One thing that would interest me is what a single, unstacked frame looks like, just for comparison.

    Is there a reason you use PixInsight instead of Siril?
    I'm very new to this and would like to know differences and experiences in software.

    • Here's what single Ha, Oiii, and Sii frames look like.

      I've never actually used Siril, but for the longest time PixInsight has been considered the be-all and end-all for deep-sky processing, and I decided to dive completely into it once I moved on from Photoshop. There are also a number of processes and scripts made by the community just for Pix which have become essential for some of my workflows, like BlurXTerminator (paid), and a bunch of PixelMath expressions from the guys in the Discord.

        Awesome, thanks! For a newcomer like myself, I always find it super fascinating to see how "uninteresting" the single take looks, and then how blastingly colorful, detailed and amazing the final result is compared to that.


        I just shared my own story a few hours ago here on this community, where you can see my progress over the last few months.
        How long did it take for you to get to this stage of awesomeness? How much does the equipment cost, and at what spots do you shoot? Do you travel explicitly to very remote areas (e.g. the desert or forest), or do you shoot in your front yard? :D

        Are you interested in being my "guide", in terms of telling me what maximum quality and detail I can achieve with my current, shitty and absolutely-not-comparable-to-yours equipment? I mean, you had to start somewhere too, right? What were your last results at the stage when you decided to upgrade from your first camera to a better one? Would you mind sharing a similar story?

        I think I will just make a post about that question, so it reaches a wider audience :D
