Using Correlated-K Tables vs. Monochromatic Opacities

Throughout the tutorials, we have always used monochromatic opacities. If you are interested in switching to correlated-k tables instead of the resampled opacities currently provided on Zenodo, that is possible. However, we currently only support our pre-mixed correlated-k tables, which you can find on `Zenodo <>`__; details are included in Marley et al. 2020.

Before completing this notebook you will have to:

  1. Download one or more of the k-table folders. Each folder should be of the format sonora_2020_feh+XXX_co_YYY, where XXX defines the Fe/H and YYY describes the C/O ratio. Inside that directory there should be at least two files: ascii_data and full_abunds.

  2. Download Sonora PT profiles (if this is unfamiliar please see Brown Dwarf Tutorial)

[1]:
#for number #1 above, point to the directory
ck_db = '/data/kcoeff_2020_v3/sonora_2020_feh+000_co_100.data.196/'
#for number #2 above, point to the directory
sonora_profile_db = '/data/sonora_profile/'
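
Before moving on, it can be worth a quick check that those two directories exist and that the k-table folder contains the expected files. The snippet below is just an optional sanity check using the paths defined above.

[ ]:
import os
#optional sanity check on the paths defined above
print(os.path.isdir(ck_db), os.path.isdir(sonora_profile_db))
print(os.listdir(ck_db)) #should include at least ascii_data and full_abunds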
[2]:
#picaso
from picaso import justdoit as jdi
from picaso import justplotit as jpi
jpi.output_notebook()
[3]:
#let's do two calculations
#first, the regular monochromatic opacities
opacity_mono = jdi.opannection() #let's just use all defaults

Call opannection and supply a correlated-k table

Notice that when we call opannection with ck=True, we must also pass it one of the directories from Zenodo. The directory name describes the chemistry the table was computed for. The ck_db we defined above is for 1xSolar metallicity and Solar C/O.

[4]:
#second the correlated k table
opacity_ck = jdi.opannection(ck=True,
                             ck_db=ck_db
                            )
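
As an aside, the chemistry is encoded in the folder name itself. The snippet below is purely illustrative: it assumes XXX is [Fe/H]x100 and YYY is the C/O label x100, which is consistent with the 1xSolar, Solar C/O example used here (see Marley et al. 2020 for the exact grid definitions).

[ ]:
#illustrative only: decode the metallicity and C/O labels from the folder name
import re
name = 'sonora_2020_feh+000_co_100'
feh, co = re.search(r'feh([+-]\d+)_co_(\d+)', name).groups()
print('[Fe/H] =', int(feh)/100, '; C/O label =', int(co)/100)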
[5]:
#let's use the same example Jupiter PT profile
pt = jdi.pd.read_csv(jdi.jupiter_pt(), sep=r'\s+', usecols=[0,1])

You should already be familiar with the following code block. Here we are loading the input class, setting the gravity and stellar parameters, and loading the Sonora profile.

[6]:
calc = {'ck':opacity_ck,'mono':opacity_mono}
cases = {i: jdi.inputs() for i in calc.keys()}

#phase angle
for i in calc.keys(): cases[i].phase_angle(0) #radians

#define gravity
for i in calc.keys(): cases[i].gravity(radius=1,
                                    radius_unit=jdi.u.Unit('R_jup'),
                                    mass=1,
                                    mass_unit=jdi.u.Unit('M_jup'))

#define star
for i in calc.keys(): cases[i].star(calc[i], 5000,0,4.0,
                                   radius=1,
                                   radius_unit=jdi.u.Unit('R_sun'),
                                   semi_major=5,
                                   semi_major_unit=jdi.u.Unit('au'))


#just grabbing any Teff so we can use a sonora pressure-temperature profile
Teff = 1000
for i in calc.keys():cases[i].sonora(sonora_profile_db, Teff, chem='grid')
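
If you want to see what the sonora call just attached to each case, you can take a quick (optional) peek at the profile DataFrame before we touch the chemistry.

[ ]:
#optional: peek at the pressure-temperature profile loaded by sonora()
cases['ck'].inputs['atmosphere']['profile'].head()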

The difference between the two procedures comes in when we specify the chemistry. Here we introduce a new function called premix_atmosphere, which pulls the precomputed chemistry that was used to compute the correlated-k tables.

[7]:
cases['ck'].premix_atmosphere(calc['ck'],
            df = cases['ck'].inputs['atmosphere']['profile'].loc[:,['pressure','temperature']])

#now let's pass that to the monochromatic opacities for consistency
cases['mono'].atmosphere(df=cases['ck'].inputs['atmosphere']['profile'])
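
As a quick sanity check (purely illustrative), you can print the columns now attached to each case; after premix_atmosphere and the atmosphere call above they should carry the same pre-mixed chemistry.

[ ]:
#both cases should now carry the same chemistry columns
print(list(cases['ck'].inputs['atmosphere']['profile'].columns))
print(list(cases['mono'].inputs['atmosphere']['profile'].columns))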

[8]:
df = {}
df['ck'] = cases['ck'].spectrum(calc['ck'],full_output=True, calculation='thermal')
df['mono'] = cases['mono'].spectrum(calc['mono'],full_output=True, calculation='thermal')

Compare the optical depth for various layers

One big difference you will notice between taugas for a monochromatic opacity calculation and a correlated-k calculation is that taugas has an extra dimension in the CK case, which corresponds to the Gauss points. Be careful not to confuse these Gauss points with the Gauss points we use for disk integration. They are different!
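
To see this concretely, you can inspect the array shapes and the Gauss weights directly (the exact sizes depend on your opacity files, but the last axis is where the two differ).

[ ]:
#inspect the extra CK dimension (illustrative)
print(df['mono']['full_output']['taugas'].shape) #mono: last axis is indexed with [...,0] below
print(df['ck']['full_output']['taugas'].shape)   #ck: last axis holds the 8 gauss points
print(calc['ck'].gauss_wts)                      #the corresponding gauss weights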

Below is an example of summing the taus, weighted by the Gauss weights, in order to compare with the monochromatic opacities.

[9]:
f = jpi.figure(y_axis_type='log',y_range=[1e-3,1e4],x_range=[1,14],
              x_axis_label='Wavelength (um)',
              y_axis_label='Optical Depth per Layer')

regridx = df['ck']['wavenumber'][df['ck']['wavenumber']<1e4/1]

#we have 8 gauss points, so sum the weighted taugas over them once (the sum does not depend on the layer)
tau_ck = 0
for ig in range(8):
    tau_ck += calc['ck'].gauss_wts[ig]*df['ck']['full_output']['taugas'][:,:,ig]

for i in range(df['mono']['full_output']['taugas'].shape[0])[::15]:
    x,y = jdi.mean_regrid(df['mono']['wavenumber'],df['mono']['full_output']['taugas'][i,:,0], newx=regridx)
    f.line(1e4/x, y, line_dash='solid', line_width=4, color=jpi.Colorblind8[0], legend_label='mono')
    f.line(1e4/df['ck']['wavenumber'], tau_ck[i,:], color=jpi.Colorblind8[1],
           line_width=2, legend_label='CK')
jpi.show(f)

Compare the final monochromatic- and CK-generated spectra

If all were perfect and the correlated-k tables and the picaso opacity file were derived from the same set of cross sections, then these spectra would be identical. Small deviations might come from: 1) different line-list sources, or 2) a resampling factor that is too low. The latter could be resolved by not using the resampling parameter in opannection.

[10]:
#flip units just to show an example of unit conversion
wno_ck = df['ck']['wavenumber']
for i in list(calc.keys()):
    compare = 'thermal'
    x, y = df[i]['wavenumber'] , df[i][compare]
    if i == 'mono': x,y = jdi.mean_regrid(x, y, newx=wno_ck[jdi.np.where((wno_ck>1e4/14)&
                                                                      (wno_ck<1e4/1))]) #wavenumber, erg/cm2/s/Hz

    xmicron = 1e4/x

    flamy = y*1e-8 #per anstrom instead of per cm
    sp = jdi.psyn.ArraySpectrum(xmicron, flamy,
                                waveunits='um',
                                fluxunits='FLAM')
    sp.convert("um")
    sp.convert('Fnu') #erg/cm2/s/Hz

    x = sp.wave #micron
    y= sp.flux #erg/cm2/s/Hz
    df[i]['fluxnu'] = y

    df[i]['regridy'] =  y
    df[i]['regridx'] = x
[11]:
x = [1e4/df[i]['regridx'] for i in calc.keys()]
y = [df[i]['regridy'] for i in calc.keys()]
jpi.show(jpi.spectrum(x,
                  y,
                  legend=list(calc.keys()), plot_width=1000,
                  y_axis_type='log',x_range=[1,10],y_range=[1e-8,1e-5]
                 ,x_axis_type='log'))
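
To put a rough number on the agreement, one option is to interpolate the CK spectrum onto the regridded monochromatic grid and look at the fractional difference. The sketch below only reuses the regridx/regridy arrays built above, sorting first because the arrays may be stored in descending order.

[ ]:
#rough, illustrative comparison: interpolate CK onto the mono grid and difference
xm, ym = df['mono']['regridx'], df['mono']['regridy']
xc, yc = df['ck']['regridx'], df['ck']['regridy']
order = jdi.np.argsort(xc) #np.interp needs ascending x
y_ck_on_mono = jdi.np.interp(xm, xc[order], yc[order])
pct_diff = 100*jdi.np.abs(y_ck_on_mono - ym)/y_ck_on_mono
print('median % difference:', jdi.np.median(pct_diff))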

Compare Chemistry

In this case we supplied both calculations with the same chemistry. However, there may be cases where you want to extract the chemistry from the pre-computed files. This is how you would go about doing so.

In the code snippet below, I am pulling out the abundances that are higher than 0.1 ppm.

[12]:
f = jpi.figure(y_range=[50,1e-4], y_axis_type='log',x_axis_type='log',
               x_range=[1e-8, 1], y_axis_label='Pressure (bar)',
              x_axis_label='Mixing Ratio (v/v)')

cols = jpi.pals.Category20[20]
ii=0
for i in cases['ck'].inputs['atmosphere']['profile'].keys():
    if i in ['pressure','temperature']:
        continue
    for j, line in zip(['ck', 'mono'], ['dotted','solid']):
        try:
            x = cases[j].inputs['atmosphere']['profile'][i]
        except KeyError:
            continue #species not present in this case
        if x.max()>1e-7:
            y = cases[j].inputs['atmosphere']['profile']['pressure']
            f.line(x,y, line_width=3,line_dash=line,color=cols[ii])
            if j == 'mono':ii+=1
jpi.show(f)
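
If you just want the list of species that clear that 0.1 ppm cut, rather than reading it off the plot, something like the snippet below works; it is only a convenience sketch using the same profile DataFrame as above.

[ ]:
#illustrative: list species in the pre-mixed chemistry that ever exceed 0.1 ppm
prof = cases['ck'].inputs['atmosphere']['profile']
abundant = [s for s in prof.columns if s not in ['pressure','temperature'] and prof[s].max() > 1e-7]
print(abundant)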