{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Initial Setup\n",
"\n",
"This tutorial has two big package dependencies: 1) `PandExo`, 2) `PICASO`\n",
"\n",
"Each of these depends on a fairly hefty reference data package that includes things such as opacities, chemistry tables, and _JWST_ instrument reference data. \n",
"\n",
"\n",
"## Download Files and Fill in the Blank Below (Requires ~12 Gb)\n",
"\n",
"This is a lot of data so I recommend downloading it early, and ensuring you have enough storage space. \n",
"\n",
"1. [PandExo Reference Data - 2.2 Gb](https://stsci.app.box.com/v/pandeia-refdata-v1p5): will need to unzip\n",
"2. [Stellar Data Used in PandExo, PICASO - 877 Mb](ftp://ftp.stsci.edu/cdbs/tarfiles/synphot5.tar.gz): will need to unzip\n",
"3. [Cross sections for PICASO - 6 Gb](https://zenodo.org/record/3759675#.XwyRWJNKiHs)\n"
]
},
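{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Optional sketch of the download/unzip steps using notebook shell magics.\n",
"# These commands are illustrative only (Unix tools assumed); adjust the\n",
"# URLs and target paths to wherever you store data, then uncomment to run.\n",
"#!wget -O synphot5.tar.gz ftp://ftp.stsci.edu/cdbs/tarfiles/synphot5.tar.gz\n",
"#!tar -xzf synphot5.tar.gz -C /data   # unpacks the grp/hst/cdbs tree\n"
]
},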
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import os "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# STEP 1: PandExo reference data\n",
"# SET YOUR OWN PATH to the unzipped pandeia_data folder\n",
"# e.g. mine is \"/Users/meialua/Documents/data/pandeia_data-2.0\"\n",
"os.environ['pandeia_refdata'] = '/data/pandexo/pandeia_data-3.0rc3'\n",
"\n",
"# STEP 2: Stellar data\n",
"# SET YOUR OWN PATH to the stellar data \"/grp/hst/cdbs\"\n",
"# e.g. mine is \"/Users/nbatalh1/Documents/data/grp/hst/cdbs\"\n",
"os.environ['PYSYN_CDBS']= '/data/grp/hst/cdbs'\n",
"\n",
"# STEP 3: picaso_refdata came with your clone, so it should be in this folder already\n",
"os.environ['picaso_refdata'] = '/data/reference_data/picaso/reference'\n",
"# SET YOUR OWN PATH to 'opacities.db'\n",
"# e.g. mine is \"/Users/meialua/Documents/data/opacities.db\"\n",
"#picaso_opacity_db =\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import warnings\n",
"warnings.filterwarnings(\"ignore\")\n",
"\n",
"#python tools\n",
"import requests\n",
"import numpy as np\n",
"import pandas as pd\n",
"from bokeh.plotting import figure, show\n",
"import bokeh.palettes as color\n",
"from bokeh.io import output_notebook \n",
"from bokeh.layouts import row, column\n",
"output_notebook()\n",
"#double check that you have loaded BokehJS 2.1.1"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# What accuracy in planet properties do I need?"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Using `Exoplanet Archive API` to Query Confirmed Targets\n",
"\n",
"We can query the Exoplanet Archive to get a full list of confirmed targets. Once we have them, we will narrow the sample down to only those with suitable planet properties. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"planets_df = pd.read_csv('https://exoplanetarchive.ipac.caltech.edu/TAP/sync?query=select+*+from+PSCompPars&format=csv')"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# convert to float when possible\n",
"for i in planets_df.columns: \n",
" planets_df[i] = planets_df[i].astype(float,errors='ignore')"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"#voila\n",
"planets_df.head()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Trouble finding what you are looking for? "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"#easy way to search through dataframe with many keys\n",
"[i for i in planets_df.keys() if 'mass' in i]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Get rid of everything without a mass "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"#let's only grab targets with masses and errors \n",
"pl_mass_err = planets_df.loc[:,['pl_bmassjerr1', 'pl_bmassj']].astype(float)\n",
"pl_mass_err = pl_mass_err.dropna()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Percentage of planets in the NexSci database with measured masses and errors:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"100*pl_mass_err.shape[0]/planets_df.shape[0]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now let's get an idea of what those mass errors are "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"pl_mass_fig = figure(x_axis_type='log', width=550, height=400, x_axis_label='Mj',\n",
" y_axis_label='Mass Uncertainty (%)',y_range=[0,150],x_range=[1e-3, 10])\n",
"\n",
"pl_mass_fig.circle(pl_mass_err['pl_bmassj'], \n",
" 100*pl_mass_err['pl_bmassjerr1']/pl_mass_err['pl_bmassj'], \n",
" color='black', size=5, alpha=0.5)\n",
"\n",
"pl_mass_fig.outline_line_color='black'\n",
"pl_mass_fig.outline_line_width=3\n",
"pl_mass_fig.xgrid.grid_line_alpha=0\n",
"show(pl_mass_fig)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We only want targets with mass precision better than ~20%. \n",
"\n",
"Let's also keep only those targets brighter than J = 12."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"#mass error better than 20%\n",
"potential_props = pl_mass_err.loc[100*pl_mass_err['pl_bmassjerr1']/pl_mass_err['pl_bmassj']<20]\n",
"#grab the full rows for those well-constrained targets\n",
"potential_props = planets_df.loc[potential_props.index,:]\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"#brighter than 12\n",
"choose_from = potential_props.loc[potential_props['sy_jmag']<12]"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"choose_from.shape[0]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# What tools do I need? How do I use them?\n",
"\n",
"## Use `PandExo` to run initial constant $(R_p/R_*)^2$ to determine approx precision\n",
"\n",
"Let's choose a single planet GJ 436 b as our example to evaluate observability. \n",
"\n",
"**Should have already downloaded the data**: We won't go through the details of installation and default data. The `conda` environment here should have PandExo installed. In order for this to work you will need to download the [necessary default data](#download). You can also go to the [PandExo Installation Page](https://natashabatalha.github.io/PandExo/). "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"#check to make sure it's in our list of nicely constrained masses\n",
"choose_from.head()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# load pandexo\n",
"import pandexo.engine.justdoit as jdi\n",
"import pandexo.engine.justplotit as jpi\n",
"import pandexo.engine.bintools as bi"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"planet_name = 'GJ 436 b'\n",
"GJ436b = jdi.load_exo_dict(planet_name=planet_name)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"For this initial run, the only 2 parameters you need to edit are:\n",
"\n",
"1. Saturation level within the `observation` key: This will determine how many groups can fit within an integration. Let's keep this at 80% to be safe. \n",
"\n",
"2. Change planet type to constant so that we don't have to input an atmospheric model yet. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"GJ436b['planet']['type'] = 'constant'\n",
"GJ436b['planet']['f_unit'] = 'rp^2/r*^2'"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"GJ436b['observation']['sat_level'] = 80\n",
"GJ436b['observation']['sat_unit'] = '%'\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Next pick an initial set of instrument modes. I usually pick at least one from each of the 4 instruments (NIRISS, NIRSpec, NIRCam, and MIRI). \n",
"\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"to_run = ['NIRISS SOSS', 'NIRSpec G395M', 'NIRCam F444W','MIRI LRS']\n",
"result = jdi.run_pandexo(GJ436b, to_run)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### What is the approximate precision achieved by each observing mode in a single transit at native resolution?"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"prec_fig = figure(x_axis_type='log',y_range=[0,400],\n",
" height=300, x_axis_label='Wavelength(Micron)',\n",
" y_axis_label='Spectral Precision per pixel (ppm)')\n",
"\n",
"for i, inst in enumerate(to_run): \n",
" x = result[i][inst]['FinalSpectrum']['wave']\n",
" e = result[i][inst]['FinalSpectrum']['error_w_floor']*1e6 #PPM UNITS!\n",
" prec_fig.line(x,e, line_width=4, color = color.Colorblind6[i],legend_label=inst)\n",
"prec_fig.legend.location='top_left'\n",
"prec_fig.outline_line_color='black'\n",
"prec_fig.outline_line_width=3\n",
"prec_fig.xgrid.grid_line_alpha=0\n",
"prec_fig.ygrid.grid_line_alpha=0\n",
"show(prec_fig)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### What is the approximate precision achieved by each observing mode in a single transit at $R = 100$?\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from bokeh.io import reset_output\n",
"\n",
"prec_fig = figure(x_axis_type='log',y_range=[0,200], \n",
" height=300, x_axis_label='Wavelength(Micron)',\n",
" y_axis_label='Spectral Precision per R=100 bin (ppm)')\n",
"ES = {} # let's use this to store the values \n",
"XS = {}\n",
"for i, inst in enumerate(to_run): \n",
" if 'MIRI' not in inst:#LRS is already at around 100, so let's keep it at native R\n",
" \n",
" x,y,e = jpi.jwst_1d_spec(result[i][inst], plot=False, model=False, R=100)\n",
" \n",
" # if you check out your plot below you'll see the noise budget blows up at the \n",
" # detector edges, so I usually cut anything above 5x the minimum error to trim those points \n",
" x = x[0][np.where(e[0]<5*np.min(e[0]))] \n",
" e = e[0][np.where(e[0]<5*np.min(e[0]))]*1e6 #PPM UNITS!\n",
" \n",
" #plot\n",
" prec_fig.line(x,e, line_width=4, color = color.Colorblind6[i],legend_label=inst)\n",
"\n",
" else:\n",
" x = result[i][inst]['FinalSpectrum']['wave']\n",
" e = result[i][inst]['FinalSpectrum']['error_w_floor']\n",
" x = x[np.where(e<5*np.min(e))]\n",
" e = e[np.where(e<5*np.min(e))]*1e6 #PPM UNITS!\n",
"\n",
" prec_fig.line(x,e, line_width=4, color = color.Colorblind6[i],legend_label=inst)\n",
" \n",
" #lets save these for later\n",
" XS[inst] = x\n",
" ES[inst] = e\n",
"prec_fig.legend.location='top_left'\n",
"prec_fig.outline_line_color='black'\n",
"prec_fig.outline_line_width=3\n",
"prec_fig.xgrid.grid_line_alpha=0\n",
"prec_fig.ygrid.grid_line_alpha=0\n",
"reset_output()\n",
"output_notebook()\n",
"show(prec_fig)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Use `PICASO` to determine first guess atmospheric transmission signal\n",
"\n",
"From the `PandExo` exercise we've learned we can achieve approximately 30-50 ppm precision in a single transit. This is a great place to start. Now, we want to get a sense of _what this data precision gives us, scientifically speaking_. \n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"A note on black-box codes: there are many model parameters I am sweeping under the rug for the purposes of a clean and concise demo. With that cautionary note, we can proceed. "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Unfortunately NexSci doesn't always have all the necessary planet properties; if you get an error below, that is likely why. For GJ 436 b, NexSci is missing two values: stellar Fe/H and stellar effective temperature. I have added those two as `kwargs`, which I found on [Exo.MAST](https://exo.mast.stsci.edu/)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# load picaso\n",
"import picaso.justdoit as pj\n",
"import picaso.justplotit as pp"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"#load opacities \n",
"opas = pj.opannection(wave_range=[1,12]) #defined in code 1\n",
"\n",
"#load planet in the same way as before\n",
"gj436_trans = pj.load_planet(choose_from.loc[choose_from['pl_name']==planet_name],\n",
" opas,\n",
" pl_eqt=667,st_metfe = 0.02, st_teff=3479)\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### The simplest, most optimistic case: Cloud free, Solar Metallicity, Solar C/O \n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"#run picaso \n",
"log_mh = 0 #this is relative to solar, so 10**log_mh = 1xsolar\n",
"co = 1 # this is relative to solar, so 1xsolar\n",
"gj436_trans.chemeq_visscher(co,log_mh)\n",
"\n",
"df_picaso = gj436_trans.spectrum(opas, calculation='transmission', full_output=True)\n",
"wno,cloud_free = pj.mean_regrid(df_picaso['wavenumber'], df_picaso['transit_depth'] , R=150)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"spec = figure(x_axis_type='log', y_axis_label='Relative Transit Depth',\n",
" x_axis_label='Wavelength(micron)',\n",
" height=300)\n",
"\n",
"#mean subtracting your spectrum will help you understand your spectrum in the context of \n",
"#your noise simulations above. Here I am choosing to normalize to 1 micron. \n",
"cf_norm = 1e6*(cloud_free - cloud_free[np.argmin(np.abs(1e4/wno-1))]) \n",
"\n",
"spec.line(1e4/wno, cf_norm , color='black',line_width=3)\n",
"\n",
"for i, inst in enumerate(to_run):\n",
" fake_y =np.interp( XS[inst], 1e4/wno[::-1],cf_norm[::-1] )\n",
" spec.varea(x=XS[inst],\n",
" y1=fake_y + ES[inst],\n",
" y2=fake_y - ES[inst] , alpha=0.7, color=color.Colorblind6[i])\n",
"show(spec)\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This looks GREAT, but is highly suspect. A water feature amplitude of ~200 ppm at 1.4 $\\mu$m means we could've easily detected it with _Hubble_. Was that the case? For this particular planet, we can check `Exo.MAST` to see if there is a transmission spectrum available. \n",
"\n",
"## Check `Exo.MAST` for available data so we can validate our assumptions\n",
"\n",
"First, let's check to see if a transmission spectrum exists. We can use `Exo.MAST`! \n",
"\n",
"\n",
"\n",
"__TO DO:__\n",
"\n",
"Navigate to `Exo.MAST`, search for GJ 436 b (or your planet), and you should see a tab on the figure for \"transmission spectra\". __Download the following file__."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"IRL = pd.read_csv('/data2/observations/GJ-436b/HST/GJ436b_transmission_Knutson2014.txt', skiprows=8, \n",
" names = ['micron','dmicron','tdep','etdep'], delim_whitespace=True)\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let's compare the real data to our optimistic `1xSolar` cloud-free case. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"#as we go through this, let's save our models in a dictionary\n",
"all_models = {}\n",
"spec = figure(x_axis_type='log', y_axis_label='Relative Transit Depth',\n",
" x_axis_label='Wavelength(micron)',\n",
" height=300)\n",
"\n",
"#normalizing your spectrum will help you understand your spectrum in the context of \n",
"#your noise simulations above\n",
"cf_norm = 1e6*(cloud_free - cloud_free[np.argmin(np.abs(1e4/wno-1))])\n",
"all_models['1x'] = cf_norm\n",
"all_models['um'] = 1e4/wno\n",
"spec.line(1e4/wno, cf_norm , color='black',line_width=3)\n",
"\n",
"#again mean subtract the value at 1 micron so they line up, and convert to ppm\n",
"y =1e6*( IRL['tdep'] - IRL.iloc[(IRL['micron']-1).abs().argsort()[0],2])\n",
"\n",
"#plot the data (mean-subtracted depths with ppm errors)\n",
"pp.plot_errorbar( IRL['micron'], y \n",
" , 1e6*IRL['etdep'] , spec,\n",
" #formatting\n",
" point_kwargs={'size':5,'color':'black'}, error_kwargs={'line_width':1,'color':'black'})\n",
"\n",
"for i, inst in enumerate(to_run):\n",
" fake_y =np.interp( XS[inst], 1e4/wno[::-1],cf_norm[::-1] )\n",
" spec.varea(x=XS[inst],\n",
" y1=fake_y+ ES[inst],\n",
" y2=fake_y - ES[inst] , alpha=0.7, color=color.Colorblind6[i])\n",
"show(spec)\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The model is way off. \n",
"\n",
"### Proposals with 1xSolar cloud free claims are *not realistic*. What can we do to bolster our argument? \n",
"\n",
"The data precision required to rule out a 1xSolar cloud-free model is not very high. Therefore, if one bases their total observing time on a 1xSolar model, they would be largely _under-asking_ for time. And a TAC member might call out your proposal for having _over-simplified_ your model results. \n",
"\n",
"Let's see what we can do to rectify this. \n",
"\n",
"### Try following the Mass-M/H plot to get a more \"realistic\" prediction of what the M/H should be \n",
"\n",
"\n",
"\n",
"*(Mass-metallicity trend figure from Wakeford & Dalba 2020)*\n",
"\n",
"In this case we have an estimate for GJ 436b already. But, let's pretend we don't have that yet :)\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"#run picaso \n",
"log_mh = 2 #this is relative to solar, so 10**log_mh = 100xsolar\n",
"co = 1 # this is relative to solar, so 1xsolar\n",
"gj436_trans.chemeq_visscher(co,log_mh)\n",
"\n",
"df_picaso = gj436_trans.spectrum(opas, calculation='transmission', full_output=True)\n",
"wno,cloud_free = pj.mean_regrid(df_picaso['wavenumber'], df_picaso['transit_depth'] , R=150)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Repeat the last exercise"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"#as we go through this, let's save our models\n",
"spec = figure(x_axis_type='log', y_axis_label='Relative Transit Depth',\n",
" x_axis_label='Wavelength(micron)',\n",
" height=300)\n",
"\n",
"#normalizing your spectrum will help you understand your spectrum in the context of \n",
"#your noise simulations above\n",
"cf_norm = 1e6*(cloud_free - cloud_free[np.argmin(np.abs(1e4/wno-1))])\n",
"\n",
"all_models['100x'] = cf_norm\n",
"spec.line(1e4/wno, cf_norm , color='black',line_width=3, legend_label='100xSolar')\n",
"spec.line(1e4/wno, all_models['1x'] , color='grey',line_width=3, legend_label='1xSolar') \n",
"\n",
"#again mean subtract the value at 1 micron so they line up, and convert to ppm \n",
"\n",
"y =1e6*( IRL['tdep'] - IRL.iloc[(IRL['micron']-1).abs().argsort()[0],2])\n",
"\n",
"#plot the data (mean-subtracted depths with ppm errors)\n",
"pp.plot_errorbar(IRL['micron'], y ,\n",
" 1e6*IRL['etdep'] , spec, \n",
" #formatting\n",
" point_kwargs={'size':5,'color':'black'}, error_kwargs={'line_width':3,'color':'black'})\n",
"\n",
"for i, inst in enumerate(to_run):\n",
" fake_y =np.interp( XS[inst], 1e4/wno[::-1],cf_norm[::-1] )\n",
" spec.varea(x=XS[inst],\n",
" y1=fake_y+ ES[inst],\n",
" y2=fake_y - ES[inst] , alpha=0.7, color=color.Colorblind6[i])\n",
"\n",
"spec.legend.location='top_left'\n",
"show(spec)\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Looks much better already. Unfortunately we aren't done quite yet. \n",
"\n",
"Let's revisit our first hierarchical question: \n",
"\n",
"1. Can you detect an atmosphere? \n",
"\n",
"The answer seems obvious: **Of course!!!** But a few important questions must be addressed: \n",
"\n",
" - At what level of significance? \n",
" - What about clouds?\n",
" - And other physical parameters? Temperature? C to O ratio? \n",
"\n",
"We will visit these questions in [Section 3: How can I prove observability](#section3).\n",
"\n",
"But first, let's run some first guess emission cases. \n",
"\n",
"## Use `PICASO` to determine first guess atmospheric emission signal\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"planet_name = 'GJ 436 b'"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"#load opacities \n",
"opas = pj.opannection(wave_range=[1,12]) #defined in code 1\n",
"\n",
"#load planet in the same way as before\n",
"gj436_emis = pj.load_planet(choose_from.loc[choose_from['pl_name']==planet_name],\n",
" opas,\n",
" pl_eqt=667,st_metfe = 0.02, st_teff=3479)\n",
"\n",
"# right now picaso and chimera are using a PT profile parameterization \n",
"# I've made sure they are the same for the purposes of this demo\n",
"T = gj436_emis.inputs['atmosphere']['profile']['temperature'].values\n",
"P = gj436_emis.inputs['atmosphere']['profile']['pressure'].values"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### The simplest case: Cloud free, Solar Metallicity, Solar C/O \n",
"\n",
"\n",
"For this demo we are going to use the same chemistry as chimera; the `chemeq_visscher` call below grabs the equilibrium chemistry for us."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"all_emis = {}\n",
"\n",
"# let's get the same chemistry as chimera for consistency \n",
"logMH = 0 #log relative to solar\n",
"CO = 1 #relative to solar\n",
"\n",
"#add in chemistry to picaso bundle\n",
"gj436_emis.chemeq_visscher(CO, logMH)\n",
"\n",
"#run picaso \n",
"df_picaso = gj436_emis.spectrum(opas, calculation='thermal', full_output=True)\n",
"wno, fpfs = pj.mean_regrid(df_picaso['wavenumber'], df_picaso['fpfs_thermal'] , R=150)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Plot with our error bars from `PandExo` "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"spec = figure(x_axis_type='log', y_axis_label='Eclipse Spectrum (Fplanet/Fstar)',\n",
" x_axis_label='Wavelength(micron)',\n",
" height=300)\n",
"\n",
"all_emis['1xSolar'] = fpfs*1e6\n",
"spec.line(1e4/wno, fpfs*1e6 , color='black',line_width=3)\n",
"\n",
"for i, inst in enumerate(to_run):\n",
" fake_y =np.interp( XS[inst], 1e4/wno[::-1],fpfs[::-1]*1e6 )\n",
" spec.varea(x=XS[inst],\n",
" y1=fake_y + ES[inst],\n",
" y2=fake_y - ES[inst] , alpha=0.7, color=color.Colorblind6[i])\n",
"show(spec)\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Wahoo! Since we don't have similar transmission data for emission spectroscopy, let's use the 100xSolar guess that we settled on for transmission to see the difference. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"logMH = 2 #log relative to solar\n",
"CO = 1 #relative to solar\n",
"\n",
"#add in chemistry to picaso bundle\n",
"gj436_emis.chemeq_visscher(CO, logMH)\n",
"\n",
"#run picaso \n",
"df_picaso = gj436_emis.spectrum(opas, calculation='thermal', full_output=True)\n",
"wno, fpfs = pj.mean_regrid(df_picaso['wavenumber'], df_picaso['fpfs_thermal'] , R=150)\n",
"\n",
"spec = figure(x_axis_type='log', y_axis_label='Eclipse Spectrum (Fplanet/Fstar)',\n",
" x_axis_label='Wavelength(micron)',\n",
" height=300)\n",
"\n",
"spec.line(1e4/wno, all_emis['1xSolar'], color='grey',line_width=3, legend_label='1xSolar')\n",
"\n",
"all_emis['100xSolar'] = fpfs*1e6\n",
"spec.line(1e4/wno, fpfs*1e6 , color='black',line_width=3, legend_label='100xSolar')\n",
"\n",
"for i, inst in enumerate(to_run):\n",
" fake_y =np.interp( XS[inst], 1e4/wno[::-1],fpfs[::-1]*1e6 )\n",
" spec.varea(x=XS[inst],\n",
" y1=fake_y + ES[inst],\n",
" y2=fake_y - ES[inst] , alpha=0.7, color=color.Colorblind6[i])\n",
"\n",
"spec.legend.location='top_left'\n",
"show(spec)\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### My first guess transmission and/or emission signals look promising. What's next?\n",
"\n",
"Based on our initial 1x and 100xSolar models of transmission and emission, our simulations look great. So what is next? \n",
"\n",
"We have to take our analysis one step further to determine at what level of significance we can make these detections. "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
"# How can I \"prove\" observability? \n",
"\n",
"The classic hierarchical science levels are:\n",
"\n",
"1. Can you detect an atmosphere?\n",
"2. Can you detect a specific molecule? \n",
"3. Can you detect another physical process? \n",
"4. Can you _constrain_ a molecular abundance? A metallicity? A climate profile? An aerosol species? \n",
"\n",
"This section will walk you through the steps to determine where you stand in this hierarchy. \n",
"\n",
"## Can an atmosphere be detected: Addressing cloud concerns and quantifying statistical significance in transmission\n",
"\n",
"First, let's expand our initial 1xSolar M/H case. Let's add a few cloud cases (e.g. no cloud, medium cloud, high cloud) and a few M/H cases, and first show how these relate to the expected data precision. Then we will discuss how to use these cases to address our ability to detect (or not) each one. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"logMHs = [0.5,1.0,1.5,2.0]\n",
"\n",
"cloud_bottom = 1 #10 bars \n",
"cloud_thickness = [2,3,4,5] #log thickness of the cloud deck in pressure space\n",
"\n",
"all_models_trans = {i:{} for i in logMHs}\n",
"fig = figure(height= 300, \n",
" x_axis_label='Wavelength(micron)', \n",
" y_axis_label='Relative Transit Depth (PPM)', x_axis_type='log')\n",
"colors = color.viridis(len(logMHs)*len(cloud_thickness))\n",
"\n",
"ic = 0 \n",
"for logMH in logMHs:\n",
" for log_dp in cloud_thickness:\n",
" \n",
" CO = 1 #solar carbon to oxygen ratio \n",
" gj436_trans.chemeq_visscher(CO, logMH)\n",
" \n",
" #add in the cloud \n",
" gj436_trans.clouds(g0=[0.9], w0=[0.9], opd=[10], p=[cloud_bottom], dp=[log_dp])\n",
" \n",
" #run picaso \n",
" df_picaso = gj436_trans.spectrum(opas, calculation='transmission', full_output=True)\n",
" wno, model = pj.mean_regrid(df_picaso['wavenumber'], df_picaso['transit_depth'] , R=150)\n",
" \n",
" all_models_trans[logMH][log_dp] = 1e6*((model)- model[np.argmin(np.abs(1e4/wno-1))])\n",
" \n",
" fig.line(1e4/wno, all_models_trans[logMH][log_dp] , color=colors[ic], line_width=3, \n",
" alpha=0.7,legend_label=str(logMH)+str(log_dp))\n",
" ic +=1\n",
"show(fig)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### \"Can an atmosphere be detected in transmission\" usually translates to \"can a $y=mx+b$ model be rejected\" "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"fig = [figure(title = i, height= 250, width=300, x_axis_label='log Metallicity', \n",
" y_axis_label='log Cloud Cross Section', #y_range=[0,4],x_range=[0,2.5]\n",
" ) for i in to_run]\n",
"\n",
"#for our line model \n",
"from scipy.optimize import curve_fit\n",
"from scipy.stats.distributions import chi2\n",
"def line(x, a, b):\n",
" return a * x + b\n",
"\n",
"all_ps = {i:[] for i in to_run}\n",
"for i, inst in enumerate(to_run): \n",
" for MH in logMHs:\n",
" for CLD in cloud_thickness:\n",
" y = all_models_trans[MH][CLD]\n",
" #wno is at lower res than our observation so we have to finagle them onto the same axis \n",
" binned_obs = bi.binning(XS[inst], XS[inst]*0, dy =ES[inst] ,newx=1e4/wno[::-1])\n",
" binned_model = pp.mean_regrid(wno, y, newx=1e4/binned_obs['bin_x'][::-1])[1]\n",
" \n",
" #add random noise to your binned model which is now our \"fake data\" \n",
" fake_data = binned_model + np.random.randn(len(binned_obs['bin_dy']))*binned_obs['bin_dy']\n",
" \n",
" #step 1) fit a line through each model \n",
" popt, pcov = curve_fit(line, binned_obs['bin_x'], fake_data, sigma =binned_obs['bin_dy'])\n",
" \n",
" #now we have:\n",
" y_measured = fake_data \n",
" y_error = binned_obs['bin_dy']\n",
" y_line_model = line(binned_obs['bin_x'], popt[0], popt[1])\n",
" # .. and we can use tradition chi square p values to do some hypothesis testing\n",
" \n",
" #compute chi square of each model \n",
" line_chi = np.sum((y_measured-y_line_model)**2/y_error**2)\n",
" p = chi2.sf(line_chi, len(y_measured) - 2)#2 because there are 2 free model params\n",
" all_ps[inst] += [[MH, CLD, p]]\n",
" \n",
" #some crafty finagling to get a good color scheme \n",
" if p>0.05: \n",
" col = color.RdBu3[-1] #red if the line cannot be rejected\n",
" legend = 'Cannot Reject'\n",
" elif ((p<=0.05) & (p>=0.006)): \n",
" col = color.RdBu3[1] #white for weak/medium rejection\n",
" legend= 'Weak/Med Reject'\n",
" else: \n",
" col = color.RdBu3[0] #blue for strong rejection of the line\n",
" legend='Strong Reject'\n",
" #hacked \"heat map\"\n",
" fig[i].square(MH, CLD, color = col, size=30,line_color='black',legend_label=legend)\n",
"\n",
"show(column(row(fig[0:2]), row(fig[2:4]))) \n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"From these figures we can immediately come to a few conclusions: \n",
"\n",
"1. NIRISS is most strongly impacted by clouds and has the least potential to \"detect an atmosphere\" (e.g. reject a flat-line hypothesis)\n",
"2. NIRSpec and NIRCam are best at rejecting the flat line \n",
"3. MIRI LRS struggles to reject the flat-line hypothesis\n",
"\n",
"You can _always_ create scenarios where an atmosphere is not detectable. For your proposal, it's only important to justify the level at which an atmosphere could be robustly detected. "
]
},
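{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Optional: a quick, convenience-only sketch to tabulate the p-values we\n",
"# stored in `all_ps` above, so you can quote exact rejection significances\n",
"# in a proposal. Uses pandas, which is already imported.\n",
"ptable = pd.concat({inst: pd.DataFrame(all_ps[inst], columns=['logMH', 'log_dp', 'p'])\n",
"                    for inst in to_run})\n",
"ptable.sort_values('p').head()"
]
},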
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Can an atmosphere be detected: Addressing unknown climate and quantifying statistical significance in emission\n",
"\n",
"First, let's expand our initial 1xSolar M/H case. In transmission we focused on M/H and cloud cross-sectional strength. Although clouds will have an ultimate effect on the emission spectrum, the first-order effect is that of the pressure-temperature profile. So here, instead of running through M/H-cloud parameter space, we are going to run through a few M/H-temperature cases. \n",
"\n",
"**Theorists Caution:** To simplify this exercise we are parameterizing our pressure-temperature profiles. We urge groups to talk to their local theorist about the validity of this hack for your specific problem."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"logMHs = [0.5,1.0,1.5,2.0]\n",
"DTs = [-100,0,100] #these are delta Kelvin amounts that we will use to perturb our PT profile\n",
"\n",
"all_models_emis = {i:{} for i in logMHs}\n",
"fig = figure(height= 300, \n",
" x_axis_label='Wavelength(micron)', \n",
" y_axis_label='Fp/Fs (PPM)', x_axis_type='log')\n",
"colors = color.viridis(len(logMHs)*len(DTs))\n",
"\n",
"ic = 0 \n",
"for MH in logMHs:\n",
" for DT in DTs:\n",
" CO = 1 #solar carbon to oxygen ratio \n",
"\n",
" #add in chemistry to picaso bundle T+DT, P\n",
" \n",
" gj436_emis.inputs['atmosphere']['profile']['temperature'] = T + DT\n",
" gj436_emis.chemeq_visscher(CO,MH)\n",
"\n",
" #run picaso \n",
" df_picaso = gj436_emis.spectrum(opas, calculation='thermal', full_output=True)\n",
" wno_pic, fpfs = pj.mean_regrid(df_picaso['wavenumber'], df_picaso['fpfs_thermal'] , R=150) \n",
" #let's also save raw thermal flux (you'll see why next)\n",
" wno_pic, thermal = pj.mean_regrid(df_picaso['wavenumber'], df_picaso['thermal'] , R=150) \n",
" \n",
" all_models_emis[MH][DT] = [fpfs*1e6,thermal]\n",
" \n",
" fig.line(1e4/wno_pic, all_models_emis[MH][DT][0] , color=colors[ic], line_width=3, \n",
" alpha=0.7,legend_label=str(MH)+str(DT))\n",
" ic +=1\n",
"fig.legend.location='top_left' \n",
"show(fig)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### \"Can an atmosphere be detected in emission\" usually translates to \"can a blackbody model be rejected\" "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"fig = [figure(title = i, height= 250, width=300, x_axis_label='log Metallicity', \n",
" y_axis_label='Delta T Perturbation', \n",
" ) for i in to_run]\n",
"\n",
"\n",
"all_ps = {i:[] for i in to_run}\n",
"\n",
"for i, inst in enumerate(to_run): \n",
" for MH in logMHs:\n",
" for DT in DTs:\n",
" y_fpfs = all_models_emis[MH][DT][0]\n",
" y_fp = all_models_emis[MH][DT][1]\n",
" #wno is at lower res than our observation so we have to finagle them onto the same axis \n",
" binned_obs = bi.binning(XS[inst], XS[inst]*0, dy =ES[inst] ,newx=1e4/wno_pic[::-1])\n",
" #first get the binned Fp/Fs\n",
" binned_y_fpfs = pp.mean_regrid(wno_pic, y_fpfs, newx=1e4/binned_obs['bin_x'][::-1])[1]\n",
"\n",
" #how just the Fp (since we will be replacing this with a blackbody)\n",
" binned_y_fp = pp.mean_regrid(wno_pic, y_fp, newx=1e4/binned_obs['bin_x'][::-1])[1]\n",
"\n",
" #add random noise to your binned model which is now our \"fake data\" \n",
" fake_data = binned_y_fpfs + np.random.randn(len(binned_obs['bin_dy']))*binned_obs['bin_dy']\n",
" \n",
" #just the star! \n",
" fstar_component = binned_y_fpfs / binned_y_fp \n",
"\n",
" #blackbody function to fit for \n",
" def blackbody(x, t):\n",
" #x is wave in micron , so convert to cm\n",
" x = 1e-4*x\n",
" h = 6.62607004e-27 # erg s \n",
" c = 2.99792458e+10 # cm/s\n",
" k = 1.38064852e-16 #erg / K\n",
" flux_planet = np.pi * ((2.0*h*c**2.0)/(x**5.0))*(1.0/(np.exp((h*c)/(t*x*k)) - 1.0)) \n",
" fpfs = flux_planet *fstar_component\n",
" return fpfs\n",
" \n",
" #step 1) fit a line through each model \n",
" popt, pcov = curve_fit(blackbody, binned_obs['bin_x'], fake_data,p0=[100], sigma =binned_obs['bin_dy'])\n",
"\n",
" #now we have:\n",
" y_measured = fake_data \n",
" y_error = binned_obs['bin_dy']\n",
" y_bb_model = blackbody(binned_obs['bin_x'], popt[0])\n",
" # .. and we can use tradition chi square p values to do some hypothesis testing\n",
" \n",
" #compute chi square of each model \n",
" line_chi = np.sum((y_measured-y_bb_model)**2/y_error**2)\n",
" p = chi2.sf(line_chi, len(y_measured) - 1)#1 because we have t as the model param\n",
" all_ps[inst] += [[MH, DT, p]]\n",
" \n",
" #some crafy finagling to get a good color scheme \n",
" if p>0.05: \n",
" col = color.RdBu3[-1] #red if it cant be rejected\n",
" legend = 'Cannot Reject'\n",
" elif ((p<=0.05) & (p>=0.006)): \n",
" col = color.RdBu3[1] #white for medium/low rection\n",
" legend= 'Weak/Med Reject'\n",
" else: \n",
" col = color.RdBu3[0] #blue is strong rejection of line\n",
" legend='Strong Favor'\n",
"\n",
" #hacked heatmap\n",
" fig[i].square(MH, DT, color = col, size=30,line_color='black',legend_label=legend) \n",
" \n",
"show(column(row(fig[0:2]), row(fig[2:4]))) "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"From these figures we can immediately come to a few conclusions \n",
"\n",
"1. NIRISS doesn't cover adequate wavelengths to capture our planet's emission spectrum\n",
"2. NIRSpec and NIRCam are best at rejecting the blackbody \n",
"3. MIRI LRS is unable to differentiate high metallicity cases from a blackbody. This is a surprising result as you might suspect MIRI would always be best at thermal emission. Remember that MIRI precision was a bit higher in our first PandExo exercises. \n",
"\n",
"### Revisiting the 4 questions\n",
"\n",
"Now that we've gone through our first observability test, we can revisit our 4 questions. \n",
"\n",
"1. Can you detect an atmosphere? **Yes!** And we've provided rigorous tests to determine the relevant parameter space. \n",
"2. Can you detect a specific molecule? \n",
"3. Can you detect another physical process? \n",
"4. Can you _constrain_ a molecular abundance? A metallicity? A climate profile? An aerosol species? \n",
"\n",
"## Can a specific molecule be detected?\n",
"\n",
"There are a few ways to tackle this question. We are going to determine if it is possible to solely detect CH4 in a 100xSolar model. The easiest way to do is to simply remove the opacity contribution from the molecule in question. \n",
"\n",
"**Theorists Caution**: You can always expand this to determine molecule detectability with different M/H, clouds, temperatures, etc. \n",
"\n",
"### Detecting CH4 Molecules in Transmission versus Emission"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"logMH = np.log10(100) #Solar metallicity taken by eye balling the Solar System fit #science\n",
"CO = 1 #solar carbon to oxygen ratio \n",
"gj436_trans= pj.load_planet(choose_from.loc[choose_from['pl_name']==planet_name],\n",
" opas,\n",
" pl_eqt=667,st_metfe = 0.02, st_teff=3479)\n",
"gj436_trans.chemeq_visscher(CO, logMH)\n",
"\n",
"#now we can remove the abundance of CH4 using exclude mol \n",
"df = jdi.copy.deepcopy(gj436_trans.inputs['atmosphere']['profile'])\n",
"\n",
"gj436_trans.atmosphere(df=gj436_trans.inputs['atmosphere']['profile'], exclude_mol='CH4')\n",
"out = gj436_trans.spectrum(opas\n",
" ,calculation='transmission', full_output=True)\n",
"\n",
"wno, trans_no_ch4 = pp.mean_regrid(out['wavenumber'],out['transit_depth'],R=150)\n",
"\n",
"trans_no_ch4 = 1e6*((trans_no_ch4)- trans_no_ch4[np.argmin(np.abs(1e4/wno-1))])\n",
"\n",
"figt = figure(height= 250, width=600,\n",
" x_axis_label='Wavelength(micron)', \n",
" y_axis_label='Relative Transit Depth (PPM)', x_axis_type='log')\n",
"figt.line(1e4/wno, trans_no_ch4, line_width=3, color='pink',\n",
" legend_label='Remove CH4')\n",
"figt.line(1e4/wno, all_models_trans[2.0][2], line_width=3, color='black',\n",
" legend_label='Full Model')\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now (similar to our line analysis) we determine if the chi-square of our no-CH4 model "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"out = gj436_trans.spectrum(opas\n",
" ,calculation='thermal', full_output=True)\n",
"wno_pic, fpfs_no_CH4 = pp.mean_regrid(out['wavenumber'],out['fpfs_thermal'],R=150)\n",
"\n",
"fige = figure(height= 250,width=600, \n",
" x_axis_label='Wavelength(micron)', \n",
" y_axis_label='Relative Transit Depth (PPM)', x_axis_type='log')\n",
"fige.line(1e4/wno_pic, 1e6*fpfs_no_CH4, line_width=3, color='pink',\n",
" legend_label='Remove CH4')\n",
"fige.line(1e4/wno_pic, all_models_emis[2.0][0][0], line_width=3, color='black',\n",
" legend_label='Full Model')\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"show(column(figt, fige))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"To attach statistical significance to how well CH4 can be detected in 100xSolar model, one can now repeat the line/blackbody analysis to compute and compare the $\\chi^2$ of the \"removed CH4\" model against the \"fake data\" -- in this case, the fake data would include the contribution from CH4. If the \"removed CH4\" model cannot be strongly rejected then that molecule cannot be detected. \n",
"\n",
"### Revisiting the 4 questions\n",
"\n",
"1. Can you detect an atmosphere? **Yes!** And we've provided rigorous tests to determine the relevant parameter space. \n",
"2. Can you detect a specific molecule? **Yes!** \n",
"3. Can you detect another physical process? \n",
"4. Can you _constrain_ a molecular abundance? A metallicity? A climate profile? An aerosol species? \n",
"\n",
"Question (3) can be approached in an identical manner to that of question (2). One interesting analysis would be to determine if a temperature inversion is detectable in emission. "
]
},
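{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a minimal sketch of that repeat analysis (the arrays here are illustrative stand-ins for the binned no-CH4 model, the full-model fake data, and the per-bin uncertainties computed above):\n",
"\n",
"```python\n",
"import numpy as np\n",
"from scipy.stats import chi2\n",
"\n",
"rng = np.random.default_rng(42)\n",
"n_bins = 50\n",
"y_full = np.linspace(300, 500, n_bins)  # binned full-model Fp/Fs (ppm), stand-in\n",
"y_no_ch4 = y_full - 60*np.exp(-(np.arange(n_bins)-25)**2/20)  # CH4 band removed\n",
"y_err = np.full(n_bins, 20.0)  # per-bin uncertainty (ppm)\n",
"\n",
"# fake data drawn from the full (CH4-bearing) model\n",
"fake_data = y_full + rng.standard_normal(n_bins)*y_err\n",
"\n",
"# chi-square of the no-CH4 model against the fake data\n",
"chi_sq = np.sum((fake_data - y_no_ch4)**2 / y_err**2)\n",
"p = chi2.sf(chi_sq, n_bins)  # no fitted parameters in this comparison\n",
"# small p => the no-CH4 model is rejected => CH4 is detectable\n",
"```\n"
]
},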
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Can any physical parameters be constrained? Information content theory for initial constraint estimates\n",
"\n",
"The content below was created from the methodology described in Batalha & Line (2017) and Batalha et al. 2018. This methodology **should not** be used to replace retrievals as it cannot capture non-Gaussian posteriors (e.g. degenerate solutions, upper limits). However, it does offer a first order look at how well parameters can be constrained.\n",
"\n",
"### IC Theory Terminology \n",
"\n",
"- **model, F(x)**: In this case the models we are looking at are spectral models. In particular, `CHIMERA` and `PICASO` produce:\n",
"\n",
" $F(x)$ = $(\\frac{R_p}{R_s})_\\lambda^2$ or $(\\frac{F_p}{F_s})_\\lambda^2$\n",
" \n",
" \n",
"- **state vector**: this is the **x** in **F(x)**. Usually there are dozens of values that go into this vector: e.g. planet radius, stellar radius, cloud properties, opacities, chemical abundances. In **Batalha & Line 2017** we made the simple assumption that: \n",
"\n",
" $x = [T_{iso}, log C/O, log M/H, \\times R_p]$\n",
"\n",
"Of course, this is an over simplification, but usually these are the properties we are interested in retrieving. I would encourage the user to start simple like this and expand from there. "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Computing Jacobians via finite differencing \n",
"\n",
"**Jacobians** (K) describe how sensitive the mode is to slight perturbations in each state vector $x_a$ parameter at each wavelength place. Given our model above the **Jacobian** can be computed via:\n",
"\n",
"\n",
"$K_{ij} = \\frac{\\partial F_i(\\mathbf{x})}{\\partial x_j} = \\begin{bmatrix}\n",
" \\partial F_{\\lambda,1}/ \\partial T & \\partial F_{\\lambda,1}/ \\partial logC/O & \\partial F_{\\lambda,1}/ \\partial logM/H & \\dots & \\dots \\\\\n",
" \\partial F_{\\lambda,2}/ \\partial T & \\partial F_{\\lambda,2}/ \\partial logC/O & \\dots & \\dots & \\dots \\\\\n",
" \\dots & \\dots & \\dots & \\dots & \\dots \\\\\n",
" \\partial F_{\\lambda,i}/ \\partial T & \\dots & \\dots & \\dots & \\partial F_{\\lambda,m}/ \\partial x_j \n",
"\\end{bmatrix}$\n",
"\n",
"\n",
"Since we can't compute these partial derivatives analytically, we must do it numerically. Let's use a simple **center finite differencing** method. Going back to calculus, we have to calculate our partial derivatives about one state vector. Which, makes sense intuitively. You should expect your model to behave differently for hot temperature, high C/O planets than for low temperature, low C/O planets. \n",
"\n",
"For example, let's say our center point is: \n",
"\n",
"$x_0 = [T_{0}, log C/O_0, log M/H_0, xR_p]$ = [500, -.26, 0,1]\n",
"\n",
"$\\frac{\\partial F_{\\lambda,1}}{ \\partial T }$ = $\\frac{F([T_{iso,+1}, log C/O_0, log M/H_0, \\times R_{p0}]) - F([T_{iso,-1}, log C/O_0, log M/H_0, \\times R_{p0}])}{T_{+1} - T_{-1}}$ \n",
"\n",
"Where $T_{+1}$ and $T_{-1}$ are usually computed around $T_{-1} = T_{0} - (0.001*T_{-1})$\n",
"\n",
"Because these jacobians are so specific to the $x_0$ about which they are computed. It's important to do your analysis for a wide range in temperatures, C/Os, M/Hs or whatever else you are interested in"
]
},
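{
"cell_type": "markdown",
"metadata": {},
"source": [
"A quick numerical sketch of this central finite difference, using a toy model in place of a full PICASO spectrum (the model function and state vector below are purely illustrative):\n",
"\n",
"```python\n",
"import numpy as np\n",
"\n",
"def model(x):\n",
"    # toy spectral model F(x), with x = [T, logC/O, logM/H, xRp]\n",
"    T, logCO, logMH, xRp = x\n",
"    wave = np.linspace(1, 12, 100)  # micron\n",
"    return xRp**2 * (1e-4*T + 0.01*logMH*np.sin(wave) + 0.005*logCO*wave)\n",
"\n",
"x0 = np.array([500.0, -0.26, 0.0, 1.0])  # center state vector\n",
"frac = 1e-3  # 0.1% perturbation\n",
"\n",
"K = np.zeros((100, len(x0)))  # Jacobian: n_wavelength x n_parameters\n",
"for j in range(len(x0)):\n",
"    dx = frac*abs(x0[j]) if x0[j] != 0 else frac\n",
"    xp, xm = x0.copy(), x0.copy()\n",
"    xp[j] += dx\n",
"    xm[j] -= dx\n",
"    K[:, j] = (model(xp) - model(xm)) / (2*dx)  # central difference\n",
"```\n"
]
},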
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Compute IC and other useful stats\n",
"\n",
"- **Posterior Covaraince**, $\\mathbf{\\hat{S}}$, describes uncertanties and correlations of the atmospheric state vector after the measurement is made\n",
"\n",
"$\\mathbf{\\hat{S}} = (\\mathbf{K^{T}S_e^{-1}K} + \\mathbf{S_a^{-1}})^{-1}$\n",
"\n",
"- **Gain**, $G$, describes the sensitivity of the retrieval to the observation (e.g. if $G=0$ the measurements contribute no new information \n",
"\n",
"$\\mathbf{G} = \\mathbf{\\hat{S}}\\mathbf{K^{T}S_e^{-1}}$ \n",
"\n",
"- **Averaging Kernel**, $A$, tells us which of the parameters have the greatest impact on the retrieval \n",
"\n",
"$\\mathbf{A} = \\mathbf{GK}$\n",
"\n",
"- **Degrees of Freedom**, is the sum of the diagonal elements of **A** tells us how many independent parameters can be determined from the observations \n",
"\n",
"- **Total information content**, $H$, is measured in bits and describes how our state of knowledge has increased due to a measurement, relative to the prior. This quantity is best used to compare and contrast modes. \n",
"\n",
"$H = \\frac{1}{2} \\ln (|\\mathbf{\\hat{S}}^{-1}\\mathbf{S_a}|)$"
]
},
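{
"cell_type": "markdown",
"metadata": {},
"source": [
"Putting these formulas together in numpy (the Jacobian, noise level, and prior widths below are illustrative placeholders, not PICASO output):\n",
"\n",
"```python\n",
"import numpy as np\n",
"\n",
"n_wave, n_par = 100, 4\n",
"rng = np.random.default_rng(0)\n",
"K = rng.standard_normal((n_wave, n_par))  # stand-in Jacobian\n",
"Se = np.diag(np.full(n_wave, 30.0**2))  # data error covariance (30 ppm bins)\n",
"Sa = np.diag([100.0**2, 1.0, 1.0, 0.1**2])  # prior covariance on [T, logC/O, logM/H, xRp]\n",
"\n",
"Se_inv = np.linalg.inv(Se)\n",
"Sa_inv = np.linalg.inv(Sa)\n",
"\n",
"S_hat = np.linalg.inv(K.T @ Se_inv @ K + Sa_inv)  # posterior covariance\n",
"G = S_hat @ K.T @ Se_inv  # gain matrix\n",
"A = G @ K  # averaging kernel\n",
"dof = np.trace(A)  # degrees of freedom\n",
"H = 0.5*np.log(np.linalg.det(np.linalg.inv(S_hat) @ Sa))  # information, in nats\n",
"H_bits = H/np.log(2)  # divide by ln(2) to express in bits\n",
"```\n"
]
},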
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.5"
},
"toc": {
"base_numbering": 1,
"nav_menu": {},
"number_sections": true,
"sideBar": true,
"skip_h1_title": false,
"title_cell": "Table of Contents",
"title_sidebar": "Contents",
"toc_cell": true,
"toc_position": {},
"toc_section_display": true,
"toc_window_display": false
},
"varInspector": {
"cols": {
"lenName": 16,
"lenType": 16,
"lenVar": 40
},
"kernels_config": {
"python": {
"delete_cmd_postfix": "",
"delete_cmd_prefix": "del ",
"library": "var_list.py",
"varRefreshCmd": "print(var_dic_list())"
},
"r": {
"delete_cmd_postfix": ") ",
"delete_cmd_prefix": "rm(",
"library": "var_list.r",
"varRefreshCmd": "cat(var_dic_list()) "
}
},
"types_to_exclude": [
"module",
"function",
"builtin_function_or_method",
"instance",
"_Feature"
],
"window_display": false
}
},
"nbformat": 4,
"nbformat_minor": 4
}