Gen TSO estimates signal-to-noise ratios for transit/eclipse depths through an interactive graphical interface, similar to the JWST Exposure Time Calculator (ETC). This interface leverages the ETC by combining its noise simulator, Pandeia, with additional exoplanet resources from the NASA Exoplanet Archive, the Gaia DR3 catalog, and the TrExoLiSTS database of JWST programs. Gen TSO calculates S/Ns for all JWST instruments for the spectroscopic time-series modes available as of the Cycle 4 GO call. It also simulates target acquisition on the science targets or, when needed, on nearby stellar targets.
excalibuhr is an end-to-end pipeline, designed for VLT/CRIRES+, that extracts high-resolution spectra. The package preprocesses raw calibration files, including darks, flats, and lamp frames, and can trace spectral orders on 2D detector images. It applies calibrations to science frames, can remove the sky background by nodding subtraction, and combines frames per nodding position. excalibuhr can also extract 1D spectra and perform wavelength and flux calibration.
DART-Vetter distinguishes planetary candidates from false positives detected in any transiting survey, and is tailored for photometric data collected by space-based missions. The Convolutional Neural Network is trained on Kepler and TESS Threshold Crossing Events (TCEs) and processes only light curves folded on the period of the corresponding signal. DART-Vetter has a simple and compact architecture; it is lightweight enough to be executed on personal laptops.
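For illustration only, a compact 1D convolutional classifier in the spirit of the architecture described above might look like the sketch below; the layer sizes and the 201-bin folded input are assumptions, not DART-Vetter's actual configuration.

import torch
import torch.nn as nn

class CompactVetter(nn.Module):
    # Hypothetical compact CNN for phase-folded light curves (not DART-Vetter's own layers).
    def __init__(self, n_bins: int = 201):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * (n_bins // 4), 64), nn.ReLU(),
            nn.Linear(64, 1), nn.Sigmoid(),          # probability of being a planet candidate
        )

    def forward(self, x):                            # x shape: (batch, 1, n_bins)
        return self.classifier(self.features(x))

model = CompactVetter()
folded = torch.randn(8, 1, 201)                      # dummy phase-folded light curves
print(model(folded).shape)                           # torch.Size([8, 1])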
GRIP (Generic data Reduction for nulling Interferometry Package) reduces nulling data from any nulling interferometric instrument with enhanced statistical self-calibration methods within a single, consistent framework. The toolbox self-calibrates null depth measurements by fitting a model of the instrumental perturbations to histograms of the data. The model is generated either by a simulator of the instrument built into the package for the main operating nullers or provided by the user. GRIP handles baseline discrimination and spectral dispersion, features several optimization strategies, including least squares, maximum likelihood, and MCMC with emcee (ascl:1303.002), and can run on GPUs using the cupy library.
easyCHEM calculates chemical equilibrium abundances (including condensation) and adiabatic gradients by minimization of the Gibbs free energy. Ancillary outputs are the atmospheric adiabatic temperature gradient and the mean molar mass. Because easyCHEM incorporates the dgesv routine from LAPACK (ascl:2104.020) for fast matrix inversion, external math libraries are not required.
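For context, the standard formulation behind Gibbs-minimization equilibrium solvers of this kind is to minimize the total Gibbs free energy over the mole numbers n_i of all considered species, subject to conservation of the elemental abundances:

\[
\min_{\{n_i\}} \; G(T,P,\{n_i\}) \;=\; \sum_i n_i \left[\mu_i^{\circ}(T) + R T \ln\!\left(\frac{n_i}{n_{\rm gas}}\,\frac{P}{P^{\circ}}\right)\right]
\quad \text{subject to} \quad \sum_i a_{ij}\, n_i = b_j \;\; \forall j,
\]

where the logarithmic term applies to gas-phase species (condensates contribute only their reference chemical potential \mu_i^{\circ}), a_{ij} is the number of atoms of element j in species i, and b_j is the total abundance of element j. This expression is given only to define the quantity being minimized; how easyCHEM parameterizes the problem internally is not stated here.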
The PLUTO CLOUDY Interface (TPCI) combines the PLUTO (ascl:1010.045) and CLOUDY (ascl:9910.001) simulation codes to simulate hydrodynamic evolution under irradiation from an external source. The code solves the photoionization and chemical network of the 30 lightest elements. By combining an equilibrium photoionization solver with a general MHD code, TPCI provides an advanced simulation tool applicable to a variety of astrophysical problems.
The Python wrapper pyTPCI couples newer versions of the hydrodynamics code PLUTO (ascl:1010.045) and the gas microphysics code CLOUDY (ascl:9910.001) to self-consistently simulate escaping atmospheres in 1D. Following TPCI (ascl:2506.012), on which pyTPCI is based, CLOUDY is modified to read in depth-dependent wind velocities, and to output useful physical quantities (including mass density, number density, and mean molecular weight as a function of depth).
This One-Class Support Vector Machine (SVM) model detects exoplanet transit events. One-class SVMs fit data and make predictions faster than simple CNNs and do not require specialized hardware such as Graphics Processing Units (GPUs). The code uses a Gaussian kernel to compute a nonlinear decision boundary. After training, OCSVM-Transit-Detection classifies a light curve as containing a transit only if its features closely resemble those of the training light curves, thus limiting misclassifications.
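As a minimal sketch of this approach (not the OCSVM-Transit-Detection code itself), a one-class SVM with a Gaussian (RBF) kernel can be trained on transit-bearing examples only, so that anything far from that distribution is flagged as an outlier; the feature representation, nu, and gamma below are illustrative assumptions.

import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(1)
X_train = rng.normal(0.0, 1.0, size=(500, 32))     # stand-in features from transit-bearing light curves
X_test = rng.normal(0.5, 1.5, size=(20, 32))       # new light curves to classify

scaler = StandardScaler().fit(X_train)
clf = OneClassSVM(kernel="rbf", gamma="scale", nu=0.05)   # Gaussian kernel -> nonlinear boundary
clf.fit(scaler.transform(X_train))

labels = clf.predict(scaler.transform(X_test))     # +1 = transit-like, -1 = outlier
print(labels)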
M_-M_K- converts absolute 2MASS Ks-band magnitude (or a distance and a Ks-band magnitude) into an estimate of the stellar mass using the empirical relation derived from the resolved photometry and orbits of astrometric binaries. The code requires scalar values for K, distance, and corresponding uncertainties. M_-M_K- outputs errors based on the relationship's scatter and errors in the provided distance and Ks magnitude.
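The magnitude handling amounts to the standard distance-modulus conversion followed by an empirical mass-M_K relation; the sketch below illustrates this, but the polynomial coefficients are placeholders, not the published fit the code implements.

import numpy as np

def absolute_K(ks_mag, distance_pc):
    # M_K = m_K - 5 log10(d / 10 pc)
    return ks_mag - 5.0 * np.log10(distance_pc / 10.0)

def mass_from_MK(M_K, coeffs=(0.5, -0.1, 0.01)):
    # Hypothetical polynomial in M_K returning mass in solar masses;
    # the real relation and its coefficients come from the code and its reference paper.
    return sum(c * M_K**i for i, c in enumerate(coeffs))

M_K = absolute_K(ks_mag=8.1, distance_pc=12.3)      # example inputs
print(M_K, mass_from_MK(M_K))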
Octofitter performs Bayesian inference against a wide variety of exoplanet and binary star data. It is highly modular and allows users to easily adjust priors, change parameterizations, and specify arbitrary functional relations between the parameters of one or more planets. Octofitter further supplies tools for examining model outputs, including prior and posterior predictive checks and simulation-based calibration.
SBI++ is a complete methodology based on simulation-based (likelihood-free) inference that is customized for astronomical applications. Specifically, the code retains the fast inference speed of ~1 second per object for objects within the distribution of the observational training set, and additionally permits parameter inference for objects outside the trained noise and data distributions at ~1 minute per object. The package includes scripts for training and implementing SBI++ and depends on sbi (ascl:2306.002).
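The underlying sbi workflow (amortized neural posterior estimation) looks roughly like the sketch below; the toy simulator and prior ranges are assumptions and not SBI++'s own training setup.

import torch
from sbi.inference import SNPE
from sbi.utils import BoxUniform

prior = BoxUniform(low=torch.zeros(2), high=torch.ones(2))

def simulator(theta):
    # Toy forward model standing in for a real SED/photometry simulator.
    return theta + 0.05 * torch.randn_like(theta)

theta = prior.sample((5000,))
x = simulator(theta)

inference = SNPE(prior=prior)
density_estimator = inference.append_simulations(theta, x).train()
posterior = inference.build_posterior(density_estimator)

x_obs = torch.tensor([0.4, 0.6])
samples = posterior.sample((1000,), x=x_obs)        # amortized draws: ~seconds per object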
Hydromass analyzes galaxy cluster mass profiles from X-ray and/or Sunyaev-Zel’dovich observations. It provides a global Bayesian framework for deprojection and mass profile reconstruction, including mass model fitting, forward fitting with parametric and polytropic models, and non-parametric log-normal mixture reconstruction. Hydromass easily loads public X-COP data products and applies reconstruction tools directly within a Jupyter notebook.
The Multi-Mission Maximum Likelihood framework (3ML) provides a common high-level interface and model definition for coherent and intuitive modeling of sources using all the available data, no matter their origin. Astrophysical sources are observed by different instruments at different wavelengths with unprecedented quality, and each instrument and data type has its own ad hoc software and handling procedure. 3ML's architecture is based on plug-ins; the package uses the official software of each instrument under the hood, thus guaranteeing that 3ML always uses the best possible methodology to deal with the data of each instrument. Though Maximum Likelihood is in the name for historical reasons, 3ML is an interface to several Bayesian inference algorithms, such as MCMC and nested sampling, as well as likelihood optimization algorithms.
Astromodels defines models for likelihood or Bayesian analysis of astrophysical data. Though designed for analysis in the spectral domain, it can also be used as a toolbox containing functions of any variable. Astromodels is not a fitting package; it provides the tools to build a model as complex as one needs, while a separate package such as 3ML (ascl:2506.018) is needed to fit the model to the data.
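A hedged sketch of this division of labor: astromodels defines the source and its spectral shape, while 3ML fits it to data through an instrument plugin (the plugin instance is assumed to exist already and is therefore left commented out).

from astromodels import Model, PointSource, Powerlaw
from threeML import DataList, JointLikelihood

spectrum = Powerlaw(K=1e-3, index=-2.1, piv=100.0)          # photon-flux power law
source = PointSource("example_src", ra=83.63, dec=22.01,    # Crab-like position, for illustration only
                     spectral_shape=spectrum)
model = Model(source)

# `plugin` would be one of 3ML's instrument plugins wrapping the
# instrument's official software; with one in hand:
# data = DataList(plugin)
# jl = JointLikelihood(model, data)
# best_fit_parameters, likelihood_values = jl.fit()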
pynchrotron implements synchrotron emission from cooling electrons. It removes the need for GSL, which was originally relied on for fast computation of the synchrotron kernel; the kernel has been ported from GSL, written directly in Python, and accelerated with numba. pynchrotron also includes an astromodels (ascl:2506.019) function for direct use in 3ML (ascl:2506.018).
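For reference, the kernel in question is the first synchrotron function,

\[ F(x) = x \int_x^{\infty} K_{5/3}(\xi)\, d\xi , \qquad x = \nu/\nu_c , \]

where K_{5/3} is a modified Bessel function of the second kind and \nu_c the critical frequency; GSL exposes this quantity as gsl_sf_synchrotron_1, and pynchrotron replaces that dependency with a numba-compiled Python implementation.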
Bjet_MCMC automatically models multiwavelength spectral energy distributions of blazars, considering a one-zone synchrotron self-Compton (SSC) model with or without the addition of an external inverse-Compton component from the thermal emission of the nucleus. The code also contains manual fitting functionality for multi-zone SSC modeling. Bjet_MCMC is built as an MCMC Python wrapper around the C++ code Bjet.
CLUES (CLustering UnsupErvised with Sequencer) analyzes spectral and IFU data. This fully interpretable clustering tool uses machine learning to classify and reduce the effective dimensionality of data sets. It combines multiple unsupervised clustering methods with multiscale distance measures using Sequencer (ascl:2105.006) to find representative end-member spectra that can be analyzed with detailed mineralogical modeling and follow-up observations. CLUES has been used on Spitzer IRS data for debris disk science, and can be applied to other high-dimensional spectral data sets, including mineral spectroscopy in other areas of astrophysics and in remote sensing.
OK Binaries is a tool for identifying suitable calibration binaries from the Washington Double Star (WDS) Sixth Orbit Catalog. It calculates orbital positions at any epoch, propagates uncertainties using Monte Carlo sampling, and generates orbit plots. The web app includes automated daily updates of binary positions and a searchable interface with filters for position, magnitude, separation, and other orbital parameters. OK Binaries can be used online, as a standalone offline browser app, or via the command line.
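The core computation, position from orbital elements plus Monte Carlo error propagation, can be sketched as below; the Kepler-equation and visual-binary formulas are standard, but the element values are placeholders for illustration, not WDS catalog entries, and this is not OK Binaries' own code.

import numpy as np

def kepler_E(M, e, tol=1e-10):
    # Solve Kepler's equation M = E - e sin E by Newton iteration.
    E = M.copy()
    for _ in range(50):
        dE = (E - e * np.sin(E) - M) / (1.0 - e * np.cos(E))
        E -= dE
        if np.max(np.abs(dE)) < tol:
            break
    return E

def binary_position(epoch, P, T, a, e, i, Omega, omega):
    # Separation rho (units of a) and position angle theta (deg) at a given epoch (yr).
    M = 2.0 * np.pi * (((epoch - T) / P) % 1.0)
    E = kepler_E(M, e)
    nu = 2.0 * np.arctan2(np.sqrt(1 + e) * np.sin(E / 2), np.sqrt(1 - e) * np.cos(E / 2))
    r = a * (1.0 - e * np.cos(E))
    i_r, Om_r, om_r = np.radians([i, Omega, omega])
    theta = Om_r + np.arctan2(np.sin(nu + om_r) * np.cos(i_r), np.cos(nu + om_r))
    rho = r * np.cos(nu + om_r) / np.cos(theta - Om_r)
    return rho, np.degrees(theta) % 360.0

# Monte Carlo propagation of (placeholder) orbital-element uncertainties.
rng = np.random.default_rng(0)
n = 10_000
P = rng.normal(79.9, 0.1, n)        # period [yr]
T = rng.normal(1955.6, 0.2, n)      # periastron passage [yr]
a = rng.normal(7.5, 0.1, n)         # semi-major axis [arcsec]
e = rng.normal(0.52, 0.01, n)
i, Om, om = (rng.normal(m, s, n) for m, s in [(79.2, 0.1), (205.0, 0.2), (232.0, 0.3)])  # [deg]
rho, theta = binary_position(2025.5, P, T, a, e, i, Om, om)
print(f"rho   = {rho.mean():.3f} +/- {rho.std():.3f} arcsec")
print(f"theta = {theta.mean():.2f} +/- {theta.std():.2f} deg")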
pinc ("profiles in cosmology") computes profile likelihoods in cosmology; it can also determine the (boundary-corrected) confidence intervals with the graphical construction. The code uses a simulated annealing scheme and interfaces with MontePython (ascl:1805.027). pinc consists of three short scripts; these automatically set the relevant parameters in MontePython, submit the minimization chains, and analyze the results.
CAMEL (Cosmological Analysis with Minuit Exploration of the Likelihood) performs cosmological parameter estimation using best fits, Markov Chain Monte Carlo, and profile likelihoods. Widely used in Planck satellite data analysis, it employs CLASS (ascl:1106.020) by default to compute all relevant cosmological quantities, but any other Boltzmann solver can easily be plugged in.
Procoli extracts profile likelihoods in cosmology. It wraps MontePython (ascl:1805.027), the fast sampler written specifically for CLASS (ascl:1106.020); all likelihoods available for use with MontePython are hence immediately available. Procoli uses a simulated-annealing optimizer to find the global maximum-likelihood value as well as the maximum-likelihood points along the profile of any user-specified input parameter.
SAUSERO processes raw science frames from OSIRIS+ (Gran Telescopio Canarias), addressing noise, cosmetic defects, and pixel heterogeneity to prepare the frames for photometric analysis. Correcting these artifacts is a critical prerequisite for reliable scientific analysis. The software applies observation-specific reduction steps, ensuring optimized treatment for different data types. Developed with a focus on simplicity and efficiency, SAUSERO streamlines the reduction pipeline, enabling researchers to obtain calibrated data ready for photometric studies.
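As a generic illustration of the kind of corrections involved (the textbook bias and flat-field recipe, not SAUSERO's actual implementation or file interfaces):

import numpy as np

def calibrate(raw, master_bias, master_flat):
    # Subtract the bias level and divide by the normalized flat field
    # to remove pixel-to-pixel response differences.
    flat = master_flat - master_bias
    flat = flat / np.median(flat)
    return (raw - master_bias) / flat

rng = np.random.default_rng(2)
raw = rng.normal(1000.0, 30.0, (2048, 2048))       # dummy raw science frame [ADU]
bias = np.full((2048, 2048), 300.0)                # dummy master bias
flat = rng.normal(20000.0, 200.0, (2048, 2048))    # dummy master flat
science = calibrate(raw, bias, flat)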