ASCL.net

Astrophysics Source Code Library

Making codes discoverable since 1999

Browsing Codes

Results 1-1281 of 3616 (3521 ASCL, 95 submitted)

[ascl:2306.019] realfast: Real-time interferometric data analysis for the VLA

The transient search pipeline realfast integrates with the real-time environment at the Very Large Array (VLA) to look for fast radio bursts, pulsars, and other rare astrophysical transients. The software monitors multicast messages, catches visibility data, and defines a fast transient search pipeline with rfpipe (ascl:1710.002). It indexes candidate transients and other metadata for the search interface, and writes and archives new visibility files for candidate transients. realfast provides support for GPU algorithms, manages distributed futures, and performs blind injection and management of mock transients, among other tasks, and rapidly distributes data products and transient alerts to the public.

[ascl:1604.006] 2-DUST: Dust radiative transfer code

2-DUST is a general-purpose dust radiative transfer code for an axisymmetric system that reveals the global energetics of dust grains in the shell and the 2-D projected morphologies of the shell that are strongly dependent on the mixed effects of the axisymmetric dust distribution and inclination angle. It can be used to model a variety of axisymmetric astronomical dust systems.

[ascl:2103.001] 21cmDeepLearning: Matter density map extractor

21cmDeepLearning extracts the underlying matter density map from a 21 cm intensity field by making use of a convolutional neural network (CNN) with the U-Net architecture; the software is implemented in PyTorch. The astrophysical parameters of the simulations can be predicted with a secondary CNN. The simulations of matter density and 21 cm maps are performed with the code 21cmFAST (ascl:1102.023).

[ascl:2312.013] 21cmEMU: 21cmFAST summaries emulator

21cmEMU emulates 21cmFAST (ascl:1102.023) summary statistics, among them the 21-cm power spectrum, 21-cm global brightness temperature, IGM spin temperature, and neutral fraction. It also emulates the Thomson scattering optical depth and UV luminosity functions. With 21cmFAST installed, parameters can be supplied directly to 21cmEMU, which can then be used for, for example, analytic calculations of the optical depth τe and UV luminosity functions. The code is included as an alternative simulator in 21cmMC (ascl:1608.017).

[ascl:1102.023] 21cmFAST: A Fast, Semi-Numerical Simulation of the High-Redshift 21-cm Signal

21cmFAST is a powerful semi-numeric modeling tool designed to efficiently simulate the cosmological 21-cm signal. The code generates 3D realizations of evolved density, ionization, peculiar velocity, and spin temperature fields, which it then combines to compute the 21-cm brightness temperature. Although the physical processes are treated with approximate methods, the results have been compared to a state-of-the-art large-scale hydrodynamic simulation and show good agreement on the scales pertinent to upcoming observations (>~ 1 Mpc). The power spectra from 21cmFAST agree with those generated from the numerical simulation to within tens of percent, down to the Nyquist frequency. Results are shown from a 1 Gpc simulation that tracks the cosmic 21-cm signal down from z=250, highlighting the various interesting epochs. Depending on the desired resolution, 21cmFAST can compute a redshift realization on a single processor in just a few minutes. The code is fast, efficient, customizable, and publicly available, making it a useful tool for 21-cm parameter studies.
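
As an illustration of a typical workflow, a minimal sketch using the v3 Python wrapper (py21cmfast) is shown below; the run_coeval call and parameter names follow the v3 interface and should be checked against the installed version.

```python
import py21cmfast as p21c

# Minimal coeval box at z = 8, assuming the 21cmFAST v3 Python wrapper:
# HII_DIM is the number of cells per side, BOX_LEN the box size in Mpc.
coeval = p21c.run_coeval(
    redshift=8.0,
    user_params={"HII_DIM": 64, "BOX_LEN": 100.0},
)
print(coeval.brightness_temp.shape)  # 64^3 brightness-temperature cube
```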

[ascl:2408.014] 21cmFirstCLASS: Generate initial conditions at recombination

21cmFirstCLASS extends 21cmFAST (ascl:1102.023) and interfaces with CLASS (ascl:1106.020) to generate initial conditions at recombination that are consistent with the input cosmological model. These initial conditions can be set during the time of recombination, allowing one to compute the 21cm signal (and its spatial fluctuations) throughout the dark ages, as well as in the subsequent cosmic dawn and reionization epochs, just as in the standard 21cmFAST. 21cmFirstCLASS tracks both the CDM density field δc and the baryon density field δb. In addition, the user interface in 21cmFirstCLASS has been improved and allows one to easily plot the 21cm power spectrum while including noise from the output of 21cmSense (ascl:1609.013).

[ascl:1608.017] 21CMMC: Parallelized Monte Carlo Markov Chain analysis tool for the epoch of reionization (EoR)

21CMMC is an efficient Python sampler of the semi-numerical reionization simulation code 21cmFAST (ascl:1102.023). It can recover constraints on astrophysical parameters from current or future 21 cm EoR experiments, accommodating a variety of EoR models, as well as priors on individual model parameters and the reionization history. By studying the resulting impact on the EoR astrophysical constraints, 21CMMC can be used to optimize foreground cleaning algorithms; interferometer designs; observing strategies; alternate statistics characterizing the 21cm signal; and synergies with other observational programs.

[ascl:1609.013] 21cmSense: Calculating the sensitivity of 21cm experiments to the EoR power spectrum

21cmSense calculates the expected sensitivities of 21cm experiments to the Epoch of Reionization power spectrum. Written in Python, it requires NumPy, SciPy, and AIPY (ascl:1609.012).

[ascl:2307.008] 21cmvFAST: Adding dark matter-baryon relative velocities to 21cmFAST

21cmvFAST demonstrates that including dark matter (DM)-baryon relative velocities produces velocity-induced acoustic oscillations (VAOs) in the 21-cm power spectrum. Based on 21cmFAST (ascl:1102.023) and 21CMMC (ascl:1608.017), 21cmvFAST accounts for molecular-cooling haloes, which are expected to drive star formation during cosmic dawn, as both relative velocities and Lyman-Werner feedback suppress halo formation. This yields accurate 21-cm predictions all the way to reionization (z>~10).

[ascl:2402.010] 2cosmos: Monte Python modification for two independent instances of CLASS

2cosmos is a modification of Monte Python (ascl:1307.002) and allows the user to write likelihood modules that can request two independent instances of CLASS (ascl:1106.020) and separate dictionaries and structures for all cosmological and nuisance parameters. The intention is to be able to evaluate two independent cosmological calculations and their respective parameters within the same likelihood. This is useful for evaluating a likelihood using correlated datasets (e.g. mutually exclusive subsets of the same dataset for which one wants to take into account all correlations between the subsets).

[ascl:2006.004] 2D-FFTLog: Generalized FFTLog algorithm for non-Gaussian covariance matrices

2D-FFTLog takes the FFTLog algorithm for 1D Hankel transforms and generalizes it for 2D Hankel transforms. The algorithm is useful for efficiently computing non-Gaussian covariance matrices of cosmological 2-point statistics in configuration space from Fourier space covariances. A fast bin-averaging method is also provided for both logarithmic and general binning choices. C and Python versions of the code are available.
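
Concretely, the operation being generalized is the double Bessel projection of a Fourier-space covariance onto configuration space, which reads schematically (prefactor and Bessel-order conventions vary between statistics):

$$
\mathrm{Cov}\left[\xi_{n_1}(\theta_1),\,\xi_{n_2}(\theta_2)\right]
= \int_0^\infty \frac{\ell_1\,\mathrm{d}\ell_1}{2\pi}
  \int_0^\infty \frac{\ell_2\,\mathrm{d}\ell_2}{2\pi}\,
  J_{n_1}(\ell_1\theta_1)\,J_{n_2}(\ell_2\theta_2)\,
  \mathrm{Cov}\left[C(\ell_1),\,C(\ell_2)\right]
$$

2D-FFTLog evaluates this double integral with FFTLog-style logarithmic sampling in both dimensions.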

[ascl:2005.012] 2DBAT: 2D Bayesian Automated Tilted-ring fitter

2DBAT implements Bayesian fits of 2D tilted-ring models to derive rotation curves of galaxies. It performs 2D tilted-ring analysis based on a Bayesian Markov Chain Monte Carlo (MCMC) technique, thus quantifying the kinematic geometry of galaxy discs, and deriving high-quality rotation curves that can be used for mass modeling of baryons and dark matter halos.

[ascl:1505.015] 2dfdr: Data reduction software

2dfdr is an automatic data reduction pipeline dedicated to reducing multi-fibre spectroscopy data, with current implementations for AAOmega (fed by the 2dF, KOALA-IFU, SAMI Multi-IFU or older SPIRAL front-ends), HERMES, 2dF (spectrograph), 6dF, and FMOS. A graphical user interface is provided to control data reduction and allow inspection of the reduced spectra.

[ascl:1608.015] 2DFFT: Measuring Galactic Spiral Arm Pitch Angle

2DFFT utilizes two-dimensional fast Fourier transformations of images of spiral galaxies to isolate and measure the pitch angles of their spiral arms; this provides a quantitative way to measure this morphological feature and allows comparison of spiral galaxy pitch angle to other galactic parameters and tests of spiral arm genesis theories. 2DFFT requires fourn.c from Numerical Recipes in C (Press et al. 1989).
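
For reference, in the standard logarithmic-spiral decomposition the image I(u, θ), with u = ln r, is expanded in Fourier modes, and the pitch angle φ of harmonic mode m follows from the radial frequency p_max that maximizes the amplitude (the sign convention sets the arm's chirality):

$$
A(m, p) = \sum_{u}\sum_{\theta} I(u, \theta)\, e^{-i(m\theta + p u)},
\qquad
\phi = \arctan\!\left(-\frac{m}{p_{\max}}\right)
$$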

P2DFFT (ascl:1806.011) is a parallelized version of 2DFFT.

[ascl:2211.013] 2DFFTUtils: 2DFFT Utilities implementation

The Python module 2DFFTUtils implements tasks associated with measuring spiral galaxy pitch angle with 2DFFT (ascl:1608.015). Since most of the 2DFFT utilities are implemented in one place, it makes preparing images for 2DFFT and dealing with 2DFFT data interactively or in scripts even easier.

[ascl:1808.007] 2DSF: Vectorized Structure Function Algorithm

The vectorized physical domain structure function (SF) algorithm calculates the velocity anisotropy within two-dimensional molecular line emission observations. The vectorized approach is significantly faster than brute force iterative algorithms and is very efficient for even relatively large images. Furthermore, unlike frequency domain algorithms which require the input data to be fully integrable, this algorithm, implemented in Python, has no such requirements, making it a robust tool for observations with irregularities such as asymmetric boundaries and missing data.
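
The core trick, computing pairwise velocity differences by shifting whole arrays rather than looping over pixel pairs, can be sketched as follows; this is an illustrative NumPy implementation of a second-order structure function, not the 2DSF code itself.

```python
import numpy as np

def structure_function(v, max_lag=20):
    """Second-order structure function SF2(r) = <[v(x+r) - v(x)]^2> of a
    2D field v, computed with whole-array shifts instead of per-pixel
    loops; NaNs (missing data, irregular boundaries) are simply ignored."""
    H, W = v.shape
    sums = np.zeros(max_lag + 1)
    counts = np.zeros(max_lag + 1)
    for dy in range(max_lag + 1):
        for dx in range(-max_lag, max_lag + 1):
            if dy == 0 and dx <= 0:
                continue                      # count each pixel pair once
            rbin = int(round(np.hypot(dx, dy)))
            if rbin > max_lag:
                continue
            a = v[dy:, max(dx, 0):W + min(dx, 0)]
            b = v[:H - dy, max(-dx, 0):W - max(dx, 0)]
            d2 = (a - b) ** 2
            good = np.isfinite(d2)            # mask NaN differences
            sums[rbin] += d2[good].sum()
            counts[rbin] += good.sum()
    return np.where(counts > 0, sums / counts, np.nan)
```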

[ascl:1201.005] 2LPTIC: 2nd-order Lagrangian Perturbation Theory Initial Conditions

Setting initial conditions in numerical simulations using the standard procedure based on the Zel'dovich approximation (ZA) generates incorrect second and higher-order growth and therefore excites long-lived transients in the evolution of the statistical properties of density and velocity fields. Using more accurate initial conditions based on second-order Lagrangian perturbation theory (2LPT) reduces transients significantly; initial conditions based on 2LPT are thus much more appropriate for numerical simulations devoted to precision cosmology. The 2LPTIC code provides initial conditions for running cosmological simulations based on second-order Lagrangian Perturbation Theory (2LPT), rather than first-order (Zel'dovich approximation).

[ascl:1303.016] 2MASS Kit: 2MASS Catalog Server Kit

2MASS Kit is open source software for easily constructing a high-performance search server for important astronomical catalogs. It is tuned for optimal coordinate search performance (radial search, box search, rectangular search) of huge catalogs, increasing speed by more than an order of magnitude compared to simple indexing on a single table. Under optimal conditions it performs more than 3,000 searches per second for radial searches of the 2MASS PSC. The kit is best characterized by its flexible tuning: each table index is registered in one of six table spaces (each residing in a separate directory), allowing only the essential parts to be easily moved onto fast devices. Given the dramatic performance gains of recent SSDs, moving some or all table indices to a fast SSD is a very cost-effective way of constructing a high-performance server.

[submitted] 3D texturized model of MARS (MOLA) regions

This MATLAB tool generates a 3D model (WRL, textured with a height-based false-color map) of a defined region of the Martian surface. The user specifies the region of interest (by latitude and longitude), the resolution of the MOLA DTMs to be used (with a minimum on-ground pixel size of 468 m), and a scale factor applied to the surface heights to improve the visibility of features through bumping or shadowing effects.

[ascl:1507.001] 3D-Barolo: 3D fitting tool for the kinematics of galaxies

3D-Barolo (3D-Based Analysis of Rotating Object via Line Observations), or BBarolo, is a tool for fitting 3D tilted-ring models to emission-line datacubes. BBarolo works with 3D FITS files, i.e., image arrays with two spatial dimensions and one spectral dimension. BBarolo recovers the true rotation curve and estimates the intrinsic velocity dispersion even in barely resolved galaxies (about 2 resolution elements) if the signal-to-noise ratio of the data is larger than 2-3. It has source-detection and first-estimate modules, making it suitable for analyzing large 3D datasets automatically, and is a useful tool for deriving reliable kinematics for both local and high-redshift galaxies.

[ascl:1803.010] 3D-PDR: Three-dimensional photodissociation region code

3D-PDR is a three-dimensional photodissociation region code written in Fortran. It uses the Sundials package (written in C) to solve the set of ordinary differential equations and it is the successor of the one-dimensional PDR code UCL_PDR (ascl:1303.004). Using the HEALpix ray-tracing scheme (ascl:1107.018), 3D-PDR solves a three-dimensional escape probability routine and evaluates the attenuation of the far-ultraviolet radiation in the PDR and the propagation of FIR/submm emission lines out of the PDR. The code is parallelized (OpenMP) and can be applied to 1D and 3D problems.

[ascl:1805.005] 3DCORE: Forward modeling of solar storm magnetic flux ropes for space weather prediction

3DCORE forward models solar storm magnetic flux ropes called 3-Dimensional Coronal Rope Ejections (3DCORE). The code is able to produce synthetic in situ observations of the magnetic cores of solar coronal mass ejections sweeping over planets and spacecraft. Near Earth, these data are currently taken by the Wind, ACE, and DSCOVR spacecraft. Other suitable spacecraft making these kinds of observations, carrying magnetometers in the solar wind, were MESSENGER, Venus Express, MAVEN, and even Helios.

[ascl:1111.011] 3DEX: Fast Fourier-Bessel Decomposition of Spherical 3D Surveys

High precision cosmology requires analysis of large scale surveys in 3D spherical coordinates, i.e. Fourier-Bessel decomposition. Current methods are insufficient for future data-sets from wide-field cosmology surveys. 3DEX (3D EXpansions) is a public code for fast Fourier-Bessel decomposition of 3D all-sky surveys which takes advantage of HEALPix for the calculation of tangential modes. For surveys with millions of galaxies, computation time is reduced by a factor 4-12 depending on the desired scales and accuracy. The formulation is also suitable for pre-calculations and external storage of the spherical harmonics, which allows for further speed improvements. The 3DEX code can accommodate data with masked regions of missing data. It can be applied not only to cosmological data, but also to 3D data in spherical coordinates in other scientific fields.
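
The decomposition in question expands a 3D field over spherical Bessel functions and spherical harmonics; in one common normalization,

$$
f(\mathbf{r}) = \sqrt{\frac{2}{\pi}}\sum_{\ell m}\int_0^\infty \mathrm{d}k\,k\,
  f_{\ell m}(k)\, j_\ell(kr)\, Y_{\ell m}(\hat{\mathbf{r}}),
\qquad
f_{\ell m}(k) = \sqrt{\frac{2}{\pi}}\int \mathrm{d}^3r\, f(\mathbf{r})\,k\,
  j_\ell(kr)\, Y^{*}_{\ell m}(\hat{\mathbf{r}})
$$

where 3DEX computes the tangential (ℓ, m) part with HEALPix and handles the radial spherical Bessel integrals separately.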

[ascl:1804.018] 3DView: Space physics data visualizer

3DView creates visualizations of space physics data in their original 3D context. Time series, vectors, dynamic spectra, celestial body maps, magnetic field or flow lines, and 2D cuts in simulation cubes are among the variety of data representation enabled by 3DView. It offers direct connections to several large databases and uses VO standards; it also allows the user to upload data. 3DView's versatility covers a wide range of space physics contexts.

[ascl:2101.001] 3LPT-init: Initial conditions with third-order Lagrangian perturbation for cosmological N-body simulations

In cosmological N-body simulations, higher-order Lagrangian perturbation theory in the initial conditions affects the formation of nonlinear structure. This code upgrades initial conditions generated with the Zel'dovich approximation (first-order Lagrangian perturbation theory) for the Gadget-2 code to initial conditions with second- or third-order Lagrangian perturbation theory (2LPT, 3LPT).

[ascl:1708.020] 4DAO: DAOSPEC interface

4DAO launches DAOSPEC (ascl:1011.002) for a large sample of spectra. Written in Fortran, the software allows one to easily manage the input and output files of DAOSPEC, optimize the main DAOSPEC parameters, and mask specific spectral regions. It also provides suitable graphical tools to evaluate the quality of the solution and provides final, normalized, zero radial velocity spectra.

[ascl:1104.014] A Correction to the Standard Galactic Reddening Map: Passive Galaxies as Standard Crayons

We present corrections to the Schlegel, Finkbeiner, Davis (SFD98) reddening maps over the Sloan Digital Sky Survey northern Galactic cap area. To find these corrections, we employ what we dub the "standard crayon" method, in which we use passively evolving galaxies as color standards by which to measure deviations from the reddening map. We select these passively evolving galaxies spectroscopically, using limits on the H alpha and O II equivalent widths to remove all star-forming galaxies from the SDSS main galaxy catalog. We find that by correcting for known reddening, redshift, the color-magnitude relation, and the variation of color with environmental density, we can reduce the scatter in color to below 3% in the bulk of the 151,637 galaxies we select. Using these galaxies we construct maps of the deviation from the SFD98 reddening map at 4.5 degree resolution, with a 1-sigma error of ~1.5 millimagnitudes in E(B-V). We find that the SFD98 maps are largely accurate, with most of the map having deviations below 3 millimagnitudes in E(B-V), though some regions deviate from SFD98 by as much as 50%. The maximum deviation found is 45 millimagnitudes in E(B-V), and the spatial structure of the deviation is strongly correlated with the observed dust temperature, such that SFD98 underpredicts reddening in regions of low dust temperature. The maps of these deviations, as well as their errors, are made available to the scientific community as a supplemental correction to SFD98 at the URL below.

[submitted] A Neural Network for the Identification of Dangerous Planetesimals (Including scripts for data generation)

Two neural networks designed to identify hazardous planetesimals were trained on object trajectories calculated in a cloud computing environment. The first neural network was fully connected and was trained on the orbital elements (OEs) of real/simulated planetesimals, while the second was a 1-dimensional convolutional neural network trained on the Cartesian position coordinates of real/simulated planetesimals. Ultimately, the network trained on OEs performed better, identifying one-third of known potentially hazardous objects, including the three asteroids with the highest chance of impact with Earth (2009 FD, 1999 RQ36, 1950 DA) as established by NASA's Monte Carlo-based Sentry system.

[submitted] A pseudo GUI with pyplot

Working with a GUI, or adding interactivity to plots, helps considerably in data analysis. However, the common Python GUI toolkits are OS-dependent, while manually writing interaction code is complex. This package provides a pseudo-GUI tool that adds buttons and checkboxes to a figure and assigns callback functions to them. The documentation is currently in Chinese; an English version is planned for the next release. The program is published on PyPI and can be installed with 'pip install pltgui'.
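
The kind of in-figure interactivity described here can be illustrated with matplotlib's own widgets; the sketch below uses matplotlib.widgets.Button as a generic example of the pattern, and is not the pltgui API.

```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.widgets import Button

# A button drawn inside the figure with a callback attached: the same
# pseudo-GUI pattern the entry describes, using only pyplot machinery.
fig, ax = plt.subplots()
x = np.linspace(0, 2 * np.pi, 200)
(line,) = ax.plot(x, np.sin(x))

def randomize(event):
    line.set_ydata(np.sin(np.random.uniform(1, 5) * x))  # draw a new curve
    fig.canvas.draw_idle()

btn_ax = fig.add_axes([0.8, 0.01, 0.15, 0.06])  # button location in figure
btn = Button(btn_ax, "Randomize")               # keep a reference alive
btn.on_clicked(randomize)
plt.show()
```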

[submitted] a scheme that combines 'Srun job submission mode', 'Sbatch job submission mode' and Monitor function (SSM)

1. Configure the basic environment
2. Modify the config file
3. Run auto_task_scheduler.py

[ascl:1312.011] A_phot: Photon Asymmetry

Photon asymmetry is a novel robust substructure statistic for X-ray cluster observations with only a few thousand counts; it exhibits better stability than power ratios and centroid shifts and has a smaller statistical uncertainty than competing substructure parameters, allowing low levels of substructure to be measured with confidence. A_phot computes the photon asymmetry (A_phot) parameter for morphological classification of clusters and allows quantifying substructure in samples of distant clusters covering a wide range of observational signal-to-noise ratios. The Python scripts are completely automatic and can be used to rapidly classify galaxy cluster morphology for large numbers of clusters without human intervention.

[ascl:2209.001] A-SLOTH: Semi-analytical model to connect first stars and galaxies to observables

A-SLOTH (Ancient Stars and Local Observables by Tracing Halos) connects the formation of the first stars and galaxies to observables. The model is based on dark matter merger trees, on which A-SLOTH applies analytical recipes for baryonic physics to model the formation of both metal-free and metal-poor stars and the transition between them. The software samples individual stars and includes radiative, chemical, and mechanical feedback. A-SLOTH has versatile applications with moderate computational requirements. It can be used to constrain the properties of the first stars and high-z galaxies based on local observables, predicts properties of the oldest and most metal-poor stars in the Milky Way, can serve as a subgrid model for larger cosmological simulations, and predicts next-generation observables of the early Universe, such as supernova rates or gravitational wave events.

[ascl:1704.010] A-Track: Detecting Moving Objects in FITS images

A-Track is a fast, open-source, cross-platform pipeline for detecting moving objects (asteroids and comets) in sequential telescope images in FITS format. The moving objects are detected using a modified line detection algorithm.

[ascl:1910.003] a3cosmos-gas-evolution: Galaxy cold molecular gas evolution functions

a3cosmos-gas-evolution calculates galaxies' cold molecular gas properties using gas scaling functions derived from the A3COSMOS project. Given galaxies' redshifts or cosmic ages, stellar masses, and star formation enhancement relative to the star-forming main sequence (ΔMS), the gas scaling functions predict their gas-to-stellar mass ratio (gas fraction) and gas depletion time.

[ascl:2406.013] AAD: ALeRCE Anomaly Detector

The ALeRCE anomaly detector cross-validates six anomaly detection algorithms for three classes (transient, periodic, and stochastic) of anomalous sources within the Zwicky Transient Facility (ZTF) data stream using the ALeRCE light curve features. A machine and deep learning-based framework is used for anomaly detection. For each class, a distinct anomaly detection model is constructed using only information about the known objects (i.e., inliers) for training. An anomaly score is computed from the class probabilities, which determine whether the light curve is of a transient, stochastic, or periodic nature.

[ascl:1110.009] AAOGlimpse: Three-dimensional Data Viewer

AAOGlimpse is an experimental display program that uses OpenGL to display FITS data (and even JPEG images) as 3D surfaces that can be rotated and viewed from different angles, all in real-time. It is WCS-compliant and designed to handle three-dimensional data. Each plane in a data cube is surfaced in the same way, and the program allows the user to travel through a cube by 'peeling off' successive planes, or to look into a cube by suppressing the display of data below a given cutoff value. It can blink images and can superimpose images and contour maps from different sources using their world coordinate data. A limited socket interface allows communication with other programs.

[ascl:2406.023] AARD: Automatic detection of solar active regions

This Python code automatically detects solar active regions (ARs). Based on morphological operations and region growing, it uses synoptic magnetograms from SOHO/MDI and SDO/HMI and calculates the parameters that characterize each AR, including the latitudes and longitudes of the flux-weighted centroids of the two polarities and of the whole AR, the area and the flux of each polarity, and the initial and final dipole moments.

[ascl:2302.023] AART: Adaptive Analytical Ray Tracing

AART (Adaptive Analytical Ray Tracing) exploits the integrability properties of the Kerr spacetime to compute high-resolution black hole images and their visibility amplitude on long interferometric baselines. It implements a non-uniform adaptive grid on the image plane suitable to study black hole photon rings (narrow ring-shaped features, predicted by general relativity but not yet observed). The code implements all the relevant equations required to compute the appearance of equatorial sources on the (far) observer's screen.

[ascl:2305.013] aartfaac2ms: Aartfaac datasets converter

aartfaac2ms converts raw Aartfaac correlator files to the casacore (ascl:1912.002) measurement set format. It phase rotates the data to a common phase center, and (optionally) flags, averages, and compresses the data. The code includes a tool, afedit, to splice a raw Aartfaac set based on LST.

[ascl:2405.016] ABBHI: Autoregressive binary black hole inference

autoregressive-bbh-inference, written in Python, models the distributions of binary black hole masses, spins, and redshifts to identify physical features appearing in these distributions without the need for strongly-parametrized population models. This allows not only agnostic study of the "known unknowns" of the black hole population but also reveals the "unknown unknowns," the unexpected and impactful features that may otherwise be missed by the standard building-block method.

[ascl:1504.014] abcpmc: Approximate Bayesian Computation for Population Monte-Carlo code

abcpmc is a Python Approximate Bayesian Computing (ABC) Population Monte Carlo (PMC) implementation based on Sequential Monte Carlo (SMC) with Particle Filtering techniques. It is extendable with k-nearest neighbour (KNN) or optimal local covariance matrix (OLCM) perturbation kernels and has built-in support for massively parallelized sampling on a cluster using MPI.
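
To show the flavor of the ABC-PMC approach (simulate, compare summary statistics against a shrinking tolerance, perturb the surviving particles), here is a toy NumPy implementation; it illustrates the method generically and does not use abcpmc's API.

```python
import numpy as np

# Toy ABC-PMC: infer the mean mu of a Gaussian with known sigma = 1.
rng = np.random.default_rng(42)
data = rng.normal(1.5, 1.0, size=500)              # "observed" data

def simulate(mu):
    return rng.normal(mu, 1.0, size=data.size)     # forward model

def distance(sim, obs):
    return abs(sim.mean() - obs.mean())            # summary-statistic distance

particles = rng.uniform(-5, 5, size=2000)          # draws from the prior
for eps in [1.0, 0.3, 0.1, 0.03]:                  # decreasing tolerances
    kept = np.array([mu for mu in particles
                     if distance(simulate(mu), data) < eps])
    # PMC step: resample the survivors and perturb with a Gaussian kernel
    particles = rng.choice(kept, size=2000) + rng.normal(0, 0.1, size=2000)

print(f"approximate posterior mean: {kept.mean():.2f}")
```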

[ascl:1507.007] abo-cross: Hydrogen broadening cross-section calculator

Line broadening cross sections for the broadening of spectral lines by collisions with neutral hydrogen atoms have been tabulated by Anstee & O’Mara (1995), Barklem & O’Mara (1997), and Barklem, O’Mara & Ross (1998) for s–p, p–s, p–d, d–p, d–f and f–d transitions. abo-cross, written in Fortran, interpolates in these tabulations to make these data more accessible to the end user. This code can be incorporated into existing spectrum synthesis programs or used in stand-alone mode to compute line broadening cross sections for specific transitions.

[ascl:1401.007] abundance: High Redshift Cluster Abundance

abundance, written in Fortran, provides driver and fitting routines to compute the predicted number of clusters in a ΛCDM cosmology that agrees with CMB, SN, BAO, and H0 measurements (up to 2010) at some specified parameter confidence and the mass that would rule out that cosmology at some specified sample confidence. It also computes the expected number of such clusters in the light cone and the Eddington bias factor that must be applied to observed masses.

[ascl:2212.016] AbundanceMatching: Subhalo abundance matching with scatter

The AbundanceMatching Python module creates (interpolates and extrapolates) abundance functions and also provides fiducial deconvolution and abundance matching.

[ascl:1303.026] ACORNS-ADI: Algorithms for Calibration, Optimized Registration and Nulling the Star in Angular Differential Imaging

ACORNS-ADI, written in Python, is a parallelized software package that reduces high-contrast imaging data. Originally written for imaging data from Subaru/HiCIAO, it requires minimal modification to reduce data from other instruments. It is efficient, open source, and includes several optional features which may improve performance.

[ascl:2003.003] acorns: Agglomerative Clustering for ORganising Nested Structures

acorns generates a hierarchical system of clusters within discrete data by using an n-dimensional unsupervised machine-learning algorithm that clusters spectroscopic position-position-velocity data. The algorithm is based on a technique known as hierarchical agglomerative clustering. Although acorns was designed with the analysis of discrete spectroscopic position-position-velocity (PPV) data in mind (rather than uniformly spaced data cubes), clustering can be performed in n-dimensions and the algorithm can be readily applied to other data sets in addition to PPV measurements.
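
Hierarchical agglomerative clustering itself can be demonstrated on toy PPV points with SciPy; the sketch below is a generic illustration of the technique, not the acorns interface.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Two toy "clumps" in position-position-velocity (x, y, v) space.
rng = np.random.default_rng(3)
ppv = np.vstack([rng.normal(0, 1, (50, 3)),
                 rng.normal(5, 1, (60, 3))])

Z = linkage(ppv, method="average")               # build the merger hierarchy
labels = fcluster(Z, t=2, criterion="maxclust")  # cut the tree into 2 clusters
print(np.bincount(labels)[1:])                   # members per cluster
```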

[ascl:1302.003] ACS: ALMA Common Software

ALMA Common Software (ACS) provides a software infrastructure common to all ALMA partners and consists of a documented collection of common patterns and components which implement those patterns. The heart of ACS is based on a distributed Component-Container model, with ACS Components implemented as CORBA objects in any of the supported programming languages. ACS provides common CORBA-based services such as logging, error and alarm management, configuration database, and lifecycle management. Although designed for ALMA, ACS can be and is being used in other control systems and distributed software projects, since it implements proven design patterns using state-of-the-art, reliable technology. Its use of well-known standard constructs and components also allows team members who are not ACS authors to easily understand the architecture of software modules, making maintenance affordable even on a very large project.

[ascl:2011.024] ACStools: Python tools for Hubble Space Telescope Advanced Camera for Surveys data

The ACStools package contains Python tools to work with data from the Hubble Space Telescope (HST) Advanced Camera for Surveys (ACS). The package has several calibration utilities and a zeropoints calculator, can detect satellite trails, and offers destriping, polarization, and photometric tools.

[ascl:1908.003] ActSNClass: Active learning for supernova photometric classification

ActSNClass uses a parametric feature extraction method, a Random Forest classifier, and two learning strategies (uncertainty sampling and random sampling) to perform active learning for supernova photometric classification.

[ascl:1502.004] ADAM: All-Data Asteroid Modeling

ADAM (All-Data Asteroid Modeling) models asteroid shape reconstruction from observations. Developed in MATLAB with core routines in C, its features include general nonconvex and non-starlike parametric 3D shape supports and reconstruction of asteroid shape from any combination of lightcurves, adaptive optics images, HST/FGS data, disk-resolved thermal images, interferometry, and range-Doppler radar images. ADAM does not require boundary contour extraction for reconstruction and can be run in parallel.

[ascl:2011.001] AdaMet: Adaptive Metropolis for Bayesian analysis

AdaMet (Adaptive Metropolis) performs efficient Bayesian analysis. The user-friendly Python package is an implementation of the Adaptive Metropolis algorithm. In many real-world applications, it is more efficient and robust than emcee (ascl:1303.002), whose warm-up phase scales linearly with the number of walkers. For this reason, and because of its didactic value, the AdaMet code is provided as an alternative.
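
For intuition, the Adaptive Metropolis idea (a random-walk Metropolis sampler whose proposal covariance is periodically re-estimated from the chain history) can be sketched in a few lines of NumPy; this toy version illustrates the algorithm, not AdaMet's API.

```python
import numpy as np

def log_post(x):                       # toy target: correlated 2D Gaussian
    icov = np.linalg.inv(np.array([[1.0, 0.8], [0.8, 1.0]]))
    return -0.5 * x @ icov @ x

rng = np.random.default_rng(0)
ndim, nsteps = 2, 20000
chain = np.empty((nsteps, ndim))
x, lp = np.zeros(ndim), log_post(np.zeros(ndim))
cov = np.eye(ndim) * 0.1               # initial proposal covariance

for i in range(nsteps):
    if i and i % 500 == 0:             # adapt: rescale to chain covariance
        cov = np.cov(chain[:i].T) * 2.38**2 / ndim + 1e-8 * np.eye(ndim)
    prop = rng.multivariate_normal(x, cov)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:   # Metropolis accept/reject
        x, lp = prop, lp_prop
    chain[i] = x
```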

[ascl:1305.004] AdaptaHOP: Subclump finder

AdaptaHOP is a structure and substructure detector. It reads an input particle distribution file and can compute the mean square distance between each particle and its nearest neighbors, or the SPH density associated with each particle along with the list of its nearest neighbors. It can also read an input particle distribution and a neighbors file (output from a previous run) and output the tree of structures and substructures.

[ascl:1609.024] AdaptiveBin: Adaptive Binning

AdaptiveBin takes one or more images and adaptively bins them. If one image is supplied, then the pixels are binned by fractional error on the intensity. If two or more images are supplied, then the pixels are binned by fractional error on the combined color.
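
One simple way to realize binning by fractional error is a quadtree-style scheme in which bin sizes double until the fractional Poisson error of the summed counts falls below a threshold; the sketch below illustrates that idea and is not AdaptiveBin's exact algorithm.

```python
import numpy as np

def adaptive_bin(counts, max_frac_err=0.1, max_level=5):
    """Bin a counts image adaptively: each pixel block grows (1x1, 2x2,
    4x4, ...) until the fractional Poisson error sqrt(C)/C of its summed
    counts C meets the threshold. Assumes sides divisible by 2**max_level."""
    out = np.full(counts.shape, np.nan)
    done = np.zeros(counts.shape, dtype=bool)
    for level in range(max_level + 1):
        size = 2 ** level
        for y in range(0, counts.shape[0] - size + 1, size):
            for x in range(0, counts.shape[1] - size + 1, size):
                block = (slice(y, y + size), slice(x, x + size))
                if done[block].any():
                    continue                   # already binned at finer scale
                total = counts[block].sum()
                frac_err = np.sqrt(total) / total if total > 0 else np.inf
                if frac_err <= max_frac_err or level == max_level:
                    out[block] = total / size**2   # mean intensity of the bin
                    done[block] = True
    return out
```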

[ascl:1010.024] ADAPTSMOOTH: A Code for the Adaptive Smoothing of Astronomical Images

ADAPTSMOOTH serves to smooth astronomical images in an adaptive fashion in order to enhance the signal-to-noise ratio (S/N). The adaptive smoothing scheme takes full advantage of the spatially resolved photometric information contained in an image: at any location, the minimal smoothing is applied to reach the requested S/N. Multiple images can be matched to the same smoothing length, so that proper estimates of local colors can be made, with a big potential impact on multi-wavelength studies of extended sources (galaxies, nebulae). Different modes to estimate the local S/N are provided. In addition to the classical arithmetic-mean averaging mode, the code can operate in median averaging mode, resulting in a significant enhancement of the final image quality and very accurate flux conservation.

[ascl:2204.015] ADBSat: Aerodynamic Database for Satellites

ADBSat computes aerodynamic coefficient databases for satellite geometries in free-molecular flow (FMF) conditions. Written in MATLAB, ADBSat imports body geometry from .stl or .obj mesh files, calculates aerodynamic force and moment coefficient for different gas-surface interaction models, and calculates solar radiation pressure force and moment coefficient. It also takes multiple surface and material characteristics into consideration. ADBSat is a panel-method tool that is able to calculate aerodynamic or solar force and moment coefficient sets for satellite geometries by applying analytical (closed-form) expressions for the interactions to discrete flat-plate mesh elements. The panel method of ADBSat assumes FMF conditions. The code analyzes basic shadowing to identify panels that are shielded from the flow by other parts of the body and will therefore not experience any surface interactions. However, this method is dependent on the refinement of the input mesh and can be sensitive to the orientation and arrangement of the mesh elements with respect to the oncoming flow direction.

[ascl:2307.039] adiabatic-tides: Tidal stripping of dark matter (sub)haloes

adiabatic-tides evaluates the tidal stripping of dark matter (sub)haloes in the adiabatic limit. It exactly reproduces the remnant of an NFW halo that is exposed to a slowly increasing isotropic tidal field and approximately reproduces the remnant for an anisotropic tidal field. adiabatic-tides also predicts the asymptotic mass loss limit for orbiting subhaloes and differently concentrated host-haloes with and without baryonic components, and can be used to improve predictions of dark matter annihilation.

[ascl:1109.002] ADIPLS: Aarhus Adiabatic Oscillation Package (ADIPACK)

The goal of the development of the Aarhus Adiabatic Oscillation Package was to have a simple and efficient tool for the computation of adiabatic oscillation frequencies and eigenfunctions for general stellar models, emphasizing also the accuracy of the results. The Fortran code offers considerable flexibility in the choice of integration method as well as ability to determine all frequencies of a given model, in a given range of degree and frequency. Development of the Aarhus adiabatic pulsation code started around 1978. Although the main features have been stable for more than a decade, development of the code is continuing, concerning numerical properties and output. The code has been provided as a generally available package and has seen substantial use at a number of installations. Further development of the package, including bringing the documentation closer to being up to date, is planned as part of the HELAS Coordination Action.

[ascl:1203.001] AE: ACIS Extract

ACIS Extract (AE), written in the IDL language, provides innovative and automated solutions to the varied challenges found in the analysis of X-ray data taken by the ACIS instrument on NASA's Chandra observatory. AE addresses complications found in many Chandra projects: large numbers of point sources (hundreds to several thousand), faint point sources, misaligned multiple observations of an astronomical field, point source crowding, and scientifically relevant diffuse emission. AE can perform virtually all the data processing and analysis tasks that lie between Level 2 ACIS data and publishable LaTeX tables of point-like and diffuse source properties and spectral models.

[ascl:1212.009] Aegean: Compact source finding in radio images

Aegean, written in python, finds compact sources within radio images by seeking out islands of pixels above a given threshold and then using the curvature of the image to determine how many Gaussian components should be used to describe the island. The Gaussian fitting is initiated with parameters determined from the curvature and intensity maps, and makes use of mpfit to perform a constrained fit. Aegean has been optimized for compact radio sources in images that have no diffuse background emission, but by pre-processing the images with a spatial filter, or by convolving an optical image with an appropriately small PSF, Aegean is able to produce excellent results in a range of applications.

[ascl:1812.004] aesop: ARC Echelle Spectroscopic Observation Pipeline

aesop (ARC Echelle Spectroscopic Observation Pipeline) analyzes echelle spectra for observations made by the Astrophysics Research Consortium (ARC) Echelle Spectrograph on the ARC 3.5 m Telescope at Apache Point Observatory. It is a high resolution spectroscopy software toolkit that picks up where the traditional IRAF reduction scripts leave off, and offers blaze function normalization by polynomial fits to observations of early-type stars, a robust least-squares normalization method, and radial velocity measurements (or offset removals) via cross-correlation with model spectra, including barycentric radial velocity calculations. It also concatenates multiple echelle orders into a simple 1D spectrum and provides approximate flux calibration.

[ascl:2405.017] AFINO: Automated Flare Inference of Oscillations

AFINO (Automated Flare Inference of Oscillations) finds oscillations in time series data using a Fourier-based model comparison approach. The code analyzes the data and generates a results file in either JSON or Pickle format, which contains numerous properties of the data and analysis, as well as a summary plot.

[ascl:1509.003] AFR (ASPFitsReader): A pulsar FITS file reader and analysis package

AFR, or ASPFitsReader, reduces, processes, and manipulates pulsar data, including calibration, template profile creation, and interactive excision of radio frequency interference from pulsar profile data. It also creates times-of-arrival compatible with Tempo (ascl:1509.002) and Tempo2 (ascl:1210.015) timing software.

[ascl:1805.008] AGAMA: Action-based galaxy modeling framework

The AGAMA library is a collection of tools for constructing and analyzing models of galaxies. It computes gravitational potential and forces, performs orbit integration and analysis, and can convert between position/velocity and action/angle coordinates. It offers a framework for finding best-fit parameters of a model from data and self-consistent multi-component galaxy models, and contains useful auxiliary utilities such as various mathematical routines. The core of the library is written in C++, and there are Python and Fortran interfaces. AGAMA may be used as a plugin for the stellar-dynamical software packages galpy (ascl:1411.008), AMUSE (ascl:1107.007), and NEMO (ascl:1010.051).
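
A minimal sketch of the Python interface is shown below; the Potential, orbit, and ActionFinder names follow AGAMA's documented conventions, but parameter details should be verified against the installed version.

```python
import agama

# Hedged sketch: an NFW potential, one orbit, and actions for one point.
agama.setUnits(mass=1, length=1, velocity=1)      # Msun, kpc, km/s
pot = agama.Potential(type="NFW", mass=1e12, scaleRadius=20.0)

ic = [8.0, 0.0, 0.0, 0.0, 200.0, 0.0]             # x,y,z [kpc], vx,vy,vz [km/s]
times, traj = agama.orbit(potential=pot, ic=ic, time=10.0, trajsize=500)

af = agama.ActionFinder(pot)                      # action/angle machinery
print(af(ic))                                     # actions (Jr, Jz, Jphi)
```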

[ascl:1804.020] Agatha: Disentangling period signals from correlated noise in a periodogram framework

Agatha is a framework of periodograms to disentangle periodic signals from correlated noise and to solve the two-dimensional model selection problem: signal dimension and noise model dimension. These periodograms are calculated by applying likelihood maximization and marginalization and combined in a self-consistent way. Agatha can be used to select the optimal noise model and to test the consistency of signals in time and can be applied to time series analyses in other astronomical and scientific disciplines. An interactive web implementation of the software is also available at http://agatha.herts.ac.uk/.

[ascl:1607.001] AGNfitter: SED-fitting code for AGN and galaxies from a MCMC approach

AGNfitter is a fully Bayesian MCMC method to fit the spectral energy distributions (SEDs) of active galactic nuclei (AGN) and galaxies from the sub-mm to the UV; it enables robust disentanglement of the physical processes responsible for the emission of sources. Written in Python, AGNfitter makes use of a large library of theoretical, empirical, and semi-empirical models to characterize both the nuclear and host galaxy emission simultaneously. The model consists of four physical emission components: an accretion disk, a torus of AGN heated dust, stellar populations, and cold dust in star forming regions. AGNfitter determines the posterior distributions of numerous parameters that govern the physics of AGN with a fully Bayesian treatment of errors and parameter degeneracies, allowing one to infer integrated luminosities, dust attenuation parameters, stellar masses, and star formation rates.

[ascl:2203.019] agnpy: Modeling jetted Active Galactic Nuclei radiative processes with Python

agnpy focuses on the numerical computation of the photon spectra produced by leptonic radiative processes in jetted Active Galactic Nuclei (AGN). It includes classes describing the galaxy components responsible for line and thermal emission and calculates the absorption due to gamma-gamma pair production on soft (IR-UV) photon fields.

[ascl:2307.007] AGNvar: Model spectral timing properties in active galactic nuclei

AGNvar calculates the expected reverberation signal in any given energy band, for a given spectral energy distribution (SED), assuming variable X-ray emission. The code predicts the shape of the re-processed continuum by modeling the time-averaged SED according to input parameters, which include geometry, mass, and mass accretion rate; generally the input parameters are based on typical XSPEC (ascl:9910.005) models. It evaluates the SED response to an input driving light curve (assumed to originate in the X-ray corona) and creates a set of time-dependent SEDs. It then takes the results from the set of time-dependent SEDs and extracts the light curve in a given band pass.

[ascl:1102.009] AHF: Amiga's Halo Finder

Cosmological simulations are the key tool for investigating the different processes involved in the formation of the universe from small initial density perturbations to galaxies and clusters of galaxies observed today. The identification and analysis of bound objects, halos, is one of the most important steps in drawing useful physical information from simulations. With the advent of larger and larger simulations, a reliable and parallel halo finder, able to cope with the ever-increasing data files, is a must. In this work we present the freely available MPI parallel halo finder AHF. We provide a description of the algorithm and the strategy followed to handle large simulation data. We also describe the parameters a user may choose in order to influence the process of halo finding, as well as point out which parameters are crucial to ensure untainted results from the parallel approach. Furthermore, we demonstrate the ability of AHF to scale to high-resolution simulations.

[ascl:2310.011] AI-Feynman: Symbolic regression algorithm

AI-Feynman fits analytical expressions to data sets via symbolic regression, mapping the target variable to different features supplied in the data array. Using a neural network with constraints on the number of parameters utilized, the code provides the ability to obtain analytical expressions for normalized features that are used to predict a Pareto-optimal target. AI-Feynman is robust in handling noisy data, recursively generating multidimensional symbolic expressions that match data from an unknown function.

[ascl:1310.003] AIDA: Adaptive Image Deconvolution Algorithm

AIDA is an implementation and extension of the MISTRAL myopic deconvolution method developed by Mugnier et al. (2004) (see J. Opt. Soc. Am. A 21:1841-1854). The MISTRAL approach has been shown to yield object reconstructions with excellent edge preservation and photometric precision when used to process astronomical images. AIDA improves upon the original MISTRAL implementation. AIDA, written in Python, can deconvolve multiple frame data and three-dimensional image stacks encountered in adaptive optics and light microscopic imaging.

[ascl:1611.014] AIMS: Asteroseismic Inference on a Massive Scale

AIMS (Asteroseismic Inference on a Massive Scale) estimates stellar parameters and credible intervals/error bars in a Bayesian manner from a set of seismic frequency data and so-called classic constraints. To achieve reliable parameter estimates and computational efficiency it searches through a grid of pre-computed models using an MCMC algorithm; interpolation within the grid of models is performed by first tessellating the grid using a Delaunay triangulation and then doing a linear barycentric interpolation on matching simplexes. Inputs for the modeling consist of individual frequencies from peak-bagging, which can be complemented with classic spectroscopic constraints.
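
The grid interpolation scheme described above is exactly what SciPy's LinearNDInterpolator provides (Delaunay tessellation plus linear barycentric interpolation on the matching simplex), so it can be demonstrated generically; the toy grid below is a stand-in, not a real stellar-model grid.

```python
import numpy as np
from scipy.interpolate import LinearNDInterpolator

# Toy 2D "model grid": parameters (e.g., mass, metallicity) -> one output.
rng = np.random.default_rng(1)
grid_params = rng.uniform(0, 1, size=(200, 2))
grid_values = np.sin(grid_params[:, 0]) + grid_params[:, 1] ** 2

# Delaunay-triangulate the grid, then interpolate barycentrically.
interp = LinearNDInterpolator(grid_params, grid_values)
print(interp(0.3, 0.7))     # value interpolated inside the convex hull
```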

[ascl:2306.014] AIOLOS: Planetary atmosphere accretion and escape simulations

AIOLOS solves differential equations for hydrodynamics, friction, (thermal) radiation transport, and (photo)chemistry for simulating accretion onto, and hydrodynamic escape from, planetary atmospheres. The 1-D multispecies, multiphysics hydrodynamics code, written in C++, compiles in a flexible mode that runs problems with any number of input species; it can be sped up by fixing the number of species at compile time, and it allows the user to provide initial conditions or boundary conditions if desired. AIOLOS provides output and diagnostic files that give snapshots in time of the state of the simulation. Output files are specific to each species, and diagnostic files contain summary as well as detailed information on, for example, the radiation transport, opacities for all species, and optical cell depths per band, in addition to other information.

[ascl:9911.003] AIPS: Astronomical Image Processing System

AIPS ("Classic") is a software package for interactive and batch calibration and editing of astronomical data, typically radio interferometric data. AIPS can be used for the calibration, construction, enhancement, display, and analysis of astronomical images made from data using Fourier synthesis methods. Design and development of the package begin in 1978. AIPS presently consists of over 1,000,000 lines of code and 400,000 lines of documentation, representing over 65 person-years of effort.

[ascl:1310.006] AIPSLite: ParselTongue extension for distributed AIPS processing

AIPSLite is an extension for ParselTongue (ascl:1208.020) that allows machines without an AIPS (ascl:9911.003) distribution to bootstrap themselves with a minimal AIPS environment. This allows deployment of AIPS routines on distributed systems, which is useful when data can easily be split into smaller chunks and handled independently.

[ascl:1609.012] AIPY: Astronomical Interferometry in PYthon

AIPY collects together tools for radio astronomical interferometry. In addition to pure-python phasing, calibration, imaging, and deconvolution code, this package includes interfaces to MIRIAD (ascl:1106.007) and HEALPix (ascl:1107.018), and math/fitting routines from SciPy.

[ascl:1107.006] AIRES: AIRshower Extended Simulations

The objective of this work is to report on the influence of muon interactions on the development of air showers initiated by astroparticles. We make a comparative study of the different theoretical approaches to muon bremsstrahlung and muonic pair production interactions. A detailed algorithm that includes all the relevant characteristics of such processes has been implemented in the AIRES air shower simulation system. We have simulated ultra high energy showers in different conditions in order to measure the influence of these muonic electromagnetic interactions. We have found that during the late stages of the shower development (well beyond the shower maximum) many global observables are significantly modified in relative terms when the mentioned interactions are taken into account. This is most evident in the case of the electromagnetic component of very inclined showers. On the other hand, our simulations indicate that the studied processes do not induce significant changes either in the position of the shower maximum or the structure of the shower front surface.

[ascl:1310.004] AIRY: Astronomical Image Restoration in interferometrY

AIRY simulates optical and near-infrared interferometric observations; it can also perform subsequent image restoration or deconvolution. It is based on the CAOS (ascl:1106.017) Problem Solving Environment. Written in IDL, it consists of a set of specific modules, each handling a particular task.

[ascl:1402.005] Aladin Lite: Lightweight sky atlas for browsers

Aladin Lite is a lightweight version of the Aladin tool, running in the browser and geared towards simple visualization of a sky region. It allows visualization of image surveys (JPEG multi-resolution HEALPix all-sky surveys) and permits superimposing tabular (VOTable) and footprint (STC-S) data. Aladin Lite is powered by HTML5 canvas technology, is easily embeddable on any web page, and can also be controlled through a JavaScript API.

[ascl:1112.019] Aladin: Interactive Sky Atlas

Aladin is an interactive software sky atlas allowing the user to visualize digitized astronomical images, superimpose entries from astronomical catalogues or databases, and interactively access related data and information from the Simbad database, the VizieR service and other archives for all known sources in the field.

Created in 1999, Aladin has become a widely used VO tool capable of addressing challenges such as locating data of interest, accessing and exploring distributed datasets, and visualizing multi-wavelength data. Compliance with existing and emerging VO standards, interconnection with other visualization and analysis tools, and the ability to easily compare heterogeneous data make Aladin a powerful data exploration and integration tool as well as a science enabler.

[ascl:2306.009] Albatross: Stellar stream parameter inference with neural ratio estimation

Albatross analyzes Milky Way stellar streams. This Simulation-Based Inference (SBI) library is built on top of swyft (ascl:2302.016), which implements neural ratio estimation to efficiently access marginal posteriors for all parameters of interest. Using swyft for its internal Truncated Marginal Neural Ratio Estimation (TMNRE) algorithm and sstrax (ascl:2306.008) for fast simulation and modeling, Albatross provides a modular inference pipeline to support parameter inference on all relevant parts of stellar stream models.

[ascl:1708.008] ALCHEMIC: Advanced time-dependent chemical kinetics

ALCHEMIC solves chemical kinetics problems, including gas-grain interactions, surface reactions, deuterium fractionation, and transport phenomena, and can model the time-dependent chemical evolution of molecular clouds, hot cores, corinos, and protoplanetary disks.

[ascl:2307.004] ALF: Absorption line fitter

alf fits absorption line spectra from the optical to the near-IR. Initially written to constrain the stellar IMF in old massive galaxies, the code now also offers theoretical age- and metallicity-dependent response functions covering 19 elements, nuisance parameters to capture uncertainties in stellar evolution, and parameters to capture uncertainties in the data, including modeling telluric absorption and sky line residuals. alf can fit stellar populations with metallicities from approximately -2.0 to +0.3 and performs well when fitting stellar populations ranging from metal-poor globular clusters to brightest cluster galaxies. The software works in continuum-normalized space and so does not make any use of the shape of the continuum (nor of corresponding photometry). Fitting is handled with emcee (ascl:1303.002); the code is MPI parallelized and runs efficiently on many processors, though fitting data with alf is time intensive.

[ascl:1512.005] ALFA: Automated Line Fitting Algorithm

ALFA fits emission line spectra of arbitrary wavelength coverage and resolution, fully automatically. It uses a catalog of lines which may be present to construct synthetic spectra, the parameters of which are then optimized by means of a genetic algorithm. Uncertainties are estimated using the noise structure of the residuals. An emission line spectrum containing several hundred lines can be fitted in a few seconds using a single processor of a typical contemporary desktop or laptop PC. Data cubes in FITS format can be analysed using multiple processors, and an analysis of tens of thousands of deep spectra obtained with instruments such as MUSE will take a few hours.

[ascl:2107.011] AlignBandColors: Inter-color-band image alignment tool

AlignBandColors (ABC) aligns inter-color-band astronomical images to 1/100th of a pixel accuracy using surrounding stars as guiding points. It has currently been tested with Sloan Digital Sky Survey (SDSS) Data Release 12 images, but is designed to be survey-independent. The code is part of the SpArcFiRe (ascl:2107.010) method.

[ascl:1804.021] allantools: Allan deviation calculation

allantools calculates Allan deviation and related time and frequency statistics. The library is written in Python and has a GPL v3+ license. It takes as input evenly spaced observations of either fractional frequency or phase in seconds. Deviations are calculated for given tau values in seconds. Several noise generators for creating synthetic datasets are also included.
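
A short hedged example of the library in use is below; it assumes the oadev() interface of recent allantools releases (data_type="freq" for fractional-frequency input, rate in Hz).

```python
import numpy as np
import allantools

# Overlapping Allan deviation of simulated white frequency noise,
# assuming allantools' oadev() signature in recent releases.
y = np.random.default_rng(0).normal(0, 1e-9, size=10000)
taus, adev, adev_err, n = allantools.oadev(
    y, rate=1.0, data_type="freq", taus="octave")
for t, a in zip(taus, adev):
    print(f"tau = {t:8.1f} s   ADEV = {a:.3e}")
```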

[ascl:1903.003] allesfitter: Flexible star and exoplanet inference from photometry and radial velocity

allesfitter provides flexible and robust inference of stars and exoplanets given photometric and radial velocity (RV) data. The software offers a rich selection of orbital and transit models, accommodating multiple exoplanets, multi-star systems, star spots, stellar flares, and various noise models. It features both parameter estimation and model selection. A graphical user interface is used to specify input parameters, and to easily run a nested sampling or Markov Chain Monte Carlo (MCMC) fit, producing publication-ready tables, LaTex code, and plots. allesfitter provides an inference framework that unites the versatile packages ellc (ascl:1603.016), aflare (flare model; Davenport et al. 2014), dynesty (ascl:1809.013), emcee (ascl:1303.002) and celerite (ascl:1709.008).

[ascl:2201.005] AllStarFit: R package for source detection, PSF and multi-component galaxy fitting

AllStarFit analyzes optical and infrared images and includes functions for:
- object detection and image segmentation using the ProFound package (ascl:1804.006);
- PSF determination using the ProFit package (ascl:1612.004) to fit multiple stars in the field simultaneously; and
- galaxy modelling with ProFit, using the previously determined PSF and user-specified models.

AllStarFit supports a variety of optimization methods (provided by external packages), including maximum-likelihood and Markov chain Monte Carlo (MCMC).

[ascl:2301.029] ALMA3: plAnetary Love nuMbers cAlculator

ALMA3 computes loading and tidal Love numbers for a spherically symmetric, radially stratified planet. Both real (time-domain) and complex (frequency-domain) Love numbers can be computed. The planetary structure can include an arbitrary number of layers, and each layer can have a different rheological law. ALMA3 can model numerous linear rheologies, including Elastic, Maxwell visco-elastic, Newtonian viscous fluid, Kelvin-Voigt solid, Burgers and Andrade transient rheologies.

[ascl:2306.025] ALminer: ALMA archive mining and visualization toolkit

ALminer queries, analyzes, and visualizes the ALMA Science Archive. Users can programmatically query the archive for positions, target names, or other keywords in the archive metadata (such as proposal title, abstract, or scientific category). ALminer's plotting routines allow the query results to be visualized, and its analysis functions allow users to filter the results and check whether certain frequencies of interest are covered in the queried observations. The code also allows users to directly download ALMA data products in FITS format and/or the raw data that can be used for manual image processing. ALminer has been designed to make mining the ALMA archive as simple as possible, while being flexible to be customized according to the user's scientific interests. The code is released with a detailed tutorial Jupyter notebook, introducing ALminer's common functions as well as some of its more advanced options.

[ascl:2109.002] alpconv: Calculating alp-photon conversion

alpconv calculates alp-photon conversion by computing the degree of irregularity of the spectrum, in contrast to some other methods that fit the source's spectrum with both null and ALP models and then compare the goodness of fit between the two.

[ascl:2201.009] AltaiPony: Flare finder for Kepler, K2, and TESS light curves

AltaiPony de-trends light curves from the Kepler, K2, and TESS missions and searches them for flares. The code also injects and recovers synthetic flares to account for de-trending and noise loss in flare energy, and determines an energy-dependent recovery probability for every flare candidate. AltaiPony uses K2SC (ascl:1605.012), AstroPy (ascl:1304.002), and lightkurve (ascl:1812.013) in addition to other common codes, and extensive documentation and tutorials are provided for the software.
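
A minimal sketch of the documented fetch, de-trend, and search loop (the target identifier is only an example):

    from altaipony.lcio import from_mast

    # fetch a TESS light curve, de-trend it, and search for flares
    flc = from_mast("TIC 29780677", mission="TESS", mode="LC")
    flc = flc.detrend("savgol")
    flc = flc.find_flares()
    print(flc.flares)  # table of flare candidates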

[ascl:1106.001] AlterBBN: A program for calculating the BBN abundances of the elements in alternative cosmologies

AlterBBN evaluates the abundances of the elements generated by Big-Bang nucleosynthesis (BBN). This program computes the abundances of the elements in the standard model of cosmology and allows the user to alter the assumptions of the cosmological model to study their consequences on the abundances of the elements. In particular, the baryon-to-photon ratio and the effective number of neutrinos, as well as the expansion rate and the entropy content of the Universe during BBN, can be modified in AlterBBN. Such features allow the user to test cosmological models by confronting them with BBN constraints.

[ascl:2205.002] am: Microwave through submillimeter-wave propagation tool for the terrestrial atmosphere

am performs optical depth, radiative transfer, and refraction computations involving propagation through the terrestrial atmosphere and other media at microwave through submillimeter wavelengths. The program is used in radio astronomy, atmospheric radiometry, and radio spectrum management.

[ascl:2312.031] AM3: Astrophysical Multi-Messenger Modeling

AM3 simulates lepto-hadronic interactions in astrophysical environments. It solves the time-dependent partial differential equations for the energy spectra of electrons, positrons, protons, neutrons, photons, and neutrinos, as well as charged secondaries (pions and muons), immersed in an isotropic magnetic field. The code accounts for the emission of photons and charged secondaries in electromagnetic and hadronic interactions, which feed back into the interaction rates in a time-dependent manner, thereby capturing non-linear effects including electromagnetic cascades. AM3 is computationally efficient, making it possible to scan vast source parameter spaces and fit observational data, and has been deployed to explain multi-wavelength observations of blazars, gamma-ray bursts, and tidal disruption events.

[ascl:1503.006] AMADA: Analysis of Multidimensional Astronomical DAtasets

AMADA allows iterative exploration and information retrieval of high-dimensional data sets. This is done by performing a hierarchical clustering analysis for different choices of correlation matrices and by performing a principal component analysis on the original data. Additionally, AMADA provides a set of modern visualization data-mining diagnostics; the user can switch between them using the different tabs.

[submitted] amber_meta

amber_meta integrates a few routines to launch AMBER (ascl:2209.007) in a systematic manner. Rather than requiring the user to manually type a command line with all the parameters required to launch AMBER, amber_meta generates the command from configuration files, and can directly launch AMBER instances.

[ascl:2211.003] AMBER: Abundance Matching Box for the Epoch of Reionization

AMBER (Abundance Matching Box for the Epoch of Reionization) models the cosmic dawn. The semi-numerical code allows users to directly specify the reionization history through the redshift midpoint, duration, and asymmetry input parameters. The reionization process is further controlled through the minimum halo mass for galaxy formation and the radiation mean free path for radiative transfer. The parallelized code is over four orders of magnitude faster than radiative transfer simulations and will efficiently enable large-volume models, full-sky mock observations, and parameter-space studies.

[ascl:1010.003] AMBER: Data Reduction Software

AMBER data reduction software has an optional graphic interface in a high-level language, allowing the user to control the data reduction step by step or in a completely automatic manner. The software has a robust calibration scheme that makes use of the full calibration sets available during the night. The output products are standard OI-FITS files, which can be used directly in high-level software such as model fitting or image reconstruction tools.

[ascl:2209.007] AMBER: Fast pipeline for detecting single-pulse radio transients

AMBER (Apertif Monitor for Bursts Encountered in Real-time) detects single-pulse radio phenomena, such as pulsars and fast radio bursts, in real time. It is a fully auto-tuned pipeline that offloads compute-intensive kernels to many-core accelerators; the software automatically tunes these kernels to achieve high performance on different platforms.

[ascl:1404.007] AMBIG: Automated Ambiguity-Resolution Code

AMBIG is a fast, automated algorithm for resolving the 180° ambiguity in vector magnetic field data, including those data from Hinode/Spectropolarimeter. The Fortran-based code is loosely based on the Minimum Energy Algorithm, and is distributed to provide ambiguity-resolved data for the general user community.

[ascl:2302.021] AMICAL: Aperture Masking Interferometry Calibration and Analysis Library

AMICAL (Aperture Masking Interferometry Calibration and Analysis Library) processes Aperture Masking Interferometry (AMI) data from major existing facilities, such as NIRISS on the JWST, SPHERE and VISIR on the European Very Large Telescope (VLT), and VAMPIRES on the SUBARU telescope. The library cleans the reduced datacube from the standard instrument pipelines, extracts the interferometric quantities (visibilities and closure phases) using a Fourier sampling approach, and calibrates those quantities to remove the instrumental biases. In addition, two external packages (CANDID and Pymask) are included to analyze the final outputs obtained from binary-like sources (star-star or star-planet); these stand-alone packages are interfaced with AMICAL to quickly estimate scientific results (e.g., separation, position angle, contrast ratio, and contrast limits) using different approaches.

[ascl:1007.006] AMIGA: Adaptive Mesh Investigations of Galaxy Assembly

AMIGA is a publicly available adaptive mesh refinement code for (dissipationless) cosmological simulations. It combines an N-body code with an Eulerian grid-based solver for the full set of magnetohydrodynamics (MHD) equations in order to conduct simulations of dark matter, baryons and magnetic fields in a self-consistent way in a fully cosmological setting. Our numerical scheme includes effective methods to ensure proper capturing of shocks and highly supersonic flows and a divergence-free magnetic field. The high accuracy of the code is demonstrated by a number of numerical tests.

[ascl:1502.017] AMIsurvey: Calibration and imaging pipeline for radio data

AMIsurvey is a fully automated calibration and imaging pipeline for data from the AMI-LA radio observatory; it has two key dependencies. The first is drive-ami, included in this entry. Drive-ami is a Python interface to the specialized AMI-REDUCE calibration pipeline, which applies path delay corrections, automatic flags for interference, pointing errors, shadowing and hardware faults, applies phase and amplitude calibrations, Fourier transforms the data into the frequency domain, and writes out the resulting data in uvFITS format. The second is chimenea, which implements an automated imaging algorithm to convert the calibrated uvFITS into science-ready image maps. AMIsurvey links the calibration and imaging stages implemented within these packages together, configures the chimenea algorithm with parameters appropriate to data from AMI-LA, and provides a command-line interface.

[ascl:2108.013] AMOEBA: Automated Gaussian decomposition

AMOEBA (Automated Molecular Excitation Bayesian line-fitting Algorithm) employs a Bayesian approach to Gaussian decomposition, resulting in an objective and statistically robust identification of individual clouds along the line of sight. It uses the Python implementation of Goodman & Weare's Affine Invariant Markov chain Monte Carlo (MCMC) Ensemble sampler emcee (ascl:1303.002) to sample the posterior probability distribution and numerically evaluate the integrals required to compute the Bayes Factor. AMOEBA takes as input a set of OH optical depth spectra and a set of expected brightness temperature spectra, obtained by measuring the brightness temperature towards the bright background continuum source (the "on-source" observations) and in a pattern surrounding the continuum source (the "off-source" observations). AMOEBA can also take as input a set of OH optical depth spectra only, and allows an arbitrary number of spectra to be fit simultaneously.

[ascl:2005.015] AMPEL: Alert Management, Photometry, and Evaluation of Light curves

AMPEL provides an analysis framework for high-throughput surveys and is suited for streamed data. The package combines the functionality of an alert broker with a generic framework capable of hosting user-contributed code; it encourages provenance and keeps track of the varying information states that a transient displays. The latter concept includes information gathered over time and data policies such as access or calibration levels.

[ascl:2307.032] AmpF: Amplification factor for solar lensing

AmpF numerically calculates the amplification factor for solar lensing. The input parameters are the gravitational-wave frequency and the source angular position with respect to the solar center; the code outputs are the amplification factor and its geometrical-optics limit. AmpF accepts variables for several attributes, and the overall amplitude of the lensing potential can be changed as needed. The method has been implemented in both C and Python.

[ascl:2409.012] AMReX: Software framework for block structured AMR

The software framework AMReX is designed for building massively parallel block-structured adaptive mesh refinement (AMR) applications. Key features of AMReX include C++ and Fortran interfaces; 1-, 2- and 3-D support; and support for cell-centered, face-centered, edge-centered, and nodal data. The framework also supports hyperbolic, parabolic, and elliptic solves on a hierarchical adaptive grid structure, optional subcycling in time for time-dependent PDEs, parallelization via flat MPI, OpenMP, hybrid MPI/OpenMP, or MPI/MPI, and parallel I/O. AMReX writes a plotfile format that can be visualized with AmrVis, VisIt (ascl:1103.007), ParaView (ascl:1103.014), and yt (ascl:1011.022).

[ascl:1107.007] AMUSE: Astrophysical Multipurpose Software Environment

AMUSE is an open source software framework for large-scale simulations in astrophysics, in which existing codes for gravitational dynamics, stellar evolution, hydrodynamics and radiative transport can be easily coupled and placed in the appropriate observational context.

[ascl:1708.028] ANA: Astrophysical Neutrino Anisotropy

ANA calculates the likelihood function for a two-component model of the astrophysical neutrino flux detected by IceCube. The first component is extragalactic. Since point sources have not been found and there is increasing evidence that one source catalog cannot describe the entire data set, ANA models the extragalactic flux as isotropic. The second component is galactic; a variety of catalogs of interest are also provided. ANA takes the galactic contribution to be proportional to the matter density of the universe. The likelihood function has one free parameter, fgal, the fraction of the astrophysical flux that is galactic. ANA finds the best-fit value of fgal and scans over 0 < fgal < 1.

[ascl:1402.019] ANAigm: Analytic model for attenuation by the intergalactic medium

ANAigm offers an updated version of the Madau model for the attenuation by intergalactic neutral hydrogen of the radiation from distant objects. The new model is written in Fortran90 and, for some redshifts, predicts attenuation magnitudes through standard broad-band filters that differ by more than 0.5--1 mag from the original Madau model.

[ascl:1908.015] Analysator: Quantitative analysis of Vlasiator files

Analysator analyzes vlsv files produced by Vlasiator (ascl:1908.014). The code facilitates studies of particle paths, pitch angle distributions, velocity distributions, and more. It can read and write VLSV files and do calculations with the data, plot the real space from VLSV files with Mayavi (ascl:1205.008), and plot the velocity space (both blocks and isosurfaces) from VLSV files. It can also take cut-throughs, pitch angle distributions, gyrophase angles, and 3D slices, plot variables with subplots in a clean format, and fit 1D polynomials to data.

[ascl:2207.030] Analysis of dipole alignment in large-scale distribution of galaxy spin directions

This code analyzes a dipole axis in the distribution of galaxy spin directions. The code takes as input a list of galaxies, their equatorial coordinates, and their spin directions. It then determines the statistical significance of a possible dipole axis at any point in the sky by comparing the cosine dependence of the spin directions to the mean and standard deviation of the cosine dependence after 2000 runs with random spin directions. A code to analyze the binomial distribution of the spin directions using Monte Carlo simulation is also available.

[ascl:1110.001] analytic_infall: A Molecular Line Infall Fitting Program

This code contains several simple radiative transfer models used for fitting the blue-asymmetric spectral line signature often found in infalling molecular cloud cores. It attempts to provide a direct measure of several physical parameters of the infalling core, including infall velocity, excitation temperature, and line-of-sight optical depth. The code includes six radiative transfer models; however, the conclusion of the associated paper is that the five-parameter "hill" model (hill5) is most likely the best match to the physical excitation conditions of real infalling Bonnor-Ebert type clouds.

[ascl:2302.007] AnalyticLC: Dynamical modeling of planetary systems

AnalyticLC generates an analytic light-curve, and optionally RV and astrometry data, from a set of initial (free) orbital elements and simultaneously fits these data. Written in MATLAB, the code is fast and efficient, and provides insight into the motion of the orbital elements, which is difficult to obtain from numerical integration. A Python wrapper for AnalyticLC is available separately.

[ascl:1912.007] anesthetic: Nested sampling visualization

anesthetic brings together tools for processing nested sampling chains, leveraging standard scientific python libraries. The code provides computation of Bayesian evidences, Kullback-Leibler divergences and Bayesian model dimensionalities, marginalized 1d and 2d plots, and dynamic replaying of nested sampling. anesthetic was designed primarily for use with nested sampling outputs, although it can be used for normal MCMC chains.
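
A short sketch of loading and plotting chains with the read_chains entry point of recent anesthetic releases (the chain root and parameter names are placeholders):

    from anesthetic import read_chains

    # load nested sampling chains and plot 1d/2d marginals
    samples = read_chains("chains/example_run")
    samples.plot_2d(["omegam", "sigma8"])

    # evidence, KL divergence, and model dimensionality summaries
    stats = samples.stats(nsamples=1000)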

[ascl:1807.012] AngPow: Fast computation of accurate tomographic power spectra

AngPow computes the auto (z1 = z2) and cross (z1 ≠ z2) angular power spectra between redshift bins (i.e. Cℓ(z1,z2)). The algorithm is based on an expansion in the Chebyshev polynomial basis and on the Clenshaw-Curtis quadrature method. AngPow is flexible and can handle any user-defined power spectra, transfer functions, bias functions, and redshift selection windows. The code is fast enough to be embedded inside programs exploring large cosmological parameter spaces through the Cℓ(z1,z2) comparison with data.
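
Schematically, and ignoring redshift-space distortions (the notation here is illustrative, with $W_i$ the selection window of bin $i$, $b$ the bias, $D$ the growth factor, and $j_\ell$ the spherical Bessel functions):

    C_\ell(z_1, z_2) = \frac{2}{\pi} \int_0^\infty dk \, k^2 P(k) \, \Delta_\ell(z_1, k) \, \Delta_\ell(z_2, k),
    \qquad
    \Delta_\ell(z_i, k) = \int dz \, W_i(z) \, b(z) \, D(z) \, j_\ell(k \, r(z))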

[ascl:9909.002] ANGSIZ: A general and practical method for calculating cosmological distances

The calculation of distances is of fundamental importance in extragalactic astronomy and cosmology. However, no practical implementation for the general case has previously been available. We derive a second-order differential equation for the angular size distance valid not only in all homogeneous Friedmann-Lemaitre cosmological models, parametrised by $\lambda_0$ and $\Omega_0$, but also in inhomogeneous 'on-average' Friedmann-Lemaitre models, where the inhomogeneity is given by the (in the general case redshift-dependent) parameter $\eta$. Since most other distances can be obtained trivially from the angular size distance, and since the differential equation can be efficiently solved numerically, this offers for the first time a practical method for calculating distances in a large class of cosmological models. We also briefly discuss our numerical implementation, which is publicly available.

[submitted] AnisoCADO

AnisoCADO is a Python package created around Eric Gendron’s code for analytically (and quickly) generating field-varying SCAO PSFs for the ELT.

[ascl:1411.019] Anmap: Image and data analysis

Anmap analyses and processes images and spectral data. Originally written for use in radio astronomy, much of its functionality is applicable to other disciplines; additional algorithms and analysis procedures allow direct use in, for example, NMR imaging and spectroscopy. Anmap emphasizes the analysis of data to extract quantitative results for comparison with theoretical models and/or other experimental data. To achieve this, Anmap provides a wide range of tools for analysis, fitting and modelling (including standard image and data processing algorithms). It also provides a powerful environment for users to develop their own analysis/processing tools either by combining existing algorithms and facilities with the very powerful command (scripting) language or by writing new routines in FORTRAN that integrate seamlessly with the rest of Anmap.

[ascl:1209.009] ANNz: Artificial Neural Networks for estimating photometric redshifts

ANNz is a freely available software package for photometric redshift estimation using Artificial Neural Networks. ANNz learns the relation between photometry and redshift from an appropriate training set of galaxies for which the redshift is already known. Where a large and representative training set is available, ANNz is a highly competitive tool when compared with traditional template-fitting methods.

For a newer implementation of this package, please see ANNz2 (ascl:1910.014).

[ascl:1910.014] ANNz2: Estimating photometric redshift and probability density functions using machine learning methods

ANNz2, a newer implementation of ANNz (ascl:1209.009), utilizes multiple machine learning methods such as artificial neural networks, boosted decision/regression trees and k-nearest neighbors to measure photo-zs based on limited spectral data. The code dynamically optimizes the performance of the photo-z estimation and properly derives the associated uncertainties. In addition to single-value solutions, ANNz2 also generates full probability density functions (PDFs) in two different ways. In addition, estimators are incorporated to mitigate possible problems of spectroscopic training samples which are not representative or are incomplete. ANNz2 is also adapted to provide optimized solutions to general classification problems, such as star/galaxy separation.

[submitted] AntabGMVA: A Python tool for managing GMVA metadata

Global mm-VLBI Array (GMVA) observations are accompanied by a large amount of metadata (the so-called 'ANTAB' files) containing the system temperature (Tsys) and gain values of the individual GMVA antennas. These data are required for the amplitude calibration of GMVA data, which is an essential part of the data reduction. Unfortunately, the Tsys measurements in the ANTAB files are not perfect, and there are almost always erroneous values in some of the files (particularly in the VLBA data). This can lead to incorrect results in the amplitude calibration and thus needs to be corrected with proper data inspection and treatment. However, every GMVA station provides its ANTAB file in its own data format, which makes the examination tricky. AntabGMVA was designed to resolve these issues and allows GMVA users to manage the GMVA ANTAB files easily and efficiently. Using AntabGMVA, one can extract, inspect, visualize, and correct the Tsys data from the ANTAB files and finally generate one single ANTAB file which includes all the final products.

[ascl:1802.008] AntiparticleDM: Discriminating between Majorana and Dirac Dark Matter

AntiparticleDM calculates the prospects of future direct detection experiments to discriminate between Majorana and Dirac Dark Matter (i.e., to determine whether Dark Matter is its own antiparticle). Direct detection event rates and mock data generation are dealt with by a variation of the WIMpy code.

[ascl:2406.006] anzu: Measurements and emulation of Lagrangian bias models for clustering and lensing cross-correlations

The anzu package offers two independent codes for hybrid Lagrangian bias models in large-scale structure. The first code measures the hybrid "basis functions"; the second takes measurements of these basis functions and constructs an emulator to obtain predictions from them at any cosmology (within the bounds of the training set). anzu is self-contained; given a set of N-body simulations used to build emulators, it measures the basis functions. Alternatively, given measurements of the basis functions, anzu should in principle be useful for constructing a custom emulator.

[ascl:1010.017] AOFlagger: RFI Software

The radio frequency interference code AOFlagger automatically flags data and can be used to analyze the data in a measurement set. The purpose of flagging is to mark samples that are affected by interfering sources such as radio stations, airplanes, electrical fences or other transmitting interferers.

The tools in the package are meant for offline use. The software package contains a graphical interface ("rfigui") that can be used to visualize a measurement set and analyze mitigation techniques. It also contains a console flagger ("rficonsole") that can execute a script of mitigation functions without the overhead of a graphical environment. All tools were written in C++.

The software has been tested extensively on low radio frequencies (150 MHz or lower) produced by the WSRT and LOFAR telescopes. LOFAR is the Low Frequency Array that is built in and around the Netherlands. Higher frequencies should work as well. Some of the methods implemented are the SumThreshold, the VarThreshold and the singular value decomposition (SVD) method. Included also are several surface fitting algorithms.

The software is published under the GNU General Public License version 3.

[ascl:1910.021] AOtools: Adaptive optics modeling and analysis toolkit

The AOtools package offers generic adaptive optics processing tools in addition to astronomy-specific methods; among these are analyzing data in the pupil plane, images and point spread functions in the focal plane, wavefront sensors, modeling of atmospheric turbulence, physical optical propagation of wavefronts, and conversion functions to convert stellar brightness into photon flux for a given waveband. The software also calculates integrated atmospheric parameters, such as coherence time and isoplanatic angle from atmospheric turbulence and wind speed profile.

[ascl:1910.012] AOTOOLS: Reduce IR images from Adaptive Optics

AOTOOLS reduces IR images from adaptive optics. It uses effective dithering, either sky subtraction or dark subtraction, and flat-fielding techniques to determine the effect of the instrument on an image of an object. It also performs bad pixel masking, degrades an AO on-axis PSF to account for the effects of anisoplanicity, and corrects an AO on-axis PSF for the effects of seeing.

[ascl:1103.011] AP3M: Adaptive Particle-particle, Particle-mesh Code

AP3M is an adaptive particle-particle, particle-mesh code. It is older than Hydra (ascl:1103.010) but faster and more memory-efficient for dark-matter-only calculations. The Adaptive P3M technique (AP3M) is built around the standard P3M algorithm; AP3M produces forces fully equivalent to P3M but represents a more efficient implementation of the force-splitting idea of P3M. The AP3M program may be used in any of its three modes, selected with an appropriate choice of input parameter.

[ascl:2002.010] Apercal: Pipeline for the Westerbork Synthesis Radio Telescope Apertif upgrade

Apercal is a dedicated, automated data reduction and analysis pipeline written for the Apertif (APERture Tile In Focus) upgrade to the Westerbork Synthesis Radio Telescope. This upgrade dramatically increases the field of view and survey speed of the telescope and is being used for survey observations that can produce 5 terabytes of data for each observation. Apercal uses existing and new tools and parallelization to provide the performance needed for the large volume of data produced by Apertif surveys. The software is written entirely in Python and uses third-party astronomical software, such as AOFlagger (ascl:1010.017), CASA (ascl:1107.013), and Miriad (ascl:1106.007), for certain tasks. Apercal is modular, making it possible to run specific modules manually instead of the full pipeline, and information can be exchanged between modules because status parameters are written to and read from a python pickled dictionary file. The pipeline can also run fully automatically.

[ascl:2211.019] APERO: A PipelinE to Reduce Observations

APERO (A PipelinE to Reduce Observations) performs data reduction for the Canada-France-Hawaii Telescope's near-infrared spectropolarimeter SPIRou and offers different recipes or modules for performing specific tasks. APERO can individually run recipes or process a set of files, such as cleaning a data file of detector effects, collecting all dark files and creating a master dark image to use for correction, and creating a bad pixel mask for identifying and dealing with bad pixels. It can extract flat images to measure the blaze and produce blaze-correction and flat-correction images, extract dark frames to provide correction for the thermal background after extraction of science or calibration frames, and correct extracted files for leakage coming from a FP (for OBJ_FP files only). It can also take a hot star and calculate telluric transmission, and then use the telluric transmission to calculate principal components (PCA) for correcting input images for atmospheric absorption, among many other tasks.

[ascl:1208.017] APLpy: Astronomical Plotting Library in Python

APLpy (the Astronomical Plotting Library in Python) is a Python module for producing publication-quality plots of astronomical imaging data in FITS format. The module uses Matplotlib, a powerful and interactive plotting package, and is capable of creating output files in several graphical formats, including EPS, PDF, PS, PNG, and SVG. Plots can be made interactively or by using scripts, and the module can generate co-aligned FITS cubes to make three-color RGB images. It also offers different overlay capabilities, including contour sets, markers with customizable symbols, and coordinate grids, and a range of other useful features.
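
For illustration, a minimal plotting script (the file names are placeholders):

    import aplpy

    # publication-quality image of a FITS file with overlays
    fig = aplpy.FITSFigure("image.fits")
    fig.show_grayscale(stretch="arcsinh")
    fig.show_contour("radio.fits", colors="white")
    fig.add_grid()
    fig.save("figure.pdf")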

[ascl:2101.010] apogee: Tools for APOGEE data

The apogee package works with SDSS-III APOGEE and SDSS-IV APOGEE-2 data. It reads various data products and applies cuts, works with APOGEE bitmasks, and plots APOGEE spectra. It can generate model spectra for APOGEE spectra, and APOGEE model grids can be used to fit spectra. apogee includes some simple stacking functions and implements the effective selection function for APOGEE.
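
A brief sketch of reading the allStar catalog and a combined spectrum, assuming the APOGEE data files have been downloaded and the environment variables the package requires are set (the identifiers below are illustrative):

    import apogee.tools.read as apread

    # read the allStar catalog, removing commissioning data
    allStar = apread.allStar(rmcommissioning=True, main=True)

    # load a combined spectrum by field location and 2MASS-style ID
    spec, hdr = apread.apStar(4424, "2M00430403+4112266", ext=1)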

[ascl:2306.022] apollinaire: Helioseismic and asteroseismic peakbagging frameworks

apollinaire provides functions and a framework for managing and analyzing data from helioseismic and asteroseismic instruments, and includes all the tools necessary to analyze the acoustic oscillations of solar-like stars. The core of the package is the peakbagging library, which provides a full framework to extract oscillation mode parameters from solar and stellar power spectra.

[ascl:2307.058] APOLLO: Radiative transfer and atmosphere spectroscopic retrieval for exoplanets

APOLLO forward models the radiative transfer of light through a planetary (or brown dwarf) atmosphere; it also forward models transit and emission spectra and retrieves atmospheric properties of extrasolar planets. The code has two operational modes: one to compute a planetary spectrum given a set of parameters, and one to retrieve those parameters based on an observed spectrum. The package uses emcee (ascl:1303.002) to find the best fit to a spectrum for a given parameter set. APOLLO is modular and offers many options that may be turned on and off, including the type of observations, a flexible molecular composition, multiple cloud prescriptions, multiple temperature-pressure profile prescriptions, multiple priors, and continuum normalization.

[ascl:1608.003] appaloosa: Python-based flare finding code for Kepler light curves

The appaloosa suite automates flare-finding in Kepler light curves. It builds quiescent light curve models that include long- and short-cadence data through iterative de-trending, and includes completeness estimates via artificial flare injection and recovery tests.

[ascl:1804.017] APPHi: Automated Photometry Pipeline for High Cadence Large Volume Data

APPHi (Automated Photometry Pipeline) carries out aperture and differential photometry of TAOS-II project data. It is computationally efficient and can also be used with other astronomical wide-field image data. APPHi works with large volumes of data and handles both FITS and HDF5 formats. Due to the large number of stars the software has to handle in an enormous number of frames, it is optimized to automatically find the best parameter values for carrying out the photometry, such as the mask size for the aperture, the window size for extracting a single star, and the count threshold for detecting a faint star. Although intended to work with TAOS-II data, APPHi can analyze any set of astronomical images and is a robust and versatile tool for performing stellar aperture and differential photometry.

[ascl:1810.018] APPLawD: Accurate Potentials in Power Law Disks

APPLawD (Accurate Disk Potentials for Power Law Surface densities) determines the gravitational potential in the equatorial plane of a flat axially symmetric disk (inside and outside) with finite size and power law surface density profile. Potential values are computed on the basis of the density splitting method, where the residual Poisson kernel is expanded over the modulus of the complete elliptic integral of the first kind. In contrast with classical multipole expansions of potential theory, the residual series converges linearly inside sources, leading to very accurate potential values for low order truncations of the series. The code is easy to use, works under variable precision, and is written in Fortran 90 with no external dependencies.

[ascl:2304.002] Applefy: Robust detection limits for high-contrast imaging

Applefy calculates detection limits for exoplanet high contrast imaging (HCI) datasets. The package provides features and functionalities to improve the accuracy and robustness of contrast curve calculations. Applefy implements the classical approach based on the t-test, as well as the parametric bootstrap test for non-Gaussian residual noise. Applefy enables the comparison of imaging results across instruments with different noise characteristics.

[ascl:1308.005] APPSPACK: Asynchronous Parallel Pattern Search

APPSPACK is serial or parallel, derivative-free optimization software for solving nonlinear unconstrained, bound-constrained, and linearly-constrained optimization problems, with possibly noisy and expensive objective functions.

[ascl:1408.021] APS: Active Parameter Searching

APS finds Frequentist confidence limits on high-dimensional parameter spaces by using Gaussian Process interpolation to identify regions of parameter space for which chisquared is less than or equal to some specified limit. The code is written in C++, is robust against multi-modal chisquared functions and converges comparably fast to Monte Carlo methods. Code is also provided to draw Bayesian credible limits using the outputs of APS, though this code does not converge as well. APS requires the linear algebra libraries LAPACK, BLAS, and ARPACK (ascl:1311.010) to run.

[ascl:1208.003] APT: Aperture Photometry Tool

Aperture Photometry Tool (APT) is software for astronomers and students interested in manually exploring the photometric qualities of astronomical images. It has a graphical user interface (GUI) which allows the image data associated with aperture photometry calculations for point and extended sources to be visualized and, therefore, more effectively analyzed. Mouse-clicking on a source in the displayed image draws a circular or elliptical aperture and sky annulus around the source and computes the source intensity and its uncertainty, along with several commonly used measures of the local sky background and its variability. The results are displayed and can be optionally saved to an aperture-photometry-table file and plotted on graphs in various ways using functions available in the software. APT is geared toward processing sources in a small number of images and is not suitable for bulk processing a large number of images, unlike other aperture photometry packages (e.g., SExtractor). However, APT does have a convenient source-list tool that enables calculations for a large number of detections in a given image. The source-list tool can be run either in automatic mode to generate an aperture photometry table quickly or in manual mode to permit inspection and adjustment of the calculation for each individual detection. APT displays a variety of useful graphs, including image histogram, aperture slices, source scatter plot, sky scatter plot, sky histogram, radial profile, curve of growth, and aperture-photometry-table scatter plots and histograms. APT has functions for customizing calculations, including outlier rejection, pixel “picking” and “zapping,” and a selection of source and sky models. The radial-profile-interpolation source model, accessed via the radial-profile-plot panel, allows recovery of source intensity from pixels with missing data and can be especially beneficial in crowded fields.

[ascl:1007.005] Arcetri Spectral Code for Thin Plasmas

The Arcetri spectral code evaluates the spectrum of the radiation emitted by hot, optically thin plasmas in the spectral range 1 - 2000 Angstroms. The database has been updated to include atomic data and radiative and collisional rates for calculating level populations and line emissivities for a number of ions of the minor elements; a critical compilation of the electron collision excitation data for these elements has been performed. The present version of the program includes the CHIANTI database for the most abundant elements, the minor-element data, and the Fe III atomic model with radiative and collisional data.

[ascl:1107.011] ARCHANGEL: Galaxy Photometry System

ARCHANGEL is a Unix-based package for the surface photometry of galaxies. While oriented for large angular size systems (i.e. many pixels), its tools can be applied to any imaging data of any size. The package core contains routines to perform the following critical galaxy photometry functions: sky determination; frame cleaning; ellipse fitting; profile fitting; and total and isophotal magnitudes.

The goal of the package is to provide an automated, assembly-line type of reduction system for galaxy photometry of space-based or ground-based imaging data. The procedures outlined in the documentation are flux independent, thus, these routines can be used for non-optical data as well as typical imaging datasets.

ARCHANGEL has been tested on several current OSs (RedHat Linux, Ubuntu Linux, Solaris, Mac OS X). A tarball for installation is available at the download page. The main routines are Python and FORTRAN based; therefore, current installations of Python and a FORTRAN compiler are required. The ARCHANGEL package also contains Python hooks to the PGPLOT package, an XML processor and network tools which automatically link to data archives (i.e. NED, HST, 2MASS, etc) to download images in a non-interactive manner.

[ascl:2006.015] ARCHI: Add-on pipeline module for background star analysis from CHEOPS data

The CHaracterizing ExOPlanet Satellite (CHEOPS) mission pipeline provides photometry for the central star in its field; ARCHI takes in data from the CHEOPS mission pipeline, analyzes the background stars, and determines the photometry of these stars, thus creating the possibility of producing photometric time-series of several close-by targets at once, in addition to using different stars in the image to calibrate systematic errors.

[ascl:1805.012] Arcmancer: Geodesics and polarized radiative transfer library

Arcmancer computes geodesics and performs polarized radiative transfer in user-specified spacetimes. The library supports Riemannian and semi-Riemannian spaces of any dimension and metric; it also supports multiple simultaneous coordinate charts, embedded geometric shapes, local coordinate systems, and automatic parallel propagation. Arcmancer can be used to solve various problems in numerical geometry, such as solving the curve equation of motion using adaptive integration with configurable tolerances and differential equations along precomputed curves. It also provides support for curves with an arbitrary acceleration term and generic tools for generating ray initial conditions and performing parallel computation over the image, among other tools.

[ascl:1909.010] AREPO: Cosmological magnetohydrodynamical moving-mesh simulation code

AREPO is a massively parallel gravity and magnetohydrodynamics code for astrophysics, designed for problems of large dynamic range. It employs a finite-volume approach to discretize the equations of hydrodynamics on a moving Voronoi mesh, and a tree-particle-mesh method for gravitational interactions. AREPO was originally optimized for cosmological simulations of structure formation, but has also been used in many other applications in astrophysics.

[ascl:2011.010] ARES: Accelerated Reionization Era Simulations

The Accelerated Reionization Era Simulations (ARES) code rapidly generates models for the global 21-cm signal. It can also be used as a 1-D radiative transfer code, stand-alone non-equilibrium chemistry solver, or global radiation background calculator.

[ascl:1205.009] ARES: Automatic Routine for line Equivalent widths in stellar Spectra

ARES was developed for measuring the equivalent widths of absorption lines in stellar spectra; it can also be used to determine fundamental spectroscopic stellar parameters. The code reads a 1D FITS spectrum and fits the requested lines in order to calculate the equivalent width. The code is written in C++ and is based on the standard method of determining EWs; it automates the manual procedure that one normally carries out when using interactive routines such as the splot routine implemented in IRAF.

[ascl:2410.013] ARK: 3D hydrodynamics code for the study of convective problems

ARK implements Computational Fluid Dynamics applications, such as Euler and all-Mach regime solvers, on a Cartesian grid with MPI+Kokkos. It provides a performance-portable Kokkos implementation for compressible hydrodynamics and performs simulations of convection without any approximation of Boussinesq or anelastic type. It adapts an all-Mach number scheme into a well-balanced scheme for gravity, which preserves arbitrary discrete equilibrium states up to machine precision. The low-Mach correction in the numerical flux allows ARK to be more precise in the low-Mach regime; the code is well suited for studying highly stratified and high-Mach convective flows.

[ascl:1807.004] ARKCoS: Radial kernel convolution on the sphere

ARKCoS (Accelerated radial kernel convolution on the sphere) efficiently convolves pixelated maps on the sphere with radially symmetric kernels with compact support. It performs the convolution along isolatitude rings in Fourier space and integrates in longitudinal direction in pixel space. The computational costs scale linearly with the kernel support, making the method most beneficial for convolution with compact kernels. Typical applications include CMB beam smoothing, symmetric wavelet analyses, and point-source filtering operations. The software is written in C++/CUDA and provides two independent code paths to do the necessary computation either on conventional hardware (CPUs), or on graphics processing units (GPUs).

[ascl:1505.005] ARoME: Analytical Rossiter-McLaughlin Effects

The ARoMe (Analytical Rossiter-McLaughlin Effects) library generates analytical Rossiter-McLaughlin (RM) effects. It models the Doppler shift of a star during a transit as measured by fitting a cross-correlation function with a Gaussian function, by fitting an observed spectrum with a modeled one, or by the weighted mean.

[ascl:2306.049] ARPACK-NG: Large scale eigenvalue problem solver

ARPACK-NG provides a common repository with maintained versions and a test suite for the ARPACK (ascl:1311.010) code, which is no longer updated; it is a collection of Fortran77 subroutines designed to solve large scale eigenvalue problems. ARPACK-NG offers routines for banded matrices, singular value decomposition, single and double precision real arithmetic versions for symmetric, non-symmetric standard or generalized problems, and a reverse communication interface (RCI). It also provides example driver routines that may be used as templates to implement numerous shift-invert strategies for all problem types, data types and precision, in addition to other tools. The ARPACK-NG project, started by Debian, Octave, and Scilab, is now a community project maintained by volunteers.

[ascl:1311.010] ARPACK: Solving large scale eigenvalue problems

ARPACK is a collection of Fortran77 subroutines designed to solve large scale eigenvalue problems. The package is designed to compute a few eigenvalues and corresponding eigenvectors of a general n by n matrix A. It is most appropriate for large sparse or structured matrices A where structured means that a matrix-vector product w <- Av requires order n rather than the usual order n^2 floating point operations. This software is based upon an algorithmic variant of the Arnoldi process called the Implicitly Restarted Arnoldi Method (IRAM). When the matrix A is symmetric it reduces to a variant of the Lanczos process called the Implicitly Restarted Lanczos Method (IRLM). These variants may be viewed as a synthesis of the Arnoldi/Lanczos process with the Implicitly Shifted QR technique that is suitable for large scale problems. For many standard problems, a matrix factorization is not required; only the action of the matrix on a vector is needed. ARPACK is capable of solving large scale symmetric, nonsymmetric, and generalized eigenproblems from significant application areas.

A common community-maintained repository for this software, ARPACK-NG (ascl:2306.049), is available.
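
As a concrete illustration of the matrix-free idea, SciPy's sparse eigensolver wraps ARPACK, so a few eigenpairs of a large sparse matrix can be obtained from matrix-vector products alone:

    import numpy as np
    from scipy.sparse import diags
    from scipy.sparse.linalg import eigs  # wraps ARPACK's IRAM

    # sparse 1D Laplacian; ARPACK only ever applies it to vectors
    n = 1000
    A = diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(n, n))

    # six eigenvalues closest to sigma=0, via shift-invert
    vals, vecs = eigs(A, k=6, sigma=0.0, which="LM")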

[ascl:2107.018] ART: A Reconstruction Tool

ART reconstructs log-probability distributions using Gaussian processes. It requires an existing MCMC chain or similar set of samples from a probability distribution, including the log-probabilities. Gaussian process regression is used for interpolating the log-probability for the reconstruction, allowing for easy resampling, importance sampling, marginalization, testing different samplers, investigating chain convergence, and other operations.

[ascl:1810.007] ARTES: 3D Monte Carlo scattering radiative transfer in planetary atmospheres

The 3D Monte Carlo radiative transfer code ARTES calculates reflected light and thermal radiation in a spherical grid with a parameterized distribution of gas, clouds, hazes, and circumplanetary material. Designed specifically for (polarized) scattered light simulations of planetary atmospheres, it can compute both reflected stellar light and thermal emission from the planet for an arbitrary atmospheric structure and distribution of opacity sources. Multiple scattering, absorption, and polarization are fully treated and the output includes an image, spectrum, or phase curve. Several tools are included to create opacities and scattering matrices for molecules and clouds.

[ascl:1802.004] ARTIP: Automated Radio Telescope Image Processing Pipeline

The Automated Radio Telescope Image Processing Pipeline (ARTIP) automates the entire process of flagging, calibrating, and imaging for radio-interferometric data. ARTIP starts with raw data, i.e., a measurement set, and goes through multiple stages, such as flux calibration, bandpass calibration, phase calibration, and imaging to generate continuum and spectral line images. Each stage can also be run independently. The pipeline provides continuous feedback to the user through various messages, charts and logs. It is written using standard python libraries and the CASA package. The pipeline can deal with datasets with multiple spectral windows and also multiple target sources which may have arbitrary combinations of flux/bandpass/phase calibrators.

[ascl:2103.020] ARTIS: 3D Monte Carlo radiative transfer code for supernovae

ARTIS is a 3D radiative transfer code for Type Ia supernovae using the Monte Carlo method with indivisible energy packets. It incorporates polarization and virtual packets and non-LTE physics appropriate for the nebular phase of Type Ia supernovae.

[ascl:1402.014] ARTIST: Adaptable Radiative Transfer Innovations for Submillimeter Telescopes

ARTIST is a suite of tools for comprehensive multi-dimensional radiative transfer calculations of dust and line emission, as well as their polarization, to help interpret observations from submillimeter telescopes. The ARTIST package consists of LIME, a radiative transfer code that uses adaptive gridding allowing simulations of sources with arbitrary multi-dimensional (1D, 2D, 3D) and time-dependent structures, thus ensuring rapid convergence; the DustPol and LinePol tools for modeling the polarization of the line and dust emission; and an interface run from Python scripts that manages the interaction between a general model library and LIME, and a graphical interface to simulate images.

[ascl:2110.006] ArtPop: Artificial Stellar Populations generator

ArtPop (Artificial Stellar Populations) synthesizes stellar populations and simulates realistic images of stellar systems. The code is modular, making it possible to use each of its functionalities independently or together. ArtPop can build stellar populations independently from generating mock images, as one might want to do when interested only in calculating integrated photometric properties of the population. The code can also generate stellar magnitudes and artificial galaxies, which can be injected into real imaging data.

[ascl:2004.012] ArviZ: Exploratory analysis of Bayesian models

ArviZ provides backend-agnostic tools for diagnostics and visualizations of Bayesian inference by first converting inference data into xarray objects. It includes functions for posterior analysis, model checking, comparison and diagnostics. ArviZ’s functions work with NumPy arrays, dictionaries of arrays, xarray datasets, and have built-in support for PyMC3 (ascl:1610.016), PyStan, CmdStanPy, Pyro (ascl:1507.018), NumPyro, emcee (ascl:1303.002), and TensorFlow Probability objects. A Julia wrapper is also available.
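
A minimal sketch using the example data shipped with the package:

    import arviz as az

    # load an example InferenceData object and summarize the posterior
    idata = az.load_arviz_data("centered_eight")
    print(az.summary(idata, var_names=["mu", "tau"]))

    # trace plot for convergence diagnostics
    az.plot_trace(idata, var_names=["mu"])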

[ascl:1204.016] ASCfit: Automatic Stellar Coordinate Fitting Package

ASCfit is a modular software package for automatically fitting astrometric world coordinates (WCS) onto raw optical or infrared FITS images. Image stars are identified with stars in a reference catalog (USNO-A2 or 2MASS), and coordinates are derived as a simple linear transformation from (X,Y) pixels to (RA,DEC) to the accuracy level of the reference catalog used. The package works with both optical and infrared images, at sidereal and non-sidereal tracking rates.

[ascl:1804.001] ASERA: A Spectrum Eye Recognition Assistant

ASERA, A Spectrum Eye Recognition Assistant, aids in quasar spectral recognition and redshift measurement and can also be used to recognize various types of spectra of stars, galaxies and AGNs (Active Galactic Nuclei). This interactive software allows users to visualize observed spectra, superimpose template spectra from the Sloan Digital Sky Survey (SDSS), and interactively access related spectral line information. ASERA is an efficient and user-friendly semi-automated toolkit for the accurate classification of spectra observed by LAMOST (the Large Sky Area Multi-object Fiber Spectroscopic Telescope) and is available as a standalone Java application and as a Java applet. The software offers several functions, including wavelength and flux scale settings, zoom in and out, redshift estimation, and spectral line identification.

[ascl:1603.009] Asfgrid: Asteroseismic parameters for a star

asfgrid computes asteroseismic parameters for a star with given stellar parameters and vice versa. Written in Python, it determines delta_nu, nu_max or masses via interpolation over a grid.
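
For context, the quantities asfgrid interpolates follow the standard seismic scaling relations; a schematic illustration of the physics (solar reference values assumed here; this is not the code's API):

    # scaling relations linking stellar parameters to seismic observables
    NU_MAX_SUN = 3090.0   # muHz (assumed solar reference)
    DNU_SUN = 135.1       # muHz
    TEFF_SUN = 5777.0     # K

    def nu_max(m, r, teff):
        """Frequency of maximum power; m, r in solar units, teff in K."""
        return NU_MAX_SUN * m * r**-2 * (teff / TEFF_SUN) ** -0.5

    def delta_nu(m, r):
        """Large frequency separation; m, r in solar units."""
        return DNU_SUN * (m / r**3) ** 0.5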

[ascl:1912.003] ASKAPsoft: ASKAP science data processor software

ASKAPsoft provides data processing functionality for the Australian Square Kilometre Array Pathfinder, including calibration, spectral line imaging, continuum imaging, source detection and generation of source catalogs, and transient detection. The MPI-based package is the primary software for storing and processing raw data, and initiating the archiving of resulting science data products into the data archive (CASDA). The processing pipelines within ASKAPsoft are largely written in C++ built on top of casacore (ascl:1912.002) and other third party libraries.

[ascl:1609.020] Askaryan Module: Askaryan electric fields predictor

The Askaryan Module is a C++ class that predicts the electric fields that Askaryan-based detectors detect; it is computationally efficient and accurate, performing fully analytic calculations requiring no a priori MC analysis to compute the entire field, for any frequencies, times, or viewing angles chosen by the user.

[ascl:2205.018] ASOHF: Adaptive Spherical Overdensity Halo Finder

ASOHF (Adaptive Spherical Overdensity Halo Finder) identifies bound dark matter structures (dark matter haloes) in the outputs of cosmological simulations, and works directly on an input particle list. The computational cost of running ASOHF in simulations with a large number of particles can be reduced by using a domain decomposition to split the simulation box into smaller boxes, or subdomains, which are then processed independently. The basic output of ASOHF is a halo catalog. The package includes a python code to build a merger tree from ASOHF outputs.

[ascl:1807.030] ASP: Ames Stereo Pipeline

ASP (Ames Stereo Pipeline) provides fully automated geodesy and stereogrammetry tools for processing stereo imagery captured from satellites (around Earth and other planets), robotic rovers, aerial cameras, and historical imagery, with and without accurate camera pose information. It produces cartographic products, including digital elevation models (DEMs), ortho-projected imagery, 3D models, and bundle-adjusted networks of cameras. ASP's data products are suitable for science analysis, mission planning, and public outreach.

[ascl:1112.017] ASpec: Astronomical Spectrum Analysis Package

ASpec is a spectrum and line analysis package developed at STScI. ASpec is designed as an add-on package for IRAF and incorporates a variety of analysis techniques for astronomical spectra. ASpec operates on spectra from a wide variety of ground-based and space-based instruments and allows simultaneous handling of spectra from different wavelength regimes. The package accommodates non-linear dispersion relations and provides a variety of functions, individually or in combination, with which to fit spectral features and the continuum. It also permits the masking of known bad data. ASpec provides a powerful, intuitive graphical user interface implemented using the IRAF Object Manager and customized to handle: data input/output (I/O); on-line help; selection of relevant features for analysis; plotting and graphical interaction; and data base management.

[ascl:1209.015] Aspects: Probabilistic/positional association of catalogs of sources

Given two catalogs K and K' of n and n' astrophysical sources, respectively, Aspects (Association positionnelle/probabiliste de catalogues de sources) computes, for any objects Mi ∈ K and M'j ∈ K', the probability that M'j is a counterpart of Mi, i.e. that they are the same source. To determine this probability of association, the code takes into account the coordinates and the positional uncertainties of all the objects. Aspects also computes the probability P(Ai, 0 | C ∩ C') that Mi has no counterpart.

Aspects is written in Fortran 95; the required Fortran 90 Numerical Recipes routines used in version 1.0 have been replaced with free equivalents in version 2.0.

[ascl:1806.031] ASPIC: Accurate Slow-roll Predictions for Inflationary Cosmology

Aspic, written in modern Fortran, computes various observable quantities used in cosmology from definite single field inflationary models. It provides an efficient, extendable, and accurate way of comparing theoretical inflationary predictions with cosmological data and supports many (~70) models of inflation. The Hubble flow functions, observable quantities up to second order in the slow-roll approximation, are in direct correspondence with the spectral index, the tensor-to-scalar ratio and the running of the primordial power spectrum. The ASPIC library also provides the field potential, its first and second derivatives, the energy density at the end of inflation, the energy density at the end of reheating, and the field value (or e-fold value) at which the pivot scale crossed the Hubble radius during inflation. All these quantities are computed in a way which is consistent with the existence of a reheating phase.

[ascl:1510.006] ASPIC: STARLINK image processing package

ASPIC handled basic astronomical image processing. Early releases concentrated on image arithmetic, standard filters, expansion/contraction/selection/combination of images, and displaying and manipulating images on the ARGS and other devices. Later releases added new astronomy-specific applications to this sound framework. The ASPIC collection of about 400 image-processing programs was written using the Starlink "interim" environment in the 1980s; the software is now obsolete.

[ascl:2202.022] ASPIRED: Automated SpectroPhotometric Image REDuction

ASPIRED reduces 2D spectral data from raw image to wavelength and flux calibrated 1D spectrum automatically without any user input (quicklook quality), and provides a set of easily configurable routines to build pipelines for long slit spectrographs on different telescopes (science quality). It delivers near real-time data reduction, which can facilitate automated or interactive decision making, allowing "on-the-fly" modification of observing strategies and rapid triggering of other facilities.

[ascl:1310.005] ASPRO 2: Astronomical Software to PRepare Observations

ASPRO 2 (Astronomical Software to PRepare Observations) is an observation preparation tool for interferometric observations with the VLTI or other interferometers such as CHARA and SUSI. It is a Java standalone program that provides a dynamic graphical interface to simulate the projected baseline evolution during observations (super-synthesis) and derive visibilities for targets (i.e., single star, binaries, user defined FITS image). It offers other useful functions such as the ability to load and save your observation settings and generate Observing Blocks.

[ascl:1903.011] AsPy: Aspherical fluctuations on the spherical collapse background

AsPy computes the determinants of aspherical fluctuations on the spherical collapse background. Written in Python, this procedure includes analytic factorization and cancellation of the so-called `IR-divergences'—spurious enhanced contributions that appear in the dipole sector and are associated with large bulk flows.

[ascl:2304.001] ASSIST: Solar system test particles trajectories integrator

ASSIST integrates test particle trajectories in the field of the Sun, Moon, planets, and massive asteroids, with the positions of the masses obtained from the JPL DE441 ephemeris and its associated asteroid perturber file. Using REBOUND's (ascl:1110.016) IAS15 integrator, ASSIST incorporates the most significant gravitational harmonics and general relativistic corrections and accounts for position- and velocity-dependent non-gravitational effects. The first-order variational equations are included for all terms to support orbit fitting and covariance mapping.

[ascl:1404.016] AST: World Coordinate Systems in Astronomy

The AST library provides a comprehensive range of facilities for attaching world coordinate systems to astronomical data, for retrieving and interpreting that information in a variety of formats, including FITS-WCS, and for generating graphical output based on it. Core projection algorithms are provided by WCSLIB (ascl:1108.003) and astrometry is provided by the PAL (ascl:1606.002) and SOFA (ascl:1403.026) libraries. AST bindings are available in Python (pyast), Java (JNIAST) and Perl (Starlink::AST). AST is used as the plotting and astrometry library in DS9 and GAIA, and is distributed separately and as part of the Starlink software collection.

[ascl:1505.002] ASteCA: Automated Stellar Cluster Analysis

ASteCA (Automated Stellar Cluster Analysis), written in Python, fully automates standard tests applied on star clusters in order to determine their characteristics, including center, radius, and stars' membership probabilities. It also determines associated intrinsic/extrinsic parameters, including metallicity, age, reddening, distance, total mass, and binarity fraction, among others.

[ascl:1403.023] ASTERIX: X-ray Data Processing System

ASTERIX is a general purpose X-ray data reduction package optimized for ROSAT data reduction. ASTERIX uses the Starlink software environment (ascl:1110.012).

[ascl:2112.009] AsteroGaP: Asteroid Gaussian Processes

The Bayesian-based Gaussian Process model AsteroGaP (Asteroid Gaussian Processes) fits sparsely-sampled asteroid light curves. By utilizing a more flexible Gaussian Process framework for modeling asteroid light curves, it is able to represent light curves in a periodic but non-sinusoidal manner.

[ascl:1607.016] astLib: Tools for research astronomers

astLib is a set of Python modules for performing astronomical plots, some statistics, common calculations, coordinate conversions, and manipulating FITS images with World Coordinate System (WCS) information through PyWCSTools, a simple wrapping of WCSTools (ascl:1109.015).

[ascl:2004.006] ASTRAEUS: Semi-analytical semi-numerical galaxy evolution and reionization code

ASTRAEUS (semi-numerical rAdiative tranSfer coupling of galaxy formaTion and Reionization in n-body dArk mattEr simUlationS) self-consistently derives the evolution of galaxies and the reionization of the IGM based on the merger trees and density fields of a DM-only N-body simulation. It models gas accretion, star formation, SN feedback, the time and spatial evolution of the ionized regions (accounting for recombinations, HI fractions, and photoionization rates within ionized regions), and radiative feedback. ASTRAEUS is designed for studying the galaxy-reionization interplay in the first billion years. The underlying code is written in C and is MPI-parallelized; its modular design allows new physical processes and galaxy properties to be added easily. ASTRAEUS can be run on a tree-branch-by-tree-branch basis (i.e., fully vertical) or on a redshift-by-redshift basis (i.e., fully horizontal) when evolving the galaxies by using local horizontal merger trees.

[ascl:1605.009] ASTRiDE: Automated Streak Detection for Astronomical Images

ASTRiDE detects streaks in astronomical images using the "border" of each object (i.e., "boundary-tracing" or "contour-tracing") and its morphological parameters. Fast-moving objects such as meteors, satellites, near-Earth objects (NEOs), or even cosmic rays can leave streak-like traces in images; ASTRiDE can detect not only long streaks but also relatively short or curved streaks.

[ascl:2103.028] Astro-Fix: Correcting astronomical bad pixels in Python

astrofix is an astronomical image correction algorithm based on Gaussian Process Regression. It trains itself to apply the optimal interpolation kernel for each image, performing several times better than median replacement and interpolation with a fixed kernel.

[ascl:1907.032] Astro-SCRAPPY: Speedy Cosmic Ray Annihilation Package in Python

Astro-SCRAPPY detects cosmic rays in images (numpy arrays), based on Pieter van Dokkum's L.A.Cosmic algorithm and originally adapted from cosmics.py written by Malte Tewes. This implementation is optimized for speed, resulting in slight differences from the original code, such as automatic recognition of saturated stars (rather than treating such stars as large cosmic rays) and use of a separable median filter instead of the true median filter. Astro-SCRAPPY is an AstroPy (ascl:1304.002) affiliated package.
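
A minimal call looks like the sketch below; the main entry point is detect_cosmics, which returns a cosmic-ray mask and a cleaned image. The input array and tuning values here are placeholders; consult the package documentation for the full argument list.

```python
import numpy as np
from astroscrappy import detect_cosmics

# Placeholder data: a real call would use a CCD frame, e.g. loaded with
# astropy.io.fits.
image = np.random.poisson(100.0, size=(512, 512)).astype(np.float32)

# crmask is a boolean cosmic-ray mask; cleaned is the corrected image.
crmask, cleaned = detect_cosmics(image, sigclip=4.5, objlim=5.0)
print(crmask.sum(), "pixels flagged")
```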

[ascl:1705.016] astroABC: Approximate Bayesian Computation Sequential Monte Carlo sampler

astroABC is a Python implementation of an Approximate Bayesian Computation Sequential Monte Carlo (ABC SMC) sampler for parameter estimation. astroABC allows for massive parallelization using MPI, a framework that handles spawning of processes across multiple nodes. It has the ability to create MPI groups with different communicators, one for the sampler and several others for the forward model simulation, which speeds up sampling time considerably. For smaller jobs the Python multiprocessing option is also available.

[ascl:1912.010] AstroAccelerate: Accelerated software package for processing time-domain radio astronomy data

AstroAccelerate processes time-domain radio astronomy data. It offers a standalone code that can be used to process filterbank data and a library that performs GPU-accelerated single pulse processing (SPS), Fourier Domain Acceleration Searching (FDAS) and dedispersion in real-time on very large datasets comparable to those that will be produced by next-generation radio telescopes such as the SKA. AstroAccelerate uses NVIDIA GPUs, and is configurable, stable, and easily maintained.

[ascl:1906.001] Astroalign: Asterism-matching alignment of astronomical images

Astroalign tries to register (align) two stellar astronomical images, especially when there is no WCS information available. It does so by finding similar 3-point asterisms (triangles) in both images and deducing the affine transformation between them. Generic registration routines try to match feature points, using corner detection routines to make the point correspondence. These generally fail for stellar astronomical images since stars have very little stable structure and so are, in general, indistinguishable from each other. Asterism matching is more robust and closer to the human way of matching stellar images. Astroalign can match images with very different fields of view, point-spread functions, seeing and atmospheric conditions. It may require special care or may not work on images of extended objects with few point-like sources or in crowded fields.
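
In practice the library exposes a small API; a typical session looks like the sketch below, where the FITS file names are placeholders for two overlapping exposures of the same field.

```python
import astroalign as aa
from astropy.io import fits

# Placeholder file names for two overlapping exposures of the same field.
source = fits.getdata("new.fits").astype(float)
target = fits.getdata("ref.fits").astype(float)

# Resample `source` onto the pixel grid of `target`.
registered, footprint = aa.register(source, target)

# Alternatively, recover the transform and the matched star lists.
transform, (src_pts, tgt_pts) = aa.find_transform(source, target)
print(transform.rotation, transform.translation)
```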

[ascl:1311.003] AstroAsciiData: ASCII table Python module

ASCII tables continue to be one of the most popular and widely used data exchange formats in astronomy. AstroAsciiData, written in Python, imports all reasonably well-formed ASCII tables. It retains formatting of data values, allows column-first access, supports SExtractor style headings, performs column sorting, and exports data to other formats, including FITS, Numpy/Numarray, and LaTeX table format. It also offers interchangeable comment character, column delimiter and null value.

[ascl:1104.002] AstroBEAR: Adaptive Mesh Refinement Code for Ideal Hydrodynamics & Magnetohydrodynamics

AstroBEAR is a modular hydrodynamic & magnetohydrodynamic code environment designed for a variety of astrophysical applications. It uses the BEARCLAW package, a multidimensional, Eulerian computational code used to solve hyperbolic systems of equations. AstroBEAR allows adaptive mesh refinement (AMR) simulations in 2, 2.5 (i.e., cylindrical), and 3 dimensions, in either Cartesian or curvilinear coordinates. Parallel applications are supported through the MPI architecture. AstroBEAR is written in Fortran 90/95 using standard libraries.

AstroBEAR supports hydrodynamic (HD) and magnetohydrodynamic (MHD) applications using a variety of spatial and temporal methods. MHD simulations are kept divergence-free via the constrained transport (CT) methods of Balsara & Spicer. Three different equation of state environments are available: ideal gas, gas with differing isentropic γ, and the analytic Thomas-Fermi formulation of A.R. Bell.

[ascl:1512.007] AstroBlend: Visualization package for use with Blender

AstroBlend is a visualization package for use in the three dimensional animation and modeling software Blender. It reads data in via a text file or can use pre-made isosurface files stored as Wavefront OBJ files. AstroBlend supports a variety of codes such as FLASH (ascl:1010.082), Enzo (ascl:1010.072), and Athena (ascl:1010.014), and combines artistic 3D models with computational astrophysics datasets to create models and animations.

[ascl:2006.017] AstroCatR: Time series reconstruction of large-scale astronomical catalogs

AstroCatR reconstructs celestial objects' time series data for astronomical catalogs. It is a command-line program running on the Linux platform and is implemented in C and Python; AstroCatR's capabilities are based on specialized sky partitioning and MPI parallel programming. The package contains three parts: ETL (extract-transform-load) pre-processing, TS-matching calculation, and time series data retrieval. Once the user obtains the original catalogs, running ETL pre-processing generates a sky zoning file. The TS-matching module marks celestial objects, and finally, running the Query program retrieves celestial objects from the time series datasets that match the target.

[ascl:1507.010] Astrochem: Abundances of chemical species in the interstellar medium

Astrochem computes the abundances of chemical species in the interstellar medium as a function of time. It studies the chemistry in a variety of astronomical objects, including diffuse clouds, dense clouds, photodissociation regions, prestellar cores, protostars, and protostellar disks. Astrochem reads a network of chemical reactions from a text file, builds up a system of kinetic rate equations, and solves it using a state-of-the-art stiff ordinary differential equation (ODE) solver. The Jacobian matrix of the system is computed implicitly, so solving the system is extremely fast: large networks containing several thousand reactions are usually solved in a few seconds. A variety of gas-phase processes are considered, as well as simple gas-grain interactions, such as freeze-out and desorption via several mechanisms (thermal desorption, cosmic-ray desorption and photo-desorption). The computed abundances are written to an HDF5 file, and can be plotted in different ways with the tools provided with Astrochem. Chemical reactions and their rates are written in a format which is meant to be easy to read and to edit. A tool to convert the chemical networks from the OSU and KIDA databases into this format is also provided. Astrochem is written in C, and its source code is distributed under the terms of the GNU General Public License (GPL).

[ascl:2407.015] AstroCLIP: Multimodal contrastive pretraining for astronomical data

AstroCLIP performs contrastive pre-training between two different kinds of astronomical data modalities (multi-band imaging and optical spectra) to yield a meaningful embedding space which captures physical information about galaxies and is shared between both modalities. The embeddings can be used as the basis for competitive zero- and few-shot learning on a variety of downstream tasks, including similarity search, redshift estimation, galaxy property prediction, and morphology classification.

[ascl:1905.007] Astrocut: Tools for creating cutouts of TESS images

The Transiting Exoplanet Survey Satellite (TESS) produces Full Frame Images (FFIs) at a half hour cadence and keeps the same pointing for ~27 days at a time. Astrocut performs the same cutout across all FFIs that share a common pointing to create a time series of images on a small portion of the sky.

The Astrocut package has two parts: the CubeFactory and the CutoutFactory. The CubeFactory class creates a large image cube from a list of FFI files, which allows the cutout operation to be performed efficiently. The CutoutFactory class performs the actual cutout and builds a target pixel file (TPF) that is compatible with TESS pipeline TPFs. Because this software operates on TESS mission-produced FFIs, the resulting TPFs are not background-subtracted. In addition to the Astrocut software itself, the Mikulski Archive for Space Telescopes (MAST) provides a cutout service, TESScut, which runs Astrocut on MAST servers, and allows users to simply request cutouts through a web form or direct HTTP API query.
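
The documented workflow follows the two classes named above; a session might look like this sketch, where the FFI file names, coordinates, and cutout size are placeholders.

```python
from astrocut import CubeFactory, CutoutFactory

# Placeholder FFI file names; all inputs must share a common pointing.
ffi_files = ["tess-ffi-0001.fits", "tess-ffi-0002.fits", "tess-ffi-0003.fits"]

# Stack the FFIs into an image cube, then cut out a 5x5-pixel time series.
cube_file = CubeFactory().make_cube(ffi_files, cube_file="img-cube.fits")
tpf = CutoutFactory().cube_cut(cube_file, coordinates="259.7 18.9",
                               cutout_size=5, target_pixel_file="tpf.fits")
```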

[ascl:1804.004] AstroCV: Astronomy computer vision library

AstroCV processes and analyzes big astronomical datasets, and is intended to provide a community repository of high performance Python and C++ algorithms used for image processing and computer vision. The library offers methods for object recognition, segmentation and classification, with emphasis on the automatic detection and classification of galaxies.

[ascl:2111.001] astroDDPM: Realistic galaxy simulation via score-based generative models

astroDDPM uses a denoising diffusion probabilistic model (DDPM) to synthesize galaxies that are qualitatively and physically indistinguishable from the real thing. The similarity of the synthesized images to real galaxies from the Photometry and Rotation curve OBservations from Extragalactic Surveys (PROBES) sample and from the Sloan Digital Sky Survey is quantified using the Fréchet Inception Distance to test for subjective and morphological similarity. The emergent physical properties (such as total magnitude, color, and half light radius) of a ground truth parent and synthesized child dataset are also compared to generate a Synthetic Galaxy Distance metric. The DDPM approach produces sharper and more realistic images than other generative methods such as Adversarial Networks (with the downside of more costly inference), and could be used to produce large samples of synthetic observations tailored to a specific imaging survey. Potential uses of the DDPM include accurate in-painting of occluded data, such as satellite trails, and domain transfer, where new input images can be processed to mimic the properties of the DDPM training set.

[ascl:1907.016] astrodendro: Astronomical data dendrogram creator

Astrodendro, written in Python, creates dendrograms for exploring and displaying hierarchical structures in observed or simulated astronomical data. It handles noisy data by allowing specification of the minimum height of a structure and the minimum number of pixels needed for an independent structure. Astrodendro allows interactive viewing of computed dendrograms and can also produce publication-quality plots with the non-interactive plotting interface.

[ascl:1010.013] AstroGK: Astrophysical Gyrokinetics Code

The gyrokinetic simulation code AstroGK is developed to study fundamental aspects of kinetic plasmas and for applications mainly to astrophysical problems. AstroGK is an Eulerian slab code that solves the electromagnetic Gyrokinetic-Maxwell equations in five-dimensional phase space, and is derived from the existing gyrokinetics code GS2 by removing magnetic geometry effects. Algorithms used in the code are described. The code is benchmarked using linear and nonlinear problems. Serial and parallel performance scalings are also presented.

[ascl:2003.013] AstroHOG: Analysis correlations using the Histograms of Oriented Gradients

AstroHOG compares extended spectral-line observations (PPV cubes) using the histogram of oriented gradients (HOG) technique, which takes two PPV cubes as input and estimates their spatial correlation across velocity channels, enabling studies of the relation between different tracers of the ISM.

[ascl:1309.001] AstroImageJ: ImageJ for Astronomy

AstroImageJ is generic ImageJ (ascl:1206.013) with customizations to the base code and a packaged set of astronomy specific plugins. It reads and writes FITS images with standard headers, displays astronomical coordinates for images with WCS, supports photometry for developing color-magnitude data, offers flat field, scaled dark, and non-linearity processing, and includes tools for precision photometry that can be used during real-time data acquisition.

[ascl:1502.022] AstroLines: Astrophysical line list generator in the H-band

AstroLines adjusts spectral line parameters (gf and damping constant) starting from an initial line list. Written in IDL and tailored to the APO Galactic Evolution Experiment (APOGEE), it runs a slightly modified version of MOOG (ascl:1202.009) to compare synthetic spectra with FTS spectra of the Sun and Arcturus.

[ascl:1406.008] ASTROM: Basic astrometry program

ASTROM performs "plate reductions" by taking user-provided star positions and the (x,y) coordinates of the corresponding star images and establishes the relationship between (x,y) and (ra,dec), thus enabling the coordinates of unknown stars to be determined. ASTROM is distributed with the Starlink software (ascl:1110.012) and uses SLALIB (ascl:1403.025).

[ascl:1010.078] AstroMD: A Multi Dimensional Visualization and Analysis Toolkit for Astrophysics

Over the past few years, the role of visualization for scientific purposes has grown enormously. Astronomy makes extended use of visualization techniques to analyze data, and scientific visualization has become a fundamental part of modern research in astronomy. With the evolution of high performance computers, numerical simulations have assumed a great role in scientific investigation, allowing users to run simulations at higher and higher resolution. Data produced in these simulations are often multi-dimensional arrays with several physical quantities. These data are very hard to manage and to analyze efficiently. Consequently, data analysis and visualization tools must follow the new requirements of the research. AstroMD is a tool for data analysis and visualization of astrophysical data; it can manage different physical quantities and multi-dimensional data sets. The tool uses virtual reality techniques by which the user has the impression of travelling through a computer-based multi-dimensional model.

[ascl:2205.020] ASTROMER: Building light curve embeddings using transformers

ASTROMER is a Transformer-based model trained on millions of stars for the representation of light curves. Pretrained models can be used directly or fine-tuned on specific datasets. ASTROMER is useful for downstream tasks in which the data available to train deep learning models are limited.

[ascl:1203.012] Astrometrica: Astrometric data reduction of CCD images

Astrometrica is an interactive software tool for scientific grade astrometric data reduction of CCD images. The current version of the software is for the Windows 32bit operating system family. Astrometrica reads FITS (8, 16 and 32 bit integer files) and SBIG image files. The size of the images is limited only by available memory. It also offers automatic image calibration (Dark Frame and Flat Field correction), automatic reference star identification, automatic moving object detection and identification, and access to new-generation star catalogs (PPMXL, UCAC 3 and CMC-14), in addition to online help and other features. Astrometrica is shareware, available for use for a limited period of time (100 days) for free; special arrangements can be made for educational projects.

[ascl:1208.001] Astrometry.net: Astrometric calibration of images

Astrometry.net is a reliable and robust system that takes as input an astronomical image and returns as output the pointing, scale, and orientation of that image (the astrometric calibration or World Coordinate System information). The system requires no first guess, and works with the information in the image pixels alone; that is, the problem is a generalization of the "lost in space" problem in which nothing—not even the image scale—is known. After robust source detection is performed in the input image, asterisms (sets of four or five stars) are geometrically hashed and compared to pre-indexed hashes to generate hypotheses about the astrometric calibration. A hypothesis is only accepted as true if it passes a Bayesian decision theory test against a null hypothesis. With indices built from the USNO-B catalog and designed for uniformity of coverage and redundancy, the success rate is >99.9% for contemporary near-ultraviolet and visual imaging survey data, with no false positives. The failure rate is consistent with the incompleteness of the USNO-B catalog; augmentation with indices built from the Two Micron All Sky Survey catalog brings the completeness to 100% with no false positives. We are using this system to generate consistent and standards-compliant meta-data for digital and digitized imaging from plate repositories, automated observatories, individual scientific investigators, and hobbyists.
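
The geometric-hashing step is easy to illustrate: two stars of a quad define a local coordinate frame, and the frame coordinates of the remaining stars form a continuous hash code that is invariant to translation, rotation, and scale. The sketch below is a conceptual illustration of that idea, not Astrometry.net's exact construction.

```python
import itertools
import numpy as np

def quad_hash(stars):
    """Geometric hash code for four (x, y) star positions, invariant to
    translation, rotation, and scale (conceptual sketch only)."""
    z = [complex(x, y) for x, y in stars]
    # Use the most widely separated pair A, B to define the frame.
    a, b = max(itertools.combinations(range(4), 2),
               key=lambda p: abs(z[p[0]] - z[p[1]]))
    rest = [k for k in range(4) if k not in (a, b)]
    # Map A -> 0 and B -> 1; the other two stars land at frame coordinates.
    zc, zd = [(z[k] - z[a]) / (z[b] - z[a]) for k in rest]
    # Sort so the code does not depend on the order of the two inner stars.
    cd = sorted([(zc.real, zc.imag), (zd.real, zd.imag)])
    return tuple(np.round(np.ravel(cd), 6))

# The same asterism, rotated and scaled, hashes to the same code.
quad = [(0, 0), (10, 2), (4, 5), (7, 1)]
rotated = [(-y * 2, x * 2) for x, y in quad]   # 90 deg rotation, 2x scale
print(quad_hash(quad) == quad_hash(rotated))   # True
```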

[ascl:1407.018] AstroML: Machine learning and data mining in astronomy

Written in Python, AstroML is a library of statistical and machine learning routines for analyzing astronomical data, loaders for several open astronomical datasets, and a large suite of examples of analyzing and visualizing astronomical datasets. An optional companion library, astroML_addons, is available; it requires a C compiler and contains faster and more efficient implementations of certain algorithms in compiled code.

[ascl:2103.012] AstroNet-Triage: Neural network for TESS light curve triage

AstroNet-Triage contains TensorFlow models and data processing code for identifying exoplanets in astrophysical light curves; this is the triage version of two TESS neural networks. For the vetting version, see AstroNet-Vetting (ascl:2103.011). The TensorFlow code downloads and pre-processes TESS data, builds different types of neural network classification models, trains and evaluates new models, and generates new predictions using a trained model. Utilities that operate on light curves are provided; these read TESS data from .h5 files and perform phase folding, splitting, binning, and other tasks. C++ implementations of some light curve utilities are also included.

[ascl:2103.011] AstroNet-Vetting: Neural network for TESS light curve vetting

AstroNet-Vetting identifies exoplanets in astrophysical light curves. This is the vetting version of two TESS neural networks; for the triage version, see AstroNet-Triage (ascl:2103.012). The package contains TensorFlow code that downloads and pre-processes TESS data, builds different types of neural network classification models, trains and evaluates a new model, and uses a trained model to generate new predictions. It includes utilities for operating on light curves, such as for reading TESS data from .h5 files, phase folding, splitting, and binning. In addition, C++ implementations of light curve utilities are also provided.

[ascl:2408.005] Astronify: Astronomical data sonification

Astronify contains tools for sonifying astronomical data, specifically data series. Data series sonification takes a data table and maps one column to time, and one column to pitch. This technique is commonly used to sonify light curves, where observation time is scaled to listening time and flux is mapped to pitch. While Astronify’s sonification uses the columns “time” and “flux” by default, any two columns can be supplied and a sonification created.
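
Per the package documentation, the central class is SoniSeries, which wraps an Astropy table; the sketch below shows the basic pattern (the column values here are arbitrary demo numbers).

```python
from astropy.table import Table
from astronify.series import SoniSeries

# Arbitrary demo light curve; any two columns can be mapped instead.
lc = Table({"time": [0.0, 1.0, 2.0, 3.0, 4.0],
            "flux": [1.0, 1.2, 0.9, 1.4, 1.1]})

soni = SoniSeries(lc)          # "time" -> listening time, "flux" -> pitch
soni.sonify()                  # compute the pitch mapping
soni.write("lightcurve.wav")   # or soni.play() for interactive listening
```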

[ascl:2404.014] astroNN: Deep learning for astronomers with Tensorflow

astroNN creates neural networks for deep learning using Keras for model and training prototyping while taking advantage of Tensorflow's flexibility. It contains tools for use with APOGEE, Gaia and LAMOST data, though it is primarily designed to apply neural nets to APOGEE spectral analysis and to predict luminosity from spectra using Gaia parallax data, with reasonable uncertainties from a Bayesian neural net. astroNN can handle 2D and colored images, and the package contains custom loss functions and layers compatible with Tensorflow or Keras with Tensorflow backend to deal with incomplete labels. The code contains a demo implementing a Bayesian neural net with dropout variational inference for reasonable uncertainty estimation, as well as demos of other neural nets.

[ascl:2010.012] Astronomaly: Flexible framework for anomaly detection in astronomy

Astronomaly actively detects anomalies in astronomical data. A Python back-end runs anomaly detection based on machine learning; a JavaScript front-end provides data viewing and labeling. The package works on many common astronomy data types, including one-dimensional data and images, and offers extendable techniques for preprocessing, feature extraction, and machine learning.

[ascl:2308.004] AstroPhot: Fitting everything everywhere all at once in astronomical images

AstroPhot quickly extracts detailed information from complex astronomical data for individual images or large survey programs. It fits models for sky, stars, galaxies, PSFs, and more in a principled chi^2 forward optimization, recovering Bayesian posterior information and covariance of all parameters. The code optimizes forward models on CPU or GPU, across images that are large, multi-band, multi-epoch, rotated, dithered, and more. Models are optimized together, thus handling overlapping objects and including the covariance between parameters (including PSF and galaxy parameters). AstroPhot includes several optimization algorithms, including Levenberg-Marquardt, Gradient descent, and No-U-Turn MCMC sampling.

[ascl:1802.009] astroplan: Observation planning package for astronomers

astroplan is a flexible toolbox for observation planning and scheduling. It is powered by Astropy (ascl:1304.002); it works for Python beginners and new observers, and is powerful enough for observatories preparing nightly and long-term schedules as well. It calculates rise/set/meridian transit times, alt/az positions for targets at observatories anywhere on Earth, and offers built-in plotting convenience functions for standard observation planning plots (airmass, parallactic angle, sky maps). It can also determine the observability of sets of targets given an arbitrary set of constraints (i.e., altitude, airmass, moon separation/illumination, etc.).
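
For example, a rough observability check follows the pattern below; the site, targets, and constraint values are arbitrary choices for illustration.

```python
import astropy.units as u
from astropy.time import Time
from astroplan import (AirmassConstraint, AltitudeConstraint, FixedTarget,
                       Observer, is_observable)

observer = Observer.at_site("keck")            # any registered site name
targets = [FixedTarget.from_name("Vega"), FixedTarget.from_name("Deneb")]
time_range = Time(["2024-07-01 06:00", "2024-07-01 14:00"])
constraints = [AltitudeConstraint(min=30 * u.deg), AirmassConstraint(max=2)]

# One boolean per target: ever observable in the window under the constraints?
print(is_observable(constraints, observer, targets, time_range=time_range))
```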

[ascl:1402.003] astroplotlib: Astronomical library of plots

Astroplotlib is a multi-language astronomical library of plots, a collection of templates useful for creating paper-quality figures. Most of the codes for producing the plots are written in IDL and/or Python; a very few are written in Mathematica. Any plot can be downloaded and customized to one's own needs.

[ascl:2204.002] Astroplotlib: Python scripts to handle astronomical images

Astroplotlib builds images at any scale and automatically overlays contours, physical scale bars, and orientation arrows (N and E axes). The package contains scripts to overlay pseudo-slits and obtain statistics from apertures, estimate the background sky, and overlay fitted isophotes and their respective contours on an image; it can work directly with the output table from the Ellipse task of IRAF. It includes a GUI for masking areas in the images by using different polygons, and can also obtain statistical information (e.g., total flux and mean, among others) from the masked areas. There is also a GUI to overlay star catalogs on an image and an option to download them directly from the VizieR server.

[ascl:1805.024] ASTROPOP: ASTROnomical Polarimetry and Photometry pipeline

ASTROPOP reduces almost any CCD photometry and image polarimetry data. For photometry reduction, the code performs source finding, aperture and PSF photometry, astrometric calibration using different automated and non-automated methods, and automated source identification and magnitude calibration based on online and local catalogs. For polarimetry, the code resolves linear and circular Stokes parameters produced by image beam splitter or polarizer polarimeters. In addition to the modular functions, ready-to-use pipelines based on configuration files and header keys are also provided with the code. ASTROPOP was initially developed to reduce data from the IAGPOL polarimeter installed at Observatório Pico dos Dias (Brazil).

[ascl:1304.002] Astropy: Community Python library for astronomy

Astropy provides a common framework, core package of code, and affiliated packages for astronomy in Python. Development is actively ongoing, with major packages such as PyFITS, PyWCS, vo, and asciitable already merged in. Astropy is intended to contain much of the core functionality and some common tools needed for performing astronomy and astrophysics with Python.

[ascl:1207.007] Astropysics: Astrophysics utilities for python

Astropysics is a library containing a variety of utilities and algorithms for reducing, analyzing, and visualizing astronomical data. It encourages the user to leverage the existing capabilities of Python to make this quick, easy, and as painless as cutting-edge science can be. Other Python packages exist with some of the capabilities of this project, but the goal of Astropysics is to integrate all these tools together and make them interact in the most straightforward ways possible.

[ascl:1407.007] ASTRORAY: General relativistic polarized radiative transfer code

ASTRORAY employs a method of ray tracing and performs polarized radiative transfer of (cyclo-)synchrotron radiation. The radiative transfer is conducted in curved space-time near rotating black holes described by the Kerr-Schild metric. Three-dimensional general relativistic magnetohydrodynamic (3D GRMHD) simulations, in particular performed with variations of the HARM code, serve as an input to ASTRORAY. The code has been applied to reproduce the sub-mm synchrotron bump in the spectrum of Sgr A*, and to test the detectability of quasi-periodic oscillations in its light curve. ASTRORAY can be readily applied to model radio/sub-mm polarized spectra of jets and cores of other low-luminosity active galactic nuclei. For example, ASTRORAY is uniquely suitable to self-consistently model Faraday rotation measure and circular polarization fraction in jets.

[ascl:2111.013] Astrosat: Satellite transit calculator

Astrosat calculates which satellites can be seen by a given observer in a given field of view at a given observation time and observation duration. The calculation accounts for the geometry of the satellite and observer, and the code also estimates the expected apparent brightness of the satellite to aid astronomers in assessing the impact on their observations.

[ascl:1010.023] AstroSim: Collaborative Visualization of an Astrophysics Simulation in Second Life

AstroSim is a Second Life based prototype application for synchronous collaborative visualization targeted at astronomers.

[ascl:1507.019] AstroStat: Statistical analysis tool

AstroStat performs statistical analysis on data and is compatible with Virtual Observatory (VO) standards. It accepts data in a variety of formats and performs various statistical tests using a menu driven interface. Analyses, performed in R, include exploratory tests, visualizations, distribution fitting, correlation and causation, hypothesis testing, multivariate analysis and clustering. AstroStat is available in two versions with an identical interface and features: as a web service that can be run using any standard browser and as an offline application.

[ascl:1307.007] AstroTaverna: Tool for Scientific Workflows in Astronomy

AstroTaverna is a plugin for Taverna Workbench that provides the means to build astronomy workflows using Virtual Observatory service discovery and efficient manipulation of VOTables (based on the STIL tool set). It integrates SAMP-enabled software, allowing data exchange and communication among local VO tools, as well as the ability to execute Aladin scripts and macros.

[ascl:2201.002] AstroToolBox: Java tools for identifying and classifying astronomical objects

AstroToolBox identifies and classifies astronomical objects with a focus on low-mass stars and ultra-cool dwarfs. It can search numerous catalogs, including SIMBAD (measurements & references), AllWISE, Gaia, SDSS, among others, evaluates spectral type for main sequence stars including brown dwarfs, and provides SED fitting for ultra-cool and white dwarfs. AstroToolBox draws Gaia color-magnitude diagrams (CMD) with overplotted M0-M9 spectral types, and can draw Montreal Cooling Sequences on the white dwarf branch of the Gaia CMD. The tool can also blink images from different epochs in an image viewer, thus allowing visual identification of the motion or variability of objects. The software displays time series (static or animated) using infrared and optical images of various surveys and contains a photometric classifier. It also includes astrometric calculators and converters, an ADQL query interface (IRSA, VizieR, NOAO) and a batch spectral type lookup feature that uses a CSV file with object coordinates as input. The ToolBox also has a file browser linked to the image viewer, which makes it possible to check a large list of objects in a convenient way, and can save interesting finds in an object collection for later use.

[ascl:2009.013] AstroVaDEr: Unsupervised clustering and synthetic image generation

AstroVaDEr (Astronomical Variational Deep Embedder) performs unsupervised clustering and synthetic image generation using astronomical imaging catalogs to classify their morphologies. This variational autoencoder leverages improvements to the variational deep clustering (VDC) paradigm; its variational inference properties allow the network to be employed as a generative network. AstroVaDEr can be adapted to various surveys and image classification problems.

[ascl:1608.005] AstroVis: Visualizing astronomical data cubes

AstroVis enables rapid visualization of large data files on platforms supporting the OpenGL rendering library. Radio astronomical observations are typically three dimensional and stored as data cubes. AstroVis implements a scalable approach to accessing these files using three components: a File Access Component (FAC) that reduces the impact of reading time, speeding up access to the data; an Image Processing Component (IPC), which breaks the data cube into smaller pieces that can be processed locally and gives a representation of the whole file; and a Data Visualization component, which implements an Overview + Detail approach to reduce the dimensions of the data being worked with and the amount of memory required to store it. The result is a 3D display paired with a 2D detail display that contains a small subsection of the original file in full resolution without reducing the data in any way.

[ascl:1406.001] ASURV: Astronomical SURVival Statistics

ASURV (Astronomical SURVival Statistics) provides astronomy survival analysis for right- and left-censored data including the maximum-likelihood Kaplan-Meier estimator and several univariate two-sample tests, bivariate correlation measures, and linear regressions. ASURV is written in FORTRAN 77, and is stand-alone and does not call any specialized libraries.

[ascl:2208.005] Asymmetric Uncertainty: Handling nonstandard numerical uncertainties

Asymmetric Uncertainty implements and provides an object class for dealing with uncertainties for physical quantities that are not symmetric. Instances of the class behave appropriately with other numeric objects under most mathematical operations, and the associated errors propagate accordingly. The class also provides utilities such as methods for evaluating and plotting probability density functions, as well as capabilities for handling arrays of such objects. Standard and symmetric uncertainties are also supported.
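
To make the behavior concrete, here is a toy illustration of the idea (not the package's actual class or algorithm): values carry separate upper and lower errors, and addition propagates each side in quadrature as a common first-order approximation.

```python
# Toy illustration only; the package itself works with probability density
# functions rather than this simple quadrature rule.
class AsymmetricValue:
    def __init__(self, value, plus, minus):
        self.value, self.plus, self.minus = value, plus, minus

    def __add__(self, other):
        return AsymmetricValue(
            self.value + other.value,
            (self.plus**2 + other.plus**2) ** 0.5,    # upper errors in quadrature
            (self.minus**2 + other.minus**2) ** 0.5,  # lower errors in quadrature
        )

    def __repr__(self):
        return f"{self.value} +{self.plus}/-{self.minus}"

x = AsymmetricValue(10.0, 0.3, 0.5)
y = AsymmetricValue(2.0, 0.4, 0.1)
print(x + y)  # 12.0 +0.5/-0.51 (approximately)
```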

[ascl:2105.003] ATARRI: A TESS Archive RR Lyrae Classifier

ATARRI is a graphical user interface for downloading TESS Full Frame Images (FFIs) and displaying properties of the lightcurves of selected objects. Preliminary analysis is performed assuming the object is an RR Lyrae variable. The raw lightcurve, a Lomb-Scargle analysis (both full and pre-whitened), and a folded lightcurve are presented to the user along with options to select the type of RR Lyrae and data quality flags for output.

[ascl:2106.015] ATES: ATmospheric EScape

The ATES hydrodynamics code computes the temperature, density, velocity and ionization fraction profiles of highly irradiated planetary atmospheres, along with the current, steady-state mass loss rate. ATES solves the one-dimensional Euler, mass and energy conservation equations in radial coordinates through a finite-volume scheme. The hydrodynamics module is paired with a photoionization equilibrium solver that includes cooling via bremsstrahlung, recombination and collisional excitation/ionization for the case of an atmosphere of primordial composition (i.e., pure atomic hydrogen-helium), while also accounting for advection of the different ion species.

[ascl:1010.014] Athena: Grid-based code for astrophysical magnetohydrodynamics (MHD)

Athena is a grid-based code for astrophysical magnetohydrodynamics (MHD). It was developed primarily for studies of the interstellar medium, star formation, and accretion flows. The code has been designed to be easily extensible for use with static and adaptive mesh refinement. It combines higher-order Godunov methods with the constrained transport (CT) technique to enforce the divergence-free constraint on the magnetic field. Discretization is based on cell-centered volume-averages for mass, momentum, and energy, and face-centered area-averages for the magnetic field. Novel features of the algorithm include (1) a consistent framework for computing the time- and edge-averaged electric fields used by CT to evolve the magnetic field from the time- and area-averaged Godunov fluxes, (2) the extension to MHD of spatial reconstruction schemes that involve a dimensionally-split time advance, and (3) the extension to MHD of two different dimensionally-unsplit integration methods. Implementation of the algorithm in both C and Fortran95 is detailed, including strategies for parallelization using domain decomposition. Results from a test suite which includes problems in one-, two-, and three-dimensions for both hydrodynamics and MHD are given, not only to demonstrate the fidelity of the algorithms, but also to enable comparisons to other methods. The source code is freely available for download on the web.

[ascl:1402.026] athena: Tree code for second-order correlation functions

athena is a 2d-tree code that estimates second-order correlation functions from input galaxy catalogues. These include shear-shear correlations (cosmic shear), position-shear (galaxy-galaxy lensing) and position-position (spatial angular correlation). Written in C, it includes a power-spectrum estimator implemented in Python; this script also calculates the aperture-mass dispersion. A test data set is available.

[ascl:1912.005] Athena++: Radiation GR magnetohydrodynamics code

Athena++ is a complete re-write of the Athena astrophysical magnetohydrodynamics (MHD) code (ascl:1010.014) in C++. Compared to earlier versions, the Athena++ code has much more flexible coordinate and grid options and supports new physics. It also offers significantly improved performance and scalability, and improved source code clarity and modularity. Athena++ supports compressible hydrodynamics and MHD in 1D, 2D, and 3D, and special and general relativistic hydrodynamics and MHD. In addition, it supports Cartesian, cylindrical, or spherical polar coordinates; static or adaptive mesh refinement in any coordinate system; mixed parallelization with both OpenMP and MPI; and a task-based execution model for improved load balancing, scalability and modularity.

[ascl:1505.006] Athena3D: Flux-conservative Godunov-type algorithm for compressible magnetohydrodynamics

Written in FORTRAN, Athena3D, based on Athena (ascl:1010.014), is an implementation of a flux-conservative Godunov-type algorithm for compressible magnetohydrodynamics. Features of the Athena3D code include compressible hydrodynamics and ideal MHD in one, two or three spatial dimensions in Cartesian coordinates; adiabatic and isothermal equations of state; 1st, 2nd or 3rd order reconstruction using the characteristic variables; and numerical fluxes computed using the Roe scheme. In addition, it offers the ability to add source terms to the equations and is parallelized based on MPI.

[ascl:1911.006] ATHOS: A Tool for HOmogenizing Stellar parameters

ATHOS provides on-the-fly stellar parameter determination of FGK stars based on flux ratios from optical spectra. Once configured properly, it will measure flux ratios in the input spectra and deduce the stellar parameters effective temperature, iron abundance (a.k.a. [Fe/H]), and surface gravity by employing pre-defined analytical relations. ATHOS can be configured to run in parallel in an arbitrary number of threads, thus enabling the fast and efficient analysis of huge datasets.
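
The flux-ratio approach reduces to two steps that are simple to sketch: measure the ratio of mean fluxes in two narrow wavelength windows, then evaluate a calibrated analytical relation. The snippet below illustrates the pattern only; the window edges and relation coefficients are made-up placeholders, not ATHOS's calibrations.

```python
import numpy as np

def flux_ratio(wave, flux, window1, window2):
    """Ratio of mean fluxes in two wavelength windows (same units as wave)."""
    in1 = (wave >= window1[0]) & (wave <= window1[1])
    in2 = (wave >= window2[0]) & (wave <= window2[1])
    return flux[in1].mean() / flux[in2].mean()

def teff_from_ratio(fr, a=3000.0, b=2500.0):
    """Hypothetical linear relation Teff = a + b * FR (placeholder numbers)."""
    return a + b * fr

wave = np.linspace(4800.0, 5600.0, 4000)   # synthetic spectrum with one line
flux = 1.0 - 0.4 * np.exp(-0.5 * ((wave - 5183.6) / 0.5) ** 2)
fr = flux_ratio(wave, flux, (5183.0, 5184.0), (5250.0, 5251.0))
print(teff_from_ratio(fr))
```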

[ascl:1110.015] atlant: Advanced Three Level Approximation for Numerical Treatment of Cosmological Recombination

atlant is a public numerical code for fast calculation of the cosmological recombination of primordial hydrogen-helium plasma. The code is based on the three-level approximation (TLA) model of recombination and allows one to take into account some "fine" physical effects of cosmological recombination simultaneously with the use of fudge factors.

[submitted] atlas-fit

atlas-fit is a Python tool to amend the results of [spectroflat] with calibration against a solar atlas. That is, data for wavelength calibration and continuum correction are generated from flat field information and selected solar atlases.

[ascl:1911.013] ATLAS: Turning Dopplergram images into frequency shift measurements

ATLAS performs the tracking, projecting, power-spectrum-making, and ring-fitting needed to turn a set of Dopplergram images into a set of frequency shift measurements. This code is essentially a combination of three codes, FRACK (FORTRAN Tracking), PSPEC (Power SPECtrum), and MRF (Multi-Ridge Fitting), included in the ATLAS package. ATLAS reads in a list of longitude/latitude coordinates corresponding to the desired tile centers and a set of full-disk Dopplergram images and outputs frequency shift measurements from each wave mode of each tile. The code relies on both distributed-memory (MPI) and shared-memory (OpenMP) parallelism to scale up to around 1000 processes. Due to the immense volume of data produced by the tracking and projecting steps, the intermediate data products (tiles, power spectra) are never written out.

[ascl:1303.024] ATLAS12: Opacity sampling model atmosphere program

ATLAS12 is an opacity sampling model atmosphere program to allow computation of models with individual abundances using line data. ATLAS12 is able to compute the same models as ATLAS9 which uses pretabulated opacities, plus models with arbitrary abundances. ATLAS12 sampled fluxes are quite accurate for predicting the total flux except in the intermediate or narrow bandpass intervals because the sample size is too small.

[ascl:1607.003] Atlas2bgeneral: Two-body resonance calculator

For a massless test particle and given a planetary system, Atlas2bgeneral calculates all resonances in a given range of semimajor axes with all the planets taken one by one. Planets are assumed to be in fixed circular and coplanar orbits and the test particle in an arbitrary orbit. A sample input data file to calculate the two-body resonances is available for use with the Fortran77 source code.
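
The nominal location of each two-body resonance follows directly from Kepler's third law; the sketch below shows the arithmetic (a helper of our own, not the Fortran code).

```python
def resonance_semimajor_axis(a_planet, p, q):
    """Nominal semimajor axis of a p:q mean-motion resonance: the particle
    period is (p/q) times the planet's, so Kepler's third law gives
    a = a_planet * (p/q)**(2/3)."""
    return a_planet * (p / q) ** (2.0 / 3.0)

# Example: the 3:1 resonance with Jupiter (particle period = 1/3 of Jupiter's)
print(resonance_semimajor_axis(5.203, 1, 3))   # ~2.50 au, a Kirkwood gap
```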

[ascl:1607.004] Atlas3bgeneral: Three-body resonance calculator

For a massless test particle and given a planetary system, atlas3bgeneral calculates all three-body resonances in a given range of semimajor axes with all the planets taken by pairs. Planets are assumed to be in fixed circular and coplanar orbits and the test particle in an arbitrary orbit. A sample input data file to calculate the three-body resonances is available for use with the Fortran77 source code.

[ascl:1710.017] ATLAS9: Model atmosphere program with opacity distribution functions

ATLAS9 computes model atmospheres using a fixed set of pretabulated opacities, allowing one to work on huge numbers of stars and interpolate in large grids of models to determine parameters quickly. The code works with two different sets of opacity distribution functions (ODFs), one with “big” wavelength intervals covering the whole spectrum and the other with 1221 “little” wavelength intervals covering the whole spectrum. The ODFs use a 12-step representation; the radiation field is computed starting with the highest step and working down. If a lower step does not matter because the line opacity is small relative to the continuum at all depths, all the lower steps are lumped together and not computed to save time.

[ascl:2407.009] ATM: Asteroid Thermal Modeling

ATM (Asteroid Thermal Modeling) models asteroid flux measurements to estimate an asteroid's size, surface temperature distribution, and emissivity, and creates model spectral energy distributions for the different thermal models. After downloading lookup tables for relevant models, it can also fit observations of asteroids.

[ascl:2106.039] atmos: Coupled climate–photochemistry model

Atmos contains two atmospheric models and scripts to couple them together. One atmospheric model calculates the profiles of chemical species, including both gaseous and aerosol phases, and the second model calculates the temperature profile. Because these profiles depend on each other (kinetic reaction rates are temperature-dependent, and radiative transfer depends on the radiatively active gases), atmos alternates the running of the two models until their solutions are mutually consistent. While either of these models can be run with time-dependence, most applications of these models are to find steady-state solutions for the atmosphere that would be stable over long (geological/astronomical) time periods, given constant inputs to the atmosphere.
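
The alternation scheme amounts to a simple fixed-point iteration. The schematic sketch below (with hypothetical run_photochem and run_climate callables standing in for the two models) shows the control flow, not the actual atmos driver.

```python
import numpy as np

def couple(run_photochem, run_climate, T_initial, tol=1e-4, max_iter=100):
    """Alternate the two models until the temperature profile stops changing.
    `run_photochem` and `run_climate` are hypothetical stand-ins for the
    chemistry and climate models described above."""
    T = np.asarray(T_initial, dtype=float)
    for _ in range(max_iter):
        species = run_photochem(T)      # chemistry at fixed temperatures
        T_new = run_climate(species)    # temperatures at fixed chemistry
        if np.max(np.abs(T_new - T) / T) < tol:
            return T_new, species       # mutually consistent steady state
        T = T_new
    raise RuntimeError("coupling did not converge")
```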

[ascl:1703.013] Atmospheric Athena: 3D Atmospheric escape model with ionizing radiative transfer

Atmospheric Athena simulates hydrodynamic escape from close-in giant planets in 3D. It uses the Athena hydrodynamics code (ascl:1010.014) with a new ionizing radiative transfer implementation to self-consistently model photoionization driven winds from the planet. The code is fully compatible with static mesh refinement and MPI parallelization and can handle arbitrary planet potentials and stellar initial conditions.

[ascl:2206.017] atoMEC: Average-Atom code for Matter under Extreme Conditions

atoMEC simulates high energy density phenomena such as in warm dense matter. It uses Kohn-Sham density functional theory, in combination with an average-atom approximation, to solve the electronic structure problem for single-element materials at finite temperature.

[ascl:1708.001] ATOOLS: A command line interface to the AST library

The ATOOLS package of applications provides an interface to the AST library (ascl:1404.016), allowing quick experiments to be performed from the shell. It manipulates descriptions of coordinate frames and mappings in the form of AST objects and performs other functions, with each application within the package corresponding closely to one of the functions in the AST library.

[ascl:1405.009] ATV: Image display tool

ATV displays and analyses astronomical images using the IDL image-processing language. It allows interactive control of the image scaling, color table, color stretch, and zoom, with support for world coordinate systems. It also does point-and-click aperture photometry, simple spectral extractions, and can produce publication-quality postscript output images.

[ascl:2108.002] AUM: A Unified Modeling scheme for galaxy abundance, galaxy clustering and galaxy-galaxy lensing

AUM predicts galaxy abundances, their clustering, and the galaxy-galaxy lensing signal, given the halo occupation distribution of galaxies and the underlying cosmological model. In combination with the measurements of the clustering, abundance, and lensing of galaxies, these routines can be used to perform cosmological parameter inference.

[ascl:1909.001] Auto-multithresh: Automated masking for clean

Auto-multithresh implements an automated masking algorithm for clean. It operates on the residual image within the minor cycle of clean to identify and mask regions of significant emission. It then cascades these significant regions down to lower signal to noise. It includes features to pad the mask to avoid sharp edges and to remove small regions that are unlikely to be significant emission. The algorithm described by this code was incorporated into the tclean task within CASA as auto-multithresh.
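
The masking cascade is straightforward to sketch with standard image-morphology tools. The following is an illustrative Python analogue of the steps described above (thresholds and sizes are arbitrary), not the CASA implementation.

```python
import numpy as np
from scipy import ndimage

def auto_mask(residual, rms, snr=5.0, min_pixels=10, pad=2):
    """Threshold, prune tiny regions, then pad the mask (illustrative only)."""
    mask = residual > snr * rms                       # significant emission
    labels, nregions = ndimage.label(mask)
    for i in range(1, nregions + 1):                  # prune small regions
        region = labels == i
        if region.sum() < min_pixels:
            mask[region] = False
    return ndimage.binary_dilation(mask, iterations=pad)  # soften sharp edges

rng = np.random.default_rng(1)
residual = rng.normal(0.0, 1.0, (256, 256))
residual[100:110, 100:110] += 20.0                    # a fake source
print(auto_mask(residual, rms=1.0).sum(), "pixels masked")
```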

[ascl:1406.004] Autoastrom: Autoastrometry for Mosaics

Autoastrom performs automated astrometric corrections on an astronomical image by automatically detecting objects in the frame, retrieving a reference catalogue, cross correlating the catalog with CCDPACK (ascl:1403.021) or MATCH, and using the ASTROM (ascl:1406.008) application to calculate a correction. It is distributed as part of the Starlink software collection (ascl:1110.012).

[ascl:1904.007] AutoBayes: Automatic design of customized analysis algorithms and programs

AutoBayes automatically generates customized algorithms from compact, declarative specifications in the data analysis domain, taking a statistical model as input and creating documented and optimized C/C++ code. The synthesis process uses Bayesian networks to enable problem decompositions and guide the algorithm derivation. Program schemas encapsulate advanced algorithms and data structures, and a symbolic-algebraic system finds closed-form solutions for problems and emerging subproblems. AutoBayes has been used to analyze planetary nebulae images taken by the Hubble Space Telescope, and can be applied to other scientific data analysis tasks.

[ascl:1602.001] Automark: Automatic marking of marked Poisson process in astronomical high-dimensional datasets

Automark models photon counts collected from observations of variable-intensity astronomical sources. It aims to mark the abrupt changes in the corresponding wavelength distribution of the emission automatically. In the underlying methodology, change points are embedded into a marked Poisson process, where photon wavelengths are regarded as marks and both the Poisson intensity parameter and the distribution of the marks are allowed to change.

[ascl:2406.030] AutoPhOT: Rapid publication-quality photometry of transients

AutoPhOT (AUTOmated Photometry Of Transients) produces publication-quality photometry of transients quickly. Written in Python 3, this automated pipeline's capabilities include aperture and PSF-fitting photometry, template subtraction, and calculation of limiting magnitudes through artificial source injection. AutoPhOT is also capable of calibrating photometry against either survey catalogs (e.g., SDSS, PanSTARRS) or using a custom set of local photometric standards.

[ascl:2108.017] AutoProf: Automatic Isophotal solutions for galaxy images

AutoProf performs basic and advanced non-parametric galaxy image analysis. The pipeline's design allows for fast startup and easy implementation; the package offers a suite of robust default and optional tools for surface brightness profile extractions and related methods. AutoProf is highly extensible and can be adapted for a variety of applications, providing flexibility for exploring new ideas and supporting advanced users.

[ascl:2203.014] AutoSourceID-Light: Source localization in optical images

AutoSourceID-Light (ASID-L) analyzes optical imaging data using computer vision techniques that can naturally deal with large amounts of data. The framework rapidly and reliably localizes sources in optical images.

[ascl:1812.015] AUTOSPEC: Automated Spectral Extraction Software for integral field unit data cubes

AUTOSPEC provides fast, automated extraction of high quality 1D spectra from astronomical datacubes with minimal user effort. AutoSpec takes an integral field unit (IFU) datacube and a simple parameter file and extracts a 1D spectrum for each object in a supplied catalogue. A custom designed cross-correlation algorithm improves signal to noise and isolates sources from neighboring contaminants.

[ascl:1612.014] AUTOSTRUCTURE: General program for calculation of atomic and ionic properties

AUTOSTRUCTURE calculates atomic and ionic energy levels, radiative rates, autoionization rates, photoionization cross sections, plane-wave Born and distorted-wave excitation cross sections in LS- and intermediate-coupling using non- or (kappa-averaged) relativistic wavefunctions. These can then be further processed to form Auger yields, fluorescence yields, partial and total dielectronic and radiative recombination cross sections and rate coefficients, photoabsorption cross sections, and monochromatic opacities, among other properties.

[ascl:2101.005] Avocado: Photometric classification of astronomical transients and variables with biased spectroscopic samples

Avocado produces classifications of arbitrary astronomical transients and variable objects. It addresses the problem of biased spectroscopic samples by generating many lightcurves from each object in the original spectroscopic sample at a variety of redshifts and with many different observing conditions. The "augmented" samples of lightcurves that are generated are much more representative of the full datasets than the original spectroscopic samples.

[ascl:1109.016] aXe: Spectral Extraction and Visualization Software

aXe is a spectroscopic data extraction software package that was designed to handle large format spectroscopic slitless images such as those from the Wide Field Camera 3 (WFC3) and the Advanced Camera for Surveys (ACS) on HST. aXe is a PyRAF/IRAF package that consists of several tasks and is distributed as part of the Space Telescope Data Analysis System (STSDAS). The various aXe tasks perform specific parts of the extraction and calibration process and are successively used to produce extracted spectra.

[ascl:2203.026] axionCAMB: Modification of the CAMB Boltzmann code

axionCAMB is a modified version of the publicly available code CAMB (ascl:1102.026). axionCAMB computes cosmological observables for comparison with data. This is normally the CMB power spectra (T, E, B, and φ, in auto- and cross-power), but also includes the matter power spectrum.

[ascl:2307.005] axionHMcode: Non-linear power spectrum calculator

axionHMcode computes the non-linear matter power spectrum in a mixed dark matter cosmology with an ultra-light axion (ULA) component of the dark matter. The model is inspired by HMcode (ascl:1508.001) and uses some of its fitting parameters. axionHMcode uses the full expanded power spectrum to calculate the non-linear power spectrum; it splits the axion overdensity into a clustered and a linear component to account for the non-clustering of axions on small scales due to free-streaming.

[ascl:2006.009] AxionNS: Ray-tracing in neutron stars

AxionNS computes radio light curves resulting from the resonant conversion of axion dark matter into photons within the magnetosphere of a neutron star. Photon trajectories are traced from the observer to the magnetosphere, where a root finding algorithm identifies the regions of resonant conversion. Given a model of the axion dark matter distribution and conversion probability, one can compute the photon flux emitted from these regions. The individual contributions from all the trajectories are then summed to obtain the radiated photon power per unit solid angle.

[ascl:2106.021] aztekas: GRHD numerical code

aztekas solves hyperbolic partial differential equations in conservative form using High Resolution Shock-Capturing (HRSC) schemes. The code can solve the non-relativistic and relativistic hydrodynamic equations of motion (Euler equations) for a perfect fluid. The relativistic part can solve these equations on a fixed background metric, such as Schwarzschild, Minkowski, or Kerr-Schild.

[ascl:1605.004] BACCHUS: Brussels Automatic Code for Characterizing High accUracy Spectra

BACCHUS (Brussels Automatic Code for Characterizing High accUracy Spectra) derives stellar parameters (Teff, log g, metallicity, microturbulence velocity and rotational velocity), equivalent widths, and abundances. The code includes on-the-fly spectrum synthesis, local continuum normalization, estimation of local S/N, automatic line masking, four methods for abundance determination, and a flagging system to aid line selection. BACCHUS relies on the grid of MARCS model atmospheres, Masseron's model atmosphere thermodynamic structure interpolator, and the radiative transfer code Turbospectrum (ascl:1205.004).

[ascl:2307.010] baccoemu: Cosmological emulators for large-scale structure statistics

baccoemu provides a collection of emulators for large-scale structure statistics over a wide range of cosmologies. The emulators provide fast predictions for the linear cold- and total-matter power spectrum, the nonlinear cold-matter power spectrum, and the modifications to the cold-matter power spectrum caused by baryonic physics in a wide cosmological parameter space, including dynamical dark energy and massive neutrinos.

[submitted] backtrack: fit relative motion of candidate direct imaging sources with background proper motion and parallax

Directly imaged planet candidates (high-contrast point sources near bright stars) are often validated, among other supporting lines of evidence, by comparing their observed motion against the projected motion of a background source due to the proper motion of the bright star and the parallax motion due to the Earth's orbit. Often the "background track" is constructed assuming an interloping point source is at infinity and has no proper motion of its own, but this assumption can fail for crowded fields or insufficient observing time-baselines, producing false positives (e.g., Nielsen et al. 2017). `backtrack` is a tool for constructing background proper motion and parallax tracks for validation of high contrast candidates. It can produce classical infinite-distance, stationary background tracks, but was built to fit finite-distance, non-stationary tracks using nested sampling (and can be used on clusters). The code sets priors on parallax based on the relations in Bailer-Jones et al. (2021), which are fit to Gaia eDR3 data and are therefore representative of the Galactic stellar density. The public example currently reproduces the results of Nielsen et al. (2017) and Wagner et al. (2022), demonstrating that the motion of HD 131399A "b" is fit by a finite-distance, non-stationary background star; the code has also been tested and validated on proprietary datasets. The code is open source, available on GitHub, and additional contributions are welcome.

[ascl:2407.005] BaCoN: BAyesian COsmological Network

BaCoN (BAyesian COsmological Network) trains and tests Bayesian Convolutional Neural Networks in order to classify dark matter power spectra as being representative of different cosmologies, as well as to compute the classification confidence. It supports the following theories: LCDM, wCDM, f(R), DGP, and a randomly generated class. Additional cosmologies can be easily added.

[ascl:1708.010] BAGEMASS: Bayesian age and mass estimates for transiting planet host stars

BAGEMASS calculates the posterior probability distribution for the mass and age of a star from its observed mean density and other observable quantities using a grid of stellar models that densely samples the relevant parameter space. It is written in Fortran and requires FITSIO (ascl:1010.001).

[ascl:2104.017] Bagpipes: Bayesian Analysis of Galaxies for Physical Inference and Parameter EStimation

Bagpipes generates realistic model galaxy spectra and fits these to spectroscopic and photometric observations.

[ascl:2303.017] bajes: Bayesian Jenaer software

bajes [baɪɛs] provides a user-friendly interface for setting up a Bayesian analysis for an arbitrary model, and is specialized for the analysis of gravitational-wave and multi-messenger transients. The code runs a parameter estimation job, inferring the properties of the input model. bajes is designed to be simple to use and lightweight, with minimal dependencies on external libraries. The user can set up a pipeline for parameter estimation of multi-messenger transients by writing a configuration file containing the information to be passed to the executables. The package also includes tools and methods for data analysis of multi-messenger signals. The pipeline incorporates an interface with reduced-order-quadrature (ROQ) interpolants; in particular, the ROQ pipeline relies on the output provided by PyROQ-refactored.

[ascl:2107.009] Balrog: Astronomical image simulation

The Balrog package of Python simulation code is for use with real astronomical imaging data. Objects are simulated into a survey's images and measurement software is run over the simulated objects' images. Balrog allows the user to derive the mapping between what is actually measured and the input truth. The package uses GalSim (ascl:1402.009) for all object simulations; source extraction and measurement is performed by SExtractor (ascl:1010.064). Balrog makes it easy to run these codes en masse over many images, automating useful GalSim and SExtractor functionality and filling in many bookkeeping steps along the way.

[ascl:2102.029] BALRoGO: Bayesian Astrometric Likelihood Recovery of Galactic Objects

BALRoGO (Bayesian Astrometric Likelihood Recovery of Galactic Objects) handles data from the Gaia space mission. It extracts galactic objects such as globular clusters and dwarf galaxies from data contaminated by interlopers using a combination of Bayesian and non-Bayesian approaches. It fits proper motion space, surface density, and the object center. It also provides confidence regions for the color-magnitude diagram and parallaxes.

[ascl:1312.008] BAMBI: Blind Accelerated Multimodal Bayesian Inference

BAMBI (Blind Accelerated Multimodal Bayesian Inference) is a Bayesian inference engine that combines the benefits of SkyNet (ascl:1312.007) with MultiNest (ascl:1109.006). It operates by simultaneously performing Bayesian inference using MultiNest and learning the likelihood function using SkyNet. Once SkyNet has learnt the likelihood to sufficient accuracy, inference finishes almost instantaneously.

[ascl:1408.020] bamr: Bayesian analysis of mass and radius observations

bamr is an MPI implementation of a Bayesian analysis of neutron star mass and radius data that determines the mass versus radius curve and the equation of state of dense matter. Written in C++, bamr provides some EOS models. This code requires O2scl (ascl:1408.019) be installed before compilation.

[ascl:1905.014] Bandmerge: Merge data from different wavebands

Bandmerge takes in ASCII tables of positions and fluxes of detected astronomical sources in 2-7 different wavebands, and writes out a single table of the merged data. The tool was designed to work with source lists generated by the Spitzer Science Center's MOPEX (ascl:1111.006) software, although it can be "fooled" into running on other data as well.

[ascl:2205.022] BANG: BAyesian decompositioN of Galaxies

BANG (BAyesian decompositioN of Galaxies) models both the photometry and kinematics of galaxies. The underlying model is the superposition of different components, with three possible combinations: (1) bulge + inner disc + outer disc + halo; (2) bulge + disc + halo; and (3) inner disc + outer disc + halo. As CPU parameter estimation can take days, running BANG on GPU is recommended.

[ascl:1801.001] BANYAN_Sigma: Bayesian classifier for members of young stellar associations

BANYAN_Sigma calculates the membership probability that a given astrophysical object belongs to one of the 27 currently known young associations within 150 pc of the Sun, using Bayesian inference. This tool uses the sky position and proper motion measurements of an object, with optional radial velocity (RV) and distance (D) measurements, to derive a Bayesian membership probability. By default, the priors are adjusted such that a probability threshold of 90% will recover 50%, 68%, 82% or 90% of true association members depending on what observables are input (only sky position and proper motion, with RV, with D, with both RV and D, respectively). The algorithm is implemented in a Python package, in IDL, and is also implemented as an interactive web page.

[ascl:2212.012] BANZAI-NRES: BANZAI data reduction pipeline for NRES

The BANZAI-NRES pipeline processes data from the Network of Robotic Echelle Spectrographs (NRES) on the Las Cumbres Observatory network and provides extracted, wavelength calibrated spectra. If the target is a star, it provides stellar classification parameters (e.g., effective temperature and surface gravity) and a radial velocity measurement. The automated radial velocity measurements from this pipeline have a precision of ~ 10 m/s for high signal-to-noise observations. The data flow and infrastructure of this code relies heavily on BANZAI (ascl:2207.031), enabling BANZAI-NRES to focus on analysis that is specific to spectrographs. The wavelength calibration is primarily done using xwavecal (ascl:2212.011). The pipeline propagates an estimate of the formal uncertainties from all of the data processing stages and includes these in the output data products. These are used as weights in the cross correlation function to measure the radial velocity.

[ascl:2207.031] BANZAI: Beautiful Algorithms to Normalize Zillions of Astronomical Images

BANZAI (Beautiful Algorithms to Normalize Zillions of Astronomical Images) processes raw data taken from Las Cumbres Observatory and produces science-quality data products. It is capable of reducing single- or multi-extension FITS files. For historical data, BANZAI can also reduce the data cubes that were produced by the Sinistro cameras.

[ascl:2211.006] baobab: Training data generator for hierarchically modeling strong lenses with Bayesian neural networks

baobab generates images of strongly-lensed systems, given some configurable prior distributions over the parameters of the lens and light profiles as well as configurable assumptions about the instrument and observation conditions. Wrapped around lenstronomy (ascl:1804.012), baobab supports prior distributions ranging from artificially simple to empirical. A major use case for baobab is the generation of training and test sets for hierarchical inference using Bayesian neural networks (BNNs); the code can generate the training and test sets using different priors.

[ascl:2106.009] baofit: Fit cosmological data to measure baryon acoustic oscillations

baofit analyzes cosmological correlation functions to estimate parameters related to baryon acoustic oscillations and redshift-space distortions. It has primarily been used to analyze Lyman-alpha forest autocorrelations and cross correlations with the quasar number density in BOSS data. Fit models are fully three-dimensional and include flexible treatments of redshift-space distortions, anisotropic non-linear broadening, and broadband distortions.

[ascl:1402.025] BAOlab: Baryon Acoustic Oscillations software

Using the 2-point correlation function, BAOlab aids the study of Baryon Acoustic Oscillations (BAO). The code generates a model-dependent covariance matrix which can change the results both for BAO detection and for parameter constraints.

[ascl:1403.013] BAOlab: Image processing program

BAOlab is an image processing package written in C that should run on nearly any UNIX system with just the standard C libraries. It reads and writes images in standard FITS format; 16- and 32-bit integer as well as 32-bit floating-point formats are supported. Multi-extension FITS files are currently not supported. Among its tools are ishape, for size measurements of compact sources; mksynth, for generating synthetic images consisting of a background signal including Poisson noise and a number of pointlike sources; imconvol, for convolving two images (a "source" and a "kernel") with each other using fast Fourier transforms (FFTs) and storing the output as a new image; and kfit2d, for fitting a two-dimensional King model to an image.
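
For illustration, the core operation of imconvol, FFT-based convolution of a "source" image with a "kernel" image, can be sketched in Python with scipy; this is an equivalent computation, not BAOlab's C interface:

```python
import numpy as np
from scipy.signal import fftconvolve

# Illustrative stand-in for BAOlab's imconvol: a flat Poisson-noise
# "source" image convolved with a normalized Gaussian "kernel".
source = np.random.default_rng(0).poisson(100.0, size=(256, 256)).astype(float)
g = np.exp(-0.5 * (np.arange(-10, 11) / 3.0) ** 2)
kernel = np.outer(g, g)
kernel /= kernel.sum()          # normalize so total flux is preserved

convolved = fftconvolve(source, kernel, mode="same")
```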

[ascl:1810.002] Barcode: Bayesian reconstruction of cosmic density fields

Barcode (BAyesian Reconstruction of COsmic DEnsity fields) samples the primordial density fields compatible with a set of dark matter density tracers after cosmic evolution observed in redshift space. It uses a redshift space model based on the analytic solution of coherent flows within a Hamiltonian Monte Carlo posterior sampling of the primordial density field; this method is applicable to analytically derivable structure formation models, such as the Zel'dovich approximation, but also higher order schemes such as augmented Lagrangian perturbation theory or even particle mesh models. The algorithm is well-suited for analysis of the dark matter cosmic web implied by the observed spatial distribution of galaxy clusters, such as that obtained from X-ray, SZ, or weak-lensing surveys, as well as that of the intergalactic medium sampled by the Lyman alpha forest. In these cases, virialized motions are negligible and the tracers cannot be modeled as point-like objects. Barcode can be used in all of these contexts as a baryon acoustic oscillation reconstruction algorithm.

[ascl:2008.008] Barry: Modular BAO fitting code

Barry compares different BAO models. It removes as many barriers and complications to BAO model fitting as possible and allows each component of the process to remain independent, allowing for detailed comparisons of individual parts. It contains datasets, model fitting tools, and model implementations incorporating different descriptions of non-linear physics and algorithms for isolating the BAO (Baryon Acoustic Oscillation) feature.

[ascl:1608.004] BART: Bayesian Atmospheric Radiative Transfer fitting code

BART implements a Bayesian, Monte Carlo-driven, radiative-transfer scheme for extracting parameters from spectra of planetary atmospheres. BART combines a thermochemical-equilibrium code, a one-dimensional line-by-line radiative-transfer code, and the Multi-core Markov-chain Monte Carlo statistical module to constrain the atmospheric temperature and chemical-abundance profiles of exoplanets.

[ascl:1807.018] BARYCORR: Python interface for barycentric RV correction

BARYCORR is a Python interface for ZBARYCORR (ascl:1807.017); it requires the measured redshift and returns the corrected barycentric velocity and time correction.

[ascl:1808.001] Barycorrpy: Barycentric velocity calculation and leap second management

barycorrpy (BCPy) is a Python implementation of Wright and Eastman's 2014 code (ascl:1807.017) that calculates precise barycentric corrections well below the 1 cm/s level. This level of precision is required in the search for 1 Earth-mass planets in the Habitable Zones of Sun-like stars by the Radial Velocity (RV) method, where the maximum semi-amplitude is about 9 cm/s. BCPy was developed for the pipelines of the next-generation Doppler spectrographs Habitable-zone Planet Finder (HPF) and NEID. An automated leap second management routine improves upon the one available in Astropy. It checks for and downloads a new leap second file before converting from the UT time scale to TDB. The code also includes a converter for JDUTC to BJDTDB.
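
A minimal sketch of the intended usage, assuming the get_BC_vel entry point and keyword names from the package's documentation (verify against the installed version; the target and observatory below are arbitrary examples):

```python
from barycorrpy import get_BC_vel
from barycorrpy.utc_tdb import JDUTC_to_BJDTDB

# Barycentric velocity correction (m/s) for a named star observed
# from a named observatory at a given JD(UTC); keyword names assumed
# from the package documentation.
vel, warnings, status = get_BC_vel(JDUTC=2458000.5, starname="Tau Ceti",
                                   obsname="CTIO", ephemeris="de430")

# JDUTC -> BJDTDB conversion, as mentioned in the description above.
bjd_tdb, warnings, status = JDUTC_to_BJDTDB(2458000.5, starname="Tau Ceti",
                                            obsname="CTIO")
```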

[ascl:2401.012] baryon-sweep: Outlier rejection algorithm for JWST/NIRSpec IFS data

baryon-sweep performs robust outlier rejection while preserving the signal of the science target. The code works as a standalone solution or as a supplement to the current pipeline software. baryon-sweep creates the 2D pixel mask and mask layers, processes the sky (non-science target) spaxels, and creates a post-processed cube ready for use.

[ascl:1601.017] BASCS: Bayesian Separation of Close Sources

BASCS models spatial and spectral information from overlapping sources and the background, and jointly estimates all individual source parameters. The use of spectral information improves the detection of both faint and closely overlapping sources and increases the accuracy with which source parameters are inferred.

[ascl:1608.007] BASE-9: Bayesian Analysis for Stellar Evolution with nine variables

The BASE-9 (Bayesian Analysis for Stellar Evolution with nine variables) software suite recovers star cluster and stellar parameters from photometry and is useful for analyzing single-age, single-metallicity star clusters, binaries, or single stars, and for simulating such systems. BASE-9 uses a Markov chain Monte Carlo (MCMC) technique along with brute force numerical integration to estimate the posterior probability distribution for the age, metallicity, helium abundance, distance modulus, line-of-sight absorption, and parameters of the initial-final mass relation (IFMR) for a cluster, and for the primary mass, secondary mass (if a binary), and cluster probability for every potential cluster member. The MCMC technique is used for the cluster quantities (the first six items listed above) and numerical integration is used for the stellar quantities (the last three items in the above list).

[ascl:1208.010] BASE: Bayesian Astrometric and Spectroscopic Exoplanet Detection and Characterization Tool

BASE is a novel program for the combined or separate Bayesian analysis of astrometric and radial-velocity measurements of potential exoplanet hosts and binary stars. The tool fulfills two major tasks of exoplanet science, namely the detection of exoplanets and the characterization of their orbits. BASE was developed to provide the possibility of an integrated Bayesian analysis of stellar astrometric and Doppler-spectroscopic measurements with respect to their binary or planetary companions' signals, correctly treating the astrometric measurement uncertainties and allowing exploration of the whole parameter space without the need for informative prior constraints. The tool automatically diagnoses convergence of its Markov chain Monte Carlo (MCMC) sampler to the posterior and regularly outputs status information. For orbit characterization, BASE delivers important results such as the probability densities and correlations of model parameters and derived quantities. BASE is a highly configurable command-line tool developed in Fortran 2008 and compiled with GFortran. Options can be used to control the program's behaviour and supply information such as the stellar mass or prior information. Any option can be supplied in a configuration file and/or on the command line.

[ascl:1308.006] BASIN: Beowulf Analysis Symbolic INterface

BASIN (Beowulf Analysis Symbolic INterface) is a flexible, integrated suite of tools for multiuser parallel data analysis and visualization that allows researchers to harness the power of Beowulf PC clusters and multi-processor machines without necessarily being experts in parallel programming. It also includes general tools for data distribution and parallel operations on distributed data for developing libraries for specific tasks.

[ascl:2110.010] BASTA: BAyesian STellar Algorithm

BASTA determines properties of stars using a pre-computed grid of stellar models. It calculates the probability density function of a given stellar property based on a set of observational constraints defined by the user. BASTA is very versatile and has been used in a large variety of studies requiring robust determination of fundamental stellar properties.

[ascl:2304.003] BatAnalysis: HEASOFT wrapper for processing Swift-BAT data

BatAnalysis processes and analyzes Swift Burst Alert Telescope (BAT) survey data in a comprehensive computational pipeline. The code downloads BAT survey data, batch processes the survey observations, and extracts light curves and spectra for each survey observation for a given source. BatAnalysis allows for the use of BAT survey data in advanced analyses of astrophysical sources including pulsars, pulsar wind nebulae, active galactic nuclei, and other known/unknown transient events that may be detected in the hard X-ray band. BatAnalysis can also create mosaicked images at different time bins and extract light curves and spectra from the mosaicked images for a given source.

[ascl:1510.002] batman: BAsic Transit Model cAlculatioN in Python

batman provides fast calculation of exoplanet transit light curves and supports calculation of light curves for any radially symmetric stellar limb darkening law. It uses an integration algorithm for models that cannot be quickly calculated analytically, and in typical use, the batman Python package can calculate a million model light curves in well under ten minutes for any limb darkening profile.
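
The package's documented usage pattern collects the system parameters in a params object, initializes a model on a time grid, and evaluates the light curve; the parameter values below are arbitrary:

```python
import numpy as np
import batman

params = batman.TransitParams()
params.t0 = 0.0                 # time of inferior conjunction
params.per = 3.0                # orbital period [days]
params.rp = 0.1                 # planet radius [stellar radii]
params.a = 8.0                  # semi-major axis [stellar radii]
params.inc = 88.0               # orbital inclination [degrees]
params.ecc = 0.0                # eccentricity
params.w = 90.0                 # longitude of periastron [degrees]
params.limb_dark = "quadratic"  # limb-darkening law
params.u = [0.3, 0.2]           # limb-darkening coefficients

t = np.linspace(-0.05, 0.05, 1000)   # times at which to evaluate [days]
model = batman.TransitModel(params, t)
flux = model.light_curve(params)     # relative flux versus time
```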

[ascl:1612.021] BaTMAn: Bayesian Technique for Multi-image Analysis

Bayesian Technique for Multi-image Analysis (BaTMAn) characterizes any astronomical dataset containing spatial information and performs a tessellation based on the measurements and errors provided as input. The algorithm iteratively merges spatial elements as long as they are statistically consistent with carrying the same information (i.e. identical signal within the errors). The output segmentations successfully adapt to the underlying spatial structure, regardless of its morphology and/or the statistical properties of the noise. BaTMAn identifies (and keeps) all the statistically-significant information contained in the input multi-image (e.g. an IFS datacube). The main aim of the algorithm is to characterize spatially-resolved data prior to their analysis.

[ascl:2101.002] BAYES-LOSVD: Bayesian framework for non-parametric extraction of the LOSVD

BAYES-LOSVD performs non-parametric extraction of the Line-Of-Sight Velocity Distributions in galaxies. Written in Python, it uses Stan (ascl:1801.003) to perform all the computations and provides reliable uncertainties for all the parameters of the model chosen for the fit. The code comes with a large number of features, including read-in routines for some of the most popular IFU spectrographs and surveys, such as ATLAS3D, CALIFA, MaNGA, MUSE-WFM, SAMI, and SAURON.

[ascl:1505.027] BAYES-X: Bayesian inference tool for the analysis of X-ray observations of galaxy clusters

The great majority of X-ray measurements of cluster masses in the literature assume parametrized functional forms for the radial distribution of two independent cluster thermodynamic properties, such as electron density and temperature, to model the X-ray surface brightness. These radial profiles (e.g. β-model) have an amplitude normalization parameter and two or more shape parameters. BAYES-X uses a cluster model to parametrize the radial X-ray surface brightness profile and explore the constraints on both model parameters and physical parameters. BAYES-X is programmed in Fortran and uses MultiNest (ascl:1109.006) as the Bayesian inference engine.

[ascl:2410.005] BayeSED: Bayesian SED synthesis and analysis of galaxies and AGNs

BayeSED implements full Bayesian interpretation of spectral energy distributions (SEDs) of galaxies and AGNs. It performs Bayesian parameter estimation using posterior probability distributions (PDFs) and Bayesian SED model comparison using Bayesian evidence. Its latest version, BayeSED3, supports various built-in SED models and can emulate other SED models using machine learning techniques.

[ascl:2002.018] Bayesfit: Command-line program for combining Tempo2 and MultiNest components

Bayesfit pulls together Tempo2 (ascl:1210.015) and MultiNest (ascl:1109.006) components to provide additional functionality such as the specification of priors; Nelder–Mead optimization of the maximum-posterior point; and the capability of computing the partially marginalized likelihood for a given subset of timing-model parameters. Bayesfit is a single Python command-line application.

[ascl:1407.015] BayesFlare: Bayesian method for detecting stellar flares

BayesFlare identifies flaring events in light curves released by the Kepler mission; it identifies even weak events by making use of the flare signal shape. The package contains functions to perform Bayesian hypothesis testing comparing the probability of light curves containing flares to that of them containing noise (or non-flare-like) artifacts. BayesFlare includes functions in its amplitude-marginalizer suite to account for underlying sinusoidal variations in light curve data; it includes such variations in the signal model, and then analytically marginalizes over them.

[ascl:1209.001] Bayesian Blocks: Detecting and characterizing local variability in time series

Bayesian Blocks is a time-domain algorithm for detecting localized structures (bursts), revealing pulse shapes within bursts, and generally characterizing intensity variations. The input is raw time series data, in almost any form. Three data modes are elaborated: (1) time-tagged events, (2) binned counts, and (3) measurements at arbitrary times with normal errors. The output is the most probable segmentation of the observation interval into sub-intervals during which the signal is perceptibly constant, i.e. has no statistically significant variations. The idea is not that the source is deemed to actually have this discontinuous, piecewise constant form, rather that such an approximate and generic model is often useful. Treatment of data gaps, variable exposure, extension to piecewise linear and piecewise exponential representations, multi-variate time series data, analysis of variance, data on the circle, other data modes, and dispersed data are included.

This implementation is exact and replaces the greedy, approximate, and outdated algorithm implemented in BLOCK.
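
The algorithm is available in several libraries; as an illustration, the astropy implementation applied to data mode (1), time-tagged events, returns the edges of the optimal segmentation:

```python
import numpy as np
from astropy.stats import bayesian_blocks

# A burst of events at t ~ 60 superposed on a uniform background.
rng = np.random.default_rng(1)
t = np.sort(np.concatenate([rng.uniform(0.0, 100.0, 500),
                            rng.normal(60.0, 1.0, 200)]))

# Edges of blocks within which the event rate is consistent with
# being constant; p0 is the false-alarm probability per block.
edges = bayesian_blocks(t, fitness="events", p0=0.01)
```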

[ascl:2204.004] Bayesian SZNet: Bayesian deep learning to predict redshift with uncertainty

Bayesian SZNet predicts spectroscopic redshift through use of a Bayesian convolutional network. It uses Monte Carlo dropout to associate predictions with predictive uncertainties, allowing the user to determine unusual or problematic spectra for visual inspection and thresholding to balance between the number of incorrect redshift predictions and coverage.

[ascl:2112.020] BayesicFitting: Model fitting and Bayesian evidence calculation package

BayesicFitting fits models to data. Data in this context means a set of (measured) points x and y. The model provides some (mathematical) relation between the x and y. Fitting adapts the model such that certain criteria are optimized. The BayesicFitting toolbox also determines whether one model fits the data better than another, making the toolbox particularly powerful. The package consists of more than 100 Python classes, of which one third are model classes. Another third are fitters in one guise or another along with additional tools, and the remaining third is used for Nested Sampling.

[ascl:2404.011] BayeSN: NumPyro implementation of BayeSN

BayeSN performs hierarchical Bayesian SED modeling of type Ia supernova light curves. This probabilistic optical-NIR SED model analyzes the population distribution of physical properties as well as cosmology-independent distance estimation for individual SNe. BayeSN is built with NumPyro and JAX (ascl:2111.002) and provides support for GPU acceleration.

[ascl:1711.004] BayesVP: Full Bayesian Voigt profile fitting

BayesVP offers a Bayesian approach for modeling Voigt profiles in absorption spectroscopy. The code fits the absorption line profiles within specified wavelength ranges and generates posterior distributions for the column density, Doppler parameter, and redshifts of the corresponding absorbers. The code uses publicly available efficient parallel sampling packages to sample the posterior and thus can be run on parallel platforms. BayesVP supports simultaneous fitting of multiple absorption components in high-dimensional parameter space. The package includes additional utilities such as explicit specification of priors of model parameters, a continuum model, Bayesian model comparison criteria, and a posterior sampling convergence check.
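
The Voigt line shape being fit is the convolution of a Gaussian (Doppler-broadened) core with a Lorentzian (natural-broadening) wing; as a quick sketch of the profile itself (not BayesVP's interface), scipy provides it directly:

```python
import numpy as np
from scipy.special import voigt_profile

v = np.linspace(-200.0, 200.0, 1001)  # velocity offsets [km/s]
sigma, gamma = 20.0, 5.0              # Gaussian sigma, Lorentzian HWHM
phi = voigt_profile(v, sigma, gamma)  # unit-area Voigt line profile
```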

[ascl:2207.021] BAYGAUD: BAYesian GAUssian Decomposer

BAYGAUD (BAYesian GAUssian Decomposer) implements the decomposition of velocity profiles in a data cube and subsequent classification. It uses MultiNest (ascl:1109.006) for calculating the posterior distribution and the evidence for a given likelihood function. The code models a given line profile with an optimal number of Gaussians based on the Bayesian Markov Chain Monte Carlo (MCMC) techniques. BAYGAUD is parallelized using the Message-Passing Interface (MPI) standard, which reduces the time needed to calculate the evidence using MCMC techniques.

[ascl:1805.022] BCcodes: Bolometric Corrections and Synthetic Stellar Photometry

BCcodes computes bolometric corrections and synthetic colors in up to 5 filters for input values of the stellar parameters Teff, log(g), [Fe/H], E(B-V) and [alpha/Fe].

[ascl:2308.010] BCemu: Model baryonic effects in cosmological simulations

BCemu provides emulators to model the suppression in the power spectrum due to baryonic feedback processes. These emulators are based on the baryonification model, in which gravity-only N-body simulation results are manipulated to include the impact of baryonic feedback processes. The package also includes a three-parameter baryonification model; one version assumes the three parameters are independent of redshift, while a second allows them to be redshift dependent.

[ascl:2110.020] BCES: Linear regression for data with measurement errors and intrinsic scatter

BCES performs robust linear regression on (X,Y) data points where both X and Y have measurement errors. The fitting method is bivariate correlated errors and intrinsic scatter (BCES). Some of the advantages of BCES regression over ordinary least-squares fitting are that it allows for measurement errors on both variables and permits those errors to be dependent. Furthermore, it permits the magnitudes of the measurement errors to depend on the measurements, and other lines, such as the bisector and the orthogonal regression, can be constructed.
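
As a minimal sketch of the idea (not the package's interface), the BCES(Y|X) estimator of Akritas & Bershady (1996) is an ordinary least-squares slope debiased by the measurement-error variance in x and the x-y error covariance:

```python
import numpy as np

def bces_yx(x, y, x_err, xy_cov=None):
    """BCES(Y|X) slope and intercept; x_err are the per-point x
    measurement errors, xy_cov the optional x-y error covariances.
    A full implementation also supplies uncertainty estimates and
    the bisector and orthogonal variants."""
    xy_cov = np.zeros_like(x) if xy_cov is None else xy_cov
    xm, ym = x.mean(), y.mean()
    slope = ((np.mean((x - xm) * (y - ym)) - xy_cov.mean())
             / (np.mean((x - xm) ** 2) - np.mean(x_err ** 2)))
    intercept = ym - slope * xm
    return intercept, slope
```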

[ascl:2307.002] BE-HaPPY: Bias emulator for halo power spectrum

BE-HaPPY (Bias Emulator for Halo Power spectrum Python) facilitates the analysis of future large-scale surveys by providing an accurate, easy-to-use, and computationally inexpensive method for computing the halo bias in the presence of massive neutrinos. Provided with a linear power spectrum, the package computes a new power spectrum according to the chosen configuration. BE-HaPPY handles linear, polynomial, and perturbation-theory bias models, as well as the Kaiser and Scoccimarro redshift-space distortion models; other available options include real or redshift space, the total neutrino mass, and a choice of mass bins or scale array, among others.

[ascl:1907.011] beamconv: Cosmic microwave background detector data simulator

beamconv simulates the scanning of the CMB sky while incorporating realistic beams and scan strategies. It uses (spin-)spherical harmonic representations of the (polarized) beam response and sky to generate simulated CMB detector signal timelines. Beams can be arbitrarily shaped. Pointing timelines can be read in or calculated on the fly; optionally, the results can be binned on the sphere.

[ascl:1905.006] beamModelTester: Model evaluation for fixed antenna phased array radio telescopes

beamModelTester enables evaluation of models of the variation in sensitivity and apparent polarization of fixed antenna phased array radio telescopes. The sensitivity of such instruments varies with respect to the orientation of the source to the antenna, resulting in variation in sensitivity over altitude and azimuth that is not consistent with respect to frequency due to other geometric effects. In addition, the different relative orientation of orthogonal pairs of linear antennae produces a difference in sensitivity between the antennae, leading to an artificial apparent polarization. Comparing the model with observations made using the given telescope makes it possible to evaluate the model's performance; the results of this evaluation can provide a figure of merit for the model and guide improvements to it. This system also enables plotting of results from a single station observation over a variety of parameters.

[ascl:1104.013] BEARCLAW: Boundary Embedded Adaptive Refinement Conservation LAW package

The BEARCLAW package is a multidimensional, Eulerian AMR-capable computational code written in Fortran to solve hyperbolic systems for astrophysical applications. It is part of AstroBEAR (ascl:1104.002), a hydrodynamic and magnetohydrodynamic code environment designed for a variety of astrophysical applications which allows simulations in 2, 2.5 (i.e., cylindrical), and 3 dimensions, in either Cartesian or curvilinear coordinates.

[ascl:1908.013] BEAST: Bayesian Extinction And Stellar Tool

BEAST (Bayesian Extinction and Stellar Tool) fits the ultraviolet to near-infrared photometric SEDs of stars to extract stellar and dust extinction parameters. The stellar parameters are age (t), mass (M), metallicity (Z), and distance (d). The dust extinction parameters are dust column (Av), average grain size (Rv), and mixing between type A and B extinction curves (fA).

[ascl:1306.006] BEHR: Bayesian Estimation of Hardness Ratios

BEHR is a standalone command-line C program designed to quickly estimate the hardness ratios and their uncertainties for astrophysical sources. It is especially useful in the Poisson regime of low counts, and computes the proper uncertainty regardless of whether the source is detected in both passbands or not.

[submitted] BELLAMY: A cross-matching package for the cynical astronomer

BELLAMY is a cross-matching algorithm designed primarily for radio images that aims to match all sources in the supplied target catalogue to sources in a reference catalogue by calculating the probability of a match. BELLAMY utilises not only the position of a source on the sky, but also the flux data, to calculate this probability, determining the most probable match in the reference catalogue to the target source. Additionally, BELLAMY attempts to undo any spatial distortion that may be affecting the target catalogue by creating a model of the offsets of matched sources, which is then applied to unmatched sources. This combines to produce an iterative cross-matching algorithm that provides the user with an obvious measure of how confident they should be with the results of a cross-match.

[ascl:2408.010] BELTCROSS2: Calculate the closest approaches of asteroids to meteoroid streams

BELTCROSS2 calculates the closest approaches of an asteroid to the mean orbits of meteoroid streams. It is especially useful for checking whether an asteroid that was observed to become active passed through a meteoroid stream, and through which stream, a short time before the beginning of the activity. BELTCROSS2 provides the basic characteristics of the closest encounter of the asteroid with the stream.

[ascl:1306.013] Bessel: Fast Bessel Function Jn(z) Routine for Large n,z

Bessel, written in the C programming language, uses an accurate scheme for evaluating Bessel functions of high order. It has been extensively tested against a number of other routines, demonstrating its accuracy and efficiency.

[ascl:1901.009] bettermoments: Line-of-sight velocity calculation

bettermoments measures precise line-of-sight velocities from Doppler shifted lines to determine small scale deviations indicative of, for example, embedded planets.

[ascl:2409.010] BeyonCE: Beyond Common Eclipsers

BeyonCE (Beyond Common Eclipsers) explores the large parameter space of eclipsing disc systems. The fitting code reduces the parameter space encompassed by the transit of circumsecondary disc (CSD) systems with azimuthally symmetric, non-uniform optical-depth profiles to constrain the size and orientation of discs with a complex sub-structure. BeyonCE does this by rejecting disc geometries that do not reproduce the measured gradients within their light curves.

[ascl:1402.015] BF_dist: Busy Function fitting

The "busy function" accurately describes the characteristic double-horn HI profile of many galaxies. Implemented in a C/C++ library and Python module called BF_dist, it is a continuous, differentiable function that consists of only two basic functions, the error function, erf(x), and a polynomial, |x|^n, of degree n >= 2. BF_dist offers great flexibility in fitting a wide range of HI profiles from the Gaussian profiles of dwarf galaxies to the broad, asymmetric double-horn profiles of spiral galaxies, and can be used to parametrize observed HI spectra of galaxies and the construction of spectral templates for simulations and matched filtering algorithms accurately and efficiently.

[submitted] BFast

A fast GPU-based bispectrum estimator implemented using JAX.

[ascl:1504.020] BGLS: A Bayesian formalism for the generalised Lomb-Scargle periodogram

BGLS calculates the Bayesian Generalized Lomb-Scargle periodogram. It takes as input arrays with a time series, a dataset and errors on those data, and returns arrays with sampled periods and the periodogram values at those periods.
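
For orientation, the closely related (non-Bayesian) generalized Lomb-Scargle periodogram can be computed with astropy from the same three arrays; BGLS instead returns normalized probabilities over the sampled periods. The input file below is hypothetical:

```python
import numpy as np
from astropy.timeseries import LombScargle

t, y, dy = np.loadtxt("rv_series.txt", unpack=True)  # hypothetical data
frequency, power = LombScargle(t, y, dy).autopower()
best_period = 1.0 / frequency[np.argmax(power)]
```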

[ascl:1806.002] BHDD: Primordial black hole binaries code

BHDD (BlackHolesDarkDress) simulates primordial black hole (PBH) binaries that are clothed in dark matter (DM) halos. The software uses N-body simulations and analytical estimates to follow the evolution of PBH binaries formed in the early Universe.

[ascl:1206.005] bhint: High-precision integrator for stellar systems

bhint is a post-Newtonian, high-precision integrator for stellar systems surrounding a super-massive black hole. The algorithm makes use of the fact that the Keplerian orbits in such a potential can be calculated directly and are only weakly perturbed. For a given average number of steps per orbit, bhint is almost a factor of 100 more accurate than the standard Hermite method.

[ascl:2109.024] BHJet: Semi-analytical black hole jet model

BHJet models steady-state SEDs of jets launched from accreting black holes. This semi-analytical, multi-zone jet model is applicable across the entire black hole mass scale, from black hole X-ray binaries (both low and high mass) to active galactic nuclei of any class (from low-luminosity AGN to flat spectrum radio quasars). It is designed to be more comparable than other codes to GRMHD simulations and/or RMHD semi-analytical solutions.

[ascl:1802.013] BHMcalc: Binary Habitability Mechanism Calculator

BHMcalc provides renditions of the instantaneous circumbinary habitable zone (CHZ) and also calculates BHM properties of the system, including those related to the rotational evolution of the stellar components and the combined XUV and SW fluxes as measured at different distances from the binary. Moreover, it provides numerical results that can be further manipulated and used to calculate other properties.

[ascl:2105.001] BHPToolkit: Black Hole Perturbation Toolkit

The Black Hole Perturbation Toolkit models gravitational radiation from small mass-ratio binaries as well as from the ringdown of black holes. The former are key sources for the future space-based gravitational wave detector LISA. BHPToolkit brings together core elements of multiple scattered black hole perturbation theory codes into a Toolkit that can be used by all; different tools can be installed individually by users depending on need and interest.

[ascl:9910.006] BHSKY: Visual distortions near a black hole

BHSKY (copyright 1999 by Robert J. Nemiroff) computes the visual distortion effects visible to an observer traveling around and descending near a non-rotating black hole. The codes are general relativistically accurate and incorporate concepts such as large-angle deflections, image magnifications, multiple imaging, blue-shifting, and the location of the photon sphere. Once star.dat is edited to define the position and orientation of the observer relative to the black hole, bhsky_table should be run to create a table of photon deflection angles. Next bhsky_image reads this table and recomputes the perceived positions of stars in star.num, the Yale Bright Star Catalog. Lastly, bhsky_camera plots these results. The code currently tracks only the two brightest images of each star, and hence becomes noticeably incomplete within 1.1 times the Schwarzschild radius.

[ascl:1501.009] BIANCHI: Bianchi VIIh Simulations

BIANCHI provides functionality to support the simulation of Bianchi Type VIIh induced temperature fluctuations in CMB maps of a universe with shear and rotation. The implementation is based on the solutions to the Bianchi models derived by Barrow et al. (1985), which do not incorporate any dark energy component. Functionality is provided to compute the induced fluctuations on the sphere directly in either real or harmonic space.

[ascl:2406.016] BiaPy: Bioimage analysis pipeline builder

BiaPy provides deep-learning workflows for a large variety of image analysis tasks, including 2D and 3D semantic segmentation, instance segmentation, object detection, image denoising, single image super-resolution, self-supervised learning and image classification. Though developed specifically for bioimages, it can be used for watershed-based instance segmentation for friends-of-friends proto-haloes.

[ascl:1908.021] bias_emulator: Halo bias emulator

bias_emulator models the clustering of halos on large scales. It incorporates the cosmological dependence of the bias beyond the mapping of halo mass to peak height. Precise measurements of the halo bias in the simulations are interpolated across cosmological parameter space to obtain the halo bias at any point in parameter space within the simulation cloud. A tool to produce realizations of correlated noise for propagating the modeling uncertainty into error budgets that use the emulator is also provided.

[ascl:1312.004] BIE: Bayesian Inference Engine

The Bayesian Inference Engine (BIE) is an object-oriented library of tools written in C++ designed explicitly to enable Bayesian update and model comparison for astronomical problems. To facilitate "what if" exploration, BIE provides a command line interface (written with Bison and Flex) to run input scripts. The output of the code is a simulation of the Bayesian posterior distribution, from which summary statistics can be determined, e.g., by taking moments or computing confidence intervals. All of these quantities are fundamentally integrals, and the Markov chain approach produces variates $\theta$ distributed according to $P(\theta|D)$, so moments are trivially obtained by summing over the ensemble of variates.
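
For example, given an ensemble of posterior samples, the summary statistics follow in a few lines (a generic sketch, independent of BIE's C++ interface; the sample file is hypothetical):

```python
import numpy as np

theta = np.loadtxt("posterior_samples.txt")         # hypothetical MCMC output
mean = theta.mean(axis=0)                           # first moment
var = theta.var(axis=0)                             # second central moment
lo, hi = np.percentile(theta, [2.5, 97.5], axis=0)  # 95% credible interval
```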

[ascl:2106.036] BiFFT: Fast estimation of the bispectrum

BiFFT uses Fourier transforms to implement the Dirac delta function that enforces a closed triangle of three k-vectors; this allows very fast calculation of the bispectrum. Once the C code associated with the package is compiled and the source folder directed to the location of the C code, the user can run the code using the Python wrapper. The binning in each function has been tested over the course of many years and can be used out of the box without ever touching the underlying C code. However, the cylindrical bispectrum calculation is much more sensitive to sample variance; its default binning is quite coarse and might need adjusting (and testing) for some datasets.

[ascl:1711.021] Bifrost: Stream processing framework for high-throughput applications

Bifrost is a stream processing framework that eases the development of high-throughput processing CPU/GPU pipelines. It is designed for digital signal processing (DSP) applications within radio astronomy. Bifrost uses a flexible ring buffer implementation that allows different signal processing blocks to be connected to form a pipeline. Each block may be assigned to a CPU core, and the ring buffers are used to transport data to and from blocks. Processing blocks may be run on either the CPU or GPU, and the ring buffer will take care of memory copies between the CPU and GPU spaces.

[ascl:1208.007] Big MACS: Accurate photometric calibration

Big MACS is a Python program that estimates an accurate photometric calibration from only an input catalog of stellar magnitudes and filter transmission functions. The user does not have to measure color terms, which can be difficult to characterize. Supplied with filter transmission functions, Big MACS synthesizes an expected stellar locus for the data and then simultaneously solves for all unknown zeropoints when fitting to the instrumental locus. The code uses a spectroscopic model for the SDSS stellar locus in color-color space and filter functions to compute the expected locus. The stellar locus model is corrected for Milky Way reddening. If SDSS or 2MASS photometry is available for stars in the field, Big MACS can yield a highly accurate absolute calibration.

[ascl:2407.011] bigfile: A reproducible massively parallel IO library for hierarchical data

bigfile stores data from cosmology simulations on HPC systems and beyond. It provides a hierarchical structure of data columns via File, Dataset and Column. A Column stores a two-dimensional table. Numerically typed columns are supported, and both numerical and string attributes can be attached to a Column. Type casting is performed on the fly if read/write operations request a different data type than the file has stored.

[ascl:2211.017] BiGONLight: Bi-local Geodesic Operators framework for Numerical Light propagation

BiGONLight (Bi-local geodesic operators framework for numerical light propagation) encodes the Bi-local Geodesic Operators formalism (BGO) to study light propagation in the geometric optics regime in General Relativity. The parallel transport equations, the optical tidal matrix, and the geodesic deviation equations for the bilocal operators are expressed in 3+1 form and encoded in BiGONLight as Mathematica functions. The bilocal operators are used to obtain all possible optical observables by combining them with the observer and emitter four-velocities and four-accelerations. The user can choose the position of the source and the observer anywhere along the null geodesic with any four-velocities and four-accelerations.

[ascl:2106.031] BiHalofit: Fitting formula of non-linear matter bispectrum

BiHalofit fits the matter bispectrum in the nonlinear regime, calibrated by high-resolution cosmological N-body simulations of 41 cold dark matter models around the Planck 2015 best-fit parameters. The parameterization is similar to that in Halofit (ascl:1402.032). The simulation volume is sufficiently large to cover almost all measurable triangle bispectrum configurations in the universe, and the function is calibrated using one-loop perturbation theory at large scales. BiHalofit predicts the weak-lensing bispectrum and will assist current and future weak-lensing surveys and cosmic microwave background lensing experiments.

[ascl:1901.011] Bilby: Bayesian inference library

Bilby provides a user-friendly interface to perform parameter estimation. It is primarily designed and built for inference of compact binary coalescence events in interferometric data, such as the analysis of compact binary mergers, but also supports other signal models, including supernovae and the remnants of binary neutron star mergers, and can be used for more general problems. The software is flexible, allowing the user to change the signal model, implement new likelihood functions, and add new detectors. Bilby can also be used to do population studies using hierarchical Bayesian modelling.

[ascl:2307.036] binary_c-python: Stellar population synthesis tool and interface to binary_c

binary_c-python provides a manager for and interface to the binary_c framework (ascl:2307.035), and rapidly evolves individual systems and populations of stars. It provides functions such as data processing tools and initial distribution functions for stellar properties. binary_c-python also includes tools to run large grids of (binary) stellar systems on servers or distributed systems.

[ascl:2307.035] binary_c: Stellar population synthesis software framework

The binary_c software framework models the evolution of single, binary and multiple stars, including stellar evolution and nucleosynthesis. Stellar evolution includes wind mass loss, rotation, thermal pulses, magnetic braking, pre-main sequence evolution, supernovae and kicks, and neutron stars; binary-star evolution includes mass transfer, gravitational-wave losses, tides, novae, circumbinary discs, and merging stars. binary_c natively includes nucleosynthesis, and, as it is designed for stellar population calculations, it is lightweight and versatile. binary_c works in standalone, virtual and HPC environments, and its support software contains tools for development and data analysis. A version in Python, binary_c-python (ascl:2307.036), is also available.

[ascl:2404.028] binary_precursor: Light curve model of supernova precursors powered by compact object companions

binary_precursor models light curves of supernova (SN) precursors powered by a pre-SN outburst accompanying accretion onto a compact object companion. Though only one of several possible models, it is useful for interpreting (bright) SN precursors that greatly exceed the Eddington limit of massive stars, as observed in a fraction of SNe with dense circumstellar matter (CSM) around the progenitor. It offers a number of editable parameters, including compact object mass, progenitor mass, progenitor radius, and opacity. The initial CSM velocity can be normalized by the progenitor escape velocity (the xi parameter), and the CSM mass, ionization temperature, and binary separation can also be specified.

[ascl:2009.025] Binary-Speckle: Binary or triple star parameters

Binary-Speckle reduces Speckle or AO data from raw data to deconvolved images (in Fourier space), determines the parameters of a binary or triple, and finds limits for undetected companion stars.

[ascl:1710.008] Binary: Accretion disk evolution

Binary computes the evolution of an accretion disc interacting with a binary system. It has been developed and used to study the coupled evolution of supermassive BH binaries and gaseous accretion discs.

[ascl:1811.003] binaryBHexp: On-the-fly visualizations of precessing binary black holes

binaryBHexp (binary black hole explorer) uses surrogate models of numerical simulations to generate on-the-fly interactive visualizations of precessing binary black holes. These visualizations can be generated in a few seconds and at any point in the 7-dimensional parameter space of the underlying surrogate models. These visualizations provide a valuable means to understand and gain insights about binary black hole systems and gravitational physics such as those detected by the LIGO gravitational wave detector.

[ascl:2102.025] binaryoffset: Detecting and correcting the binary offset effect in CCDs

binaryoffset identifies the binary offset effect in images from any detector. The easiest input to work with is a dark or bias image that is spatially flat. The code can also be run on images that are not spatially flat, assuming that there is some model of the signal on the CCD that can be used to produce a residual image.

[ascl:2012.004] BinaryStarSolver: Orbital elements of binary stars solver

Given a series of radial velocities as a function of time for a star in a binary system, BinaryStarSolver solves for various orbital parameters. Namely, it solves for eccentricity (e), argument of periastron (ω), velocity amplitude (K), long term average radial velocity (γ), and orbital period (P). If the orbital parameters of a primary star are already known, it can also find the orbital parameters of a companion star, with only a few radial velocity data points.
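
The underlying model is the standard Keplerian radial-velocity curve; below is a generic sketch (not the package's own API) of evaluating it from the fitted elements:

```python
import numpy as np

def radial_velocity(t, P, e, omega, K, gamma, t_peri=0.0):
    """RV at times t from period P, eccentricity e, argument of
    periastron omega [rad], semi-amplitude K, and systemic velocity
    gamma, with periastron passage at t_peri."""
    M = np.asarray(2.0 * np.pi * (t - t_peri) / P, dtype=float)  # mean anomaly
    E = M.copy()                        # Kepler's equation: M = E - e sin(E)
    for _ in range(100):                # fixed-point iteration (fine for e < ~0.8)
        E = M + e * np.sin(E)
    nu = 2.0 * np.arctan2(np.sqrt(1.0 + e) * np.sin(E / 2.0),
                          np.sqrt(1.0 - e) * np.cos(E / 2.0))  # true anomaly
    return gamma + K * (np.cos(nu + omega) + e * np.cos(omega))
```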

[ascl:1312.012] BINGO: BI-spectra and Non-Gaussianity Operator

The BI-spectra and Non-Gaussianity Operator (BINGO) code, written in Fortran, computes the scalar bi-spectrum and the non-Gaussianity parameter fNL in single field inflationary models involving the canonical scalar field. BINGO can calculate all the different contributions to the bi-spectrum and the parameter fNL for an arbitrary triangular configuration of the wavevectors.

[ascl:1805.015] BinMag: Widget for comparing stellar observed with theoretical spectra

BinMag examines theoretical stellar spectra computed with the Synth/SynthMag/Synmast/Synth3/SME spectrum synthesis codes and compares them to observations. An IDL widget program, BinMag applies radial velocity shifts and broadening to the theoretical spectra to account for the effects of stellar rotation, radial-tangential macroturbulence, and instrumental smearing. The code can also simulate spectra of spectroscopic binary stars by appropriate coaddition of two synthetic spectra. Additionally, BinMag can be used to measure equivalent widths, fit line profile shapes with analytical functions, and automatically determine radial velocity and broadening parameters. BinMag interfaces with the Synth3 (ascl:1212.010) and SME (ascl:1202.013) codes, allowing the user to determine chemical abundances and stellar atmospheric parameters from the observed spectra.

[ascl:1905.004] Binospec: Data reduction pipeline for the Binospec imaging spectrograph

Binospec reduces data for the Binospec imaging spectrograph. The software is also used for observation planning and instrument control, and is automated to decrease the number of tasks the user has to perform. Binospec uses a database-driven approach for instrument configuration and sequencing of observations to maximize efficiency, and a web-based interface is available for defining observations, monitoring status, and retrieving data products.

[ascl:1011.008] Binsim: Visualising Interacting Binaries in 3D

Binsim produces images of interacting binaries for any system parameters. Though not suitable for modeling light curves or spectra, the resulting images are helpful in visualizing the geometry of a given system and are also helpful in talks and educational work. The code uses the OpenGL API to do the 3D rendering. The software can produce images of cataclysmic variables and X-ray binaries, and can render the mass donor star, an axisymmetric disc (without superhumps, warps or spirals), the accretion stream and hotspot, and a "corona."

[ascl:1208.002] BINSYN: Simulating Spectra and Light Curves of Binary Systems with or without Accretion Disks

The BINSYN program suite is a collection of programs for analysis of binary star systems with or without an optically thick accretion disk. BINSYN produces synthetic spectra of individual binary star components plus a synthetic spectrum of the system. If the system includes an accretion disk, BINSYN also produces a separate synthetic spectrum of the disk face and rim. A system routine convolves the synthetic spectra with filter profiles of several photometric standards to produce absolute synthetic photometry output. The package generates synthetic light curves and determines an optimized solution for system parameters.

[ascl:2109.029] BiPoS1: Dynamical processing of the initial binary star population

BiPoS1 (Binary Population Synthesizer) efficiently calculates binary distribution functions after the dynamical processing of a realistic population of binary stars during the first few Myr in the hosting embedded star cluster. It is particularly useful for generating a realistic birth binary population as an input for N-body simulations of globular clusters. Instead of time-consuming N-body simulations, BiPoS1 uses the stellar dynamical operator, which determines the fraction of surviving binaries depending on the binding energy of the binaries. The stellar dynamical operator depends on the initial star cluster density, as well as the time until the residual gas of the star cluster is expelled. At the time of gas expulsion, the dynamical processing of the binary population is assumed to effectively end due to the expansion of the star cluster related to that event. BiPoS1 also has a Galactic-field mode for synthesizing the stellar population of a whole galaxy.

[ascl:1512.008] Bisous model: Detecting filamentary pattern in point processes

The Bisous model is a marked point process that models multi-dimensional patterns. The Bisous filament finder works directly with galaxy distribution data and the model intrinsically takes into account the connectivity of the filamentary network. The Bisous model generates the visit map (the probability to find a filament at a given point) together with the filament orientation field; these two fields are used to extract filament spines from the data.

[ascl:1712.004] Bitshuffle: Filter for improving compression of typed binary data

Bitshuffle rearranges typed, binary data for improving compression; the algorithm is implemented in a python/C package within the Numpy framework. The library can be used alongside HDF5 to compress and decompress datasets and is integrated through the dynamically loaded filters framework. Algorithmically, Bitshuffle is closely related to HDF5's Shuffle filter except it operates at the bit level instead of the byte level. Arranging a typed data array into a matrix with the elements as the rows and the bits within the elements as the columns, Bitshuffle "transposes" the matrix, such that all the least-significant bits are in one row, the next bits in another, and so on. This transposition is performed within blocks of data roughly 8kB long; this does not in itself compress data, but rearranges it for more efficient compression. A compression library is necessary to perform the actual compression. This scheme has been used for compression of radio data in high performance computing.
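
The bit-level transposition is easy to illustrate with plain NumPy; the following is a conceptual sketch of the rearrangement (which Bitshuffle performs in optimized C within ~8 kB blocks), not the library's API:

    import numpy as np

    # Conceptual sketch of Bitshuffle's rearrangement: view each element's
    # bits as a matrix row, then transpose so that bit 0 of every element
    # is contiguous, then bit 1, and so on.
    data = np.array([3, 5, 255, 0], dtype=np.uint8)   # 4 one-byte elements
    bits = np.unpackbits(data).reshape(len(data), 8)  # rows = elements, cols = bits
    shuffled = np.packbits(bits.T)                    # transpose and repack
    # 'shuffled' holds the same information, rearranged so that a compressor
    # sees long runs of identical bits; transposing back recovers the original:
    # np.unpackbits(shuffled).reshape(8, len(data)).T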

[ascl:1411.027] BKGE: Fermi-LAT Background Estimator

The Fermi-LAT Background Estimator (BKGE) is a publicly available open-source tool that can estimate the expected background of the Fermi-LAT for any observational configuration and duration. It produces results in the form of text files, ROOT files, gtlike source-model files (for LAT maximum likelihood analyses), and PHA I/II FITS files (for RMFit/XSpec spectral fitting analyses). Its core is written in C++ and its user interface in Python.

[ascl:2105.011] BlackBOX: BlackGEM and MeerLICHT image reduction software

BlackBOX performs standard CCD image reduction tasks on multiple images from the BlackGEM and MeerLICHT telescopes. It uses the satdet module of ASCtools (ascl:2011.024) and Astro-SCRAPPY (ascl:1907.032). BlackBOX simultaneously uses multi-processing and multi-threading and feeds the reduced images to ZOGY (ascl:2105.010) to ultimately perform optimal image subtraction and detect transient sources.

[ascl:2012.020] BlackHawk: Black hole evaporation calculator

BlackHawk calculates the Hawking evaporation spectra of any black hole distribution. Written in C, the program enables users to compute the primary and secondary spectra of stable or long-lived particles generated by Hawking radiation of the distribution of black holes, and to study their evolution in time.

[ascl:2211.010] BlackJAX: Library of samplers for JAX

BlackJAX is a sampling library designed for ease of use, speed, and modularity and works on CPU as well as GPU. It is not a probabilistic programming library (PPL), though it integrates well with PPLs as long as they can provide a (potentially unnormalized) log-probability density function compatible with JAX. BlackJAX is written in pure Python and depends on XLA via JAX (ascl:2111.002). It can be used by those who have a log-probability density function and need a sampler, or who need more than a general-purpose sampler. It is also useful for sampling on GPU and for users who want to learn how sampling algorithms work.
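
As a rough sketch of the intended workflow (assuming a recent blackjax release; the API has evolved between versions), a user supplies a JAX-compatible log-density and drives the sampler's init/step functions directly:

    import jax
    import jax.numpy as jnp
    import blackjax

    # A JAX-compatible (unnormalized) log-probability density: standard normal.
    logdensity = lambda x: -0.5 * jnp.sum(x ** 2)

    # Build a NUTS kernel; step size and mass matrix would normally come
    # from an adaptation phase such as blackjax.window_adaptation.
    nuts = blackjax.nuts(logdensity, step_size=0.1,
                         inverse_mass_matrix=jnp.ones(2))
    state = nuts.init(jnp.zeros(2))

    rng_key = jax.random.PRNGKey(0)
    for _ in range(100):
        rng_key, step_key = jax.random.split(rng_key)
        state, info = nuts.step(step_key, state)  # one NUTS transition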

[ascl:2210.014] Blacklight: GR ray tracing code for post-processing Athena++ simulations

Blacklight postprocesses general-relativistic magnetohydrodynamic simulation data and produces outputs for analyzing data sets, including maps of auxiliary quantities and false-color renderings. The code can use Athena++ (ascl:1912.005) outputs directly, and also supports files in HARM (ascl:1209.005) and iHARM3d (ascl:2210.013) format. Written in C++, Blacklight offers support for adaptive mesh refinement input, slow-light calculations, and adaptive ray tracing.

[ascl:2405.022] blackthorn: Spectra from right-handed neutrino decays

blackthorn generates spectra of dark matter annihilations into right-handed (RH) neutrinos or into particles that result from their decay. These spectra include photons, positrons, and neutrinos. The code provides support for varied RH-neutrino masses ranging from MeV to TeV by incorporating hazma, PPPC4DMID, and HDMSpectra models to compute dark matter annihilation cross sections and mediator decay widths. blackthorn also computes decay branching fractions and partial decay widths.

[ascl:2208.001] BlaST: Synchrotron peak estimator for blazars

BlaST (Blazar Synchrotron Tool) estimates the synchrotron peak of blazars given their spectral energy distribution. It uses a machine-learning algorithm that simplifies the estimation and also provides a reliable uncertainty estimation. The package naturally accounts for additional SED components from the host galaxy and the disk emission. BlaST also supports bulk estimation, e.g. estimating a whole catalog, by providing a directory or zip file containing the SEDs as well as an output file in which to write the results.

[ascl:1906.002] Blimpy: Breakthrough Listen I/O Methods for Python

Blimpy (Breakthrough Listen I/O Methods for Python) provides utilities for viewing and interacting with the data formats used within the Breakthrough Listen program, including Sigproc filterbank (.fil) and HDF5 (.h5) files that contain dynamic spectra (aka 'waterfalls'), and guppi raw (.raw) files that contain voltage-level data. Blimpy can also extract, calibrate, and visualize data; a suite of command-line utilities is also available.
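
A minimal usage sketch, assuming blimpy's documented Waterfall interface (the file name is hypothetical):

    from blimpy import Waterfall

    # Load a (hypothetical) Sigproc filterbank file containing a dynamic spectrum.
    wf = Waterfall('observation.fil')
    wf.info()                         # print header metadata (fch1, tsamp, nchans, ...)
    freqs, spectrum = wf.grab_data()  # frequencies and data as numpy arrays
    wf.plot_spectrum()                # quick-look integrated spectrum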

[ascl:2303.005] Blobby3D: Bayesian inference for gas kinematics

Blobby3D performs Bayesian inference for gas kinematics on emission line observations of galaxies using Integral Field Spectroscopy. The code robustly infers gas kinematics for regularly rotating galaxies even if the gas profiles have significant substructure. Blobby3D also infers gas kinematic properties free from the effects of beam smearing (where beam smearing is the effect of the observational seeing spatially blurring the gas profiles), which has significant effects on the observed gas kinematic properties, particularly the observed velocity dispersion.

[ascl:1208.009] BLOBCAT: Software to Catalog Blobs

BLOBCAT is source extraction software that utilizes the flood fill algorithm to detect and catalog blobs, or islands of pixels representing sources, in 2D astronomical images. The software is designed to process radio-wavelength images of both Stokes I intensity and linear polarization, the latter formed through the quadrature sum of Stokes Q and U intensities or as a by-product of rotation measure synthesis. BLOBCAT corrects for two systematic biases to enable the flood fill algorithm to accurately measure flux densities for Gaussian sources. BLOBCAT exhibits accurate measurement performance in total intensity and, in particular, linear polarization, and is particularly suited to the analysis of large survey data.

[ascl:9909.005] BLOCK: A Bayesian block method to analyze structure in photon counting data

Bayesian Blocks is a time-domain algorithm for detecting localized structures (bursts), revealing pulse shapes, and generally characterizing intensity variations. The input is raw counting data, in any of three forms: time-tagged photon events, binned counts, or time-to-spill data. The output is the most probable segmentation of the observation into time intervals during which the photon arrival rate is perceptibly constant, i.e. has no statistically significant variations. The idea is not that the source is deemed to have this discontinuous, piecewise constant form, rather that such an approximate and generic model is often useful. The analysis is based on Bayesian statistics.

This code is obsolete and yields approximate results; see Bayesian Blocks (ascl:1209.001) instead for an algorithm guaranteeing exact global optimization.

[ascl:2201.003] BLOSMapping: Determine line-of-sight magnetic fields of molecular clouds

BLOSMapping determines the line-of-sight component of magnetic fields associated with molecular clouds. The code uses Faraday rotation measure catalogs along with an on-off approach based on relative measurements to estimate the rotation measure caused by molecular clouds. It then uses the outputs from a chemical evolution code along with extinction maps to determine the line-of-sight magnetic field strength and direction.

[ascl:1607.008] BLS: Box-fitting Least Squares

BLS (Box-fitting Least Squares) is a box-fitting algorithm that analyzes stellar photometric time series to search for periodic transits of extrasolar planets. It searches for signals characterized by a periodic alternation between two discrete levels, with much less time spent at the lower level.
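
The algorithm is widely reimplemented; as an illustration of a box-fitting period search, here is a sketch using astropy's BoxLeastSquares implementation of the same algorithm rather than the original Fortran code:

    import numpy as np
    from astropy.timeseries import BoxLeastSquares

    rng = np.random.default_rng(1)
    t = np.linspace(0, 20, 2000)                     # days
    flux = 1.0 + 1e-4 * rng.standard_normal(t.size)
    flux[(t % 3.0) < 0.1] -= 0.01                    # inject a 3-day box transit

    bls = BoxLeastSquares(t, flux)
    periodogram = bls.autopower(0.1)                 # search with 0.1-day duration
    best = periodogram.period[np.argmax(periodogram.power)]
    print(f"best period: {best:.3f} d")              # should recover ~3 d (or a harmonic)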

[submitted] BMarXiv

BMarXiv scans new (i.e., since the last time checked) submissions from arXiv, ranks submissions based on keyword matches, and produces an HTML page as output.

Keywords are searched for (with regex capabilities) in the title and abstract, but also in the author list, so it is possible to look for people too. A score is calculated for each entry, and additional (optional) scoring is performed using the first author's recent submissions and/or the other authors' recent submissions.

It is possible to include/exclude any arXiv categories (within astro-ph or not). New astronomical conferences (from CADC by default) and new codes (from ASCL.net) are also retrieved and can likewise be scanned for keywords.

A local bibliography file can be scanned to find frequent words/groups of words that could become scanned keywords.

[ascl:1709.009] bmcmc: MCMC package for Bayesian data analysis

bmcmc is a general purpose Markov Chain Monte Carlo package for Bayesian data analysis. It uses an adaptive scheme for automatic tuning of proposal distributions. It can also handle Bayesian hierarchical models by making use of the Metropolis-Within-Gibbs scheme.
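
The adaptive-proposal idea can be sketched in a few lines of NumPy; this is a concept illustration of the general scheme, not bmcmc's actual interface:

    import numpy as np

    def adaptive_metropolis(logp, x0, nsteps=5000, target=0.234):
        """Random-walk Metropolis with on-the-fly proposal-scale tuning."""
        x, scale, chain = x0, 1.0, []
        lp = logp(x)
        for i in range(1, nsteps + 1):
            prop = x + scale * np.random.standard_normal()
            lp_prop = logp(prop)
            accept = np.log(np.random.rand()) < lp_prop - lp
            if accept:
                x, lp = prop, lp_prop
            # nudge the proposal scale toward the target acceptance rate
            scale *= np.exp((accept - target) / np.sqrt(i))
            chain.append(x)
        return np.array(chain)

    samples = adaptive_metropolis(lambda x: -0.5 * x ** 2, x0=0.0)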

[ascl:1801.008] BOND: Bayesian Oxygen and Nitrogen abundance Determinations

BOND determines oxygen and nitrogen abundances in giant H II regions by comparison with a large grid of photoionization models. The grid spans a wide range in O/H, N/O and ionization parameter U, and covers different starburst ages and nebular geometries. Unlike other statistical methods, BOND relies on the [Ar III]/[Ne III] emission line ratio to break the oxygen abundance bimodality. By doing so, it can measure oxygen and nitrogen abundances without assuming any a priori relation between N/O and O/H. BOND takes into account changes in the hardness of the ionizing radiation field, which can come about due to the ageing of H II regions or the stochastic sampling of the IMF. The emission line ratio He I/Hβ, in addition to commonly used strong lines, constrains the hardness of the ionizing radiation field. BOND relies on the emission line ratios [O III]/Hβ, [O II]/Hβ and [N II]/Hβ, [Ar III]/Hβ, [Ne III]/Hβ, He I/Hβ as its input parameters, while its output values are the measurements and uncertainties for O/H and N/O.

[ascl:1212.001] Bonsai: N-body GPU tree-code

Bonsai is a gravitational N-body tree-code that runs completely on the GPU. This reduces the amount of time spent on communication with the CPU. The code runs on NVIDIA GPUs and on a GTX480 it is able to integrate ~2.8M particles per second. The tree construction and traverse algorithms are portable to many-core devices which have support for CUDA or OpenCL programming languages.

[ascl:2203.029] Bootsik: Potential field calculator

The Bootsik software generates and visualizes potential magnetic fields. bootsik.f90 generates a potential magnetic field on a 3D mesh, staggered relative to the magnetic potential, by extrapolating the magnetic field normal to the photospheric surface. The code first calculates a magnetic potential using a modified Green’s function method and then uses a finite differencing scheme to calculate the magnetic field from the potential. The IDL script boobox.pro can then be used to visualize the magnetic field.

[ascl:1210.030] BOOTTRAN: Error Bars for Keplerian Orbital Parameters

BOOTTRAN calculates error bars for Keplerian orbital parameters for both single- and multiple-planet systems. It takes the best-fit parameters and radial velocity data (BJD, velocity, errors) and calculates the error bars from a sampling distribution estimated via bootstrapping. It is recommended to be used together with the RVLIN (ascl:1210.031) package, which finds best-fit Keplerian orbital parameters. Both RVLIN and BOOTTRAN are compatible with multiple-telescope data. BOOTTRAN also calculates the transit time and secondary eclipse time and their associated error bars. The algorithm is described in the appendix of the associated article.

[ascl:1108.019] BOREAS: Mass Loss Rate of a Cool, Late-type Star

The basic mechanisms responsible for producing winds from cool, late-type stars are still largely unknown. We take inspiration from recent progress in understanding solar wind acceleration to develop a physically motivated model of the time-steady mass loss rates of cool main-sequence stars and evolved giants. This model follows the energy flux of magnetohydrodynamic turbulence from a subsurface convection zone to its eventual dissipation and escape through open magnetic flux tubes. We show how Alfven waves and turbulence can produce winds in either a hot corona or a cool extended chromosphere, and we specify the conditions that determine whether or not coronal heating occurs. These models do not utilize arbitrary normalization factors, but instead predict the mass loss rate directly from a star's fundamental properties. We take account of stellar magnetic activity by extending standard age-activity-rotation indicators to include the evolution of the filling factor of strong photospheric magnetic fields. We compared the predicted mass loss rates with observed values for 47 stars and found significantly better agreement than was obtained from the popular scaling laws of Reimers, Schroeder, and Cuntz. The algorithm used to compute cool-star mass loss rates is provided as a self-contained and efficient IDL computer code. We anticipate that the results from this kind of model can be incorporated straightforwardly into stellar evolution calculations and population synthesis techniques.

[ascl:2210.023] BornRaytrace: Weak gravitational lensing effects simulator

BornRaytrace uses neural data compression of weak lensing map summary statistics to simulate weak gravitational lensing effects. It can raytrace through overdensity Healpix maps to return a convergence map, include shear-kappa transformation on the full sphere, and also include intrinsic alignments (NLA model).
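
In the Born approximation, the convergence is a weighted line-of-sight sum over the density contrast. A schematic NumPy version of that sum over HEALPix shells (all array names and defaults are hypothetical) might look like:

    import numpy as np

    def born_convergence(delta_shells, chi, a, chi_s, H0=70.0, Om=0.3):
        """Born-approximation convergence from overdensity shells.

        delta_shells : (nshell, npix) HEALPix maps of the density contrast
        chi, a       : comoving distance [Mpc] and scale factor of each shell
        chi_s        : comoving distance to the sources [Mpc]
        """
        c = 299792.458  # speed of light [km/s]
        prefac = 1.5 * (H0 / c) ** 2 * Om
        dchi = np.gradient(chi)
        # lensing efficiency weight per shell
        weight = prefac * dchi * chi * (chi_s - chi) / chi_s / a
        return np.sum(weight[:, None] * delta_shells, axis=0)  # (npix,) kappa map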

[ascl:2307.015] BOWIE: Gravitational wave binary signal analysis

BOWIE (Binary Observability With Illustrative Exploration) performs graphical analysis of binary signals from gravitational waves. It takes gridded data sets and produces different types of plots in customized arrangements for detailed analysis of gravitational wave sensitivity curves and/or binary signals. BOWIE offers three main tools: a gridded data generator, a plotting tool, and a waveform generator for general use. The waveform generator creates PhenomD waveforms for binary black hole inspiral, merger, and ringdown. Gridded data sets are created using the PhenomD generator for signal-to-noise (SNR) analysis. Using the gridded data sets, customized configurations of plots are created with the plotting package.

[ascl:2306.059] BOXFIT: Gamma-ray burst afterglow light curve generator

BOXFIT calculates light curves and spectra for arbitrary observer times and frequencies and is capable of performing (broadband) data fits using the downhill simplex method combined with simulated annealing. The flux value for a given observer time and frequency is a function of various variables that set the explosion physics (energy of the explosion, circumburst number density and jet collimation angle), the radiative process (magnetic field generation efficiency, electron shock-acceleration efficiency and synchrotron power slope for the electron energy distribution) and observer position (distance, redshift and angle). The code can be run both in parallel and on a single core. Because a data fit takes many iterations, this is best done in parallel. Single light curves and spectra can readily be done on a single core.

[ascl:1607.017] BoxRemap: Volume and local structure preserving mapping of periodic boxes

BoxRemap remaps the cubical domain of a cosmological simulation into simple non-cubical shapes. It can be used for on-the-fly remappings of the simulation geometry and is volume-preserving; remapped geometry has the same volume V = L^3 as the original simulation box. The remappings are structure-preserving (local neighboring structures are mapped to neighboring places) and one-to-one, with every particle/halo/galaxy/etc. appearing once and only once in the remapped volume.

[ascl:1108.011] BPZ: Bayesian Photometric Redshift Code

Photometric redshift estimation is becoming an increasingly important technique, although the currently existing methods present several shortcomings which hinder their application. Most of those drawbacks are efficiently eliminated when Bayesian probability is consistently applied to this problem. The use of prior probabilities and Bayesian marginalization allows the inclusion of valuable information, e.g. the redshift distributions or the galaxy type mix, which is often ignored by other methods. In those cases when the a priori information is insufficient, it is shown how to `calibrate' the prior distributions, using even the data under consideration. There is an excellent agreement between the 108 HDF spectroscopic redshifts and the predictions of the method, with a rms error Delta z/(1+z_spec) = 0.08 up to z<6 and no systematic biases nor outliers. The results obtained are more reliable than those of standard techniques even when the latter include near-IR colors. The Bayesian formalism developed here can be generalized to deal with a wide range of problems which make use of photometric redshifts, e.g. the estimation of individual galaxy characteristics as the metallicity, dust content, etc., or the study of galaxy evolution and the cosmological parameters from large multicolor surveys. Finally, using Bayesian probability it is possible to develop an integrated statistical method for cluster mass reconstruction which simultaneously considers the information provided by gravitational lensing and photometric redshifts.

[ascl:1806.025] BRATS: Broadband Radio Astronomy ToolS

BRATS (Broadband Radio Astronomy ToolS) provides tools for the spectral analysis of broad-bandwidth radio data and legacy support for narrowband telescopes. It can fit models of spectral ageing on small spatial scales, offers automatic selection of regions based on user parameters (e.g. signal to noise), and automatic determination of the best-fitting injection index. It includes statistical testing, including Chi-squared, error maps, confidence levels and binning of model fits, and can map spectral index as a function of position. It also provides the ability to reconstruct sources at any frequency for a given model and parameter set, subtract any two FITS images and output residual maps, easily combine and scale FITS images in the image plane, and resize radio maps.

[ascl:2305.009] breizorro: Image masking tool

Given a FITS image, breizorro creates a binary mask. The software allows the user to control various parameters and functions, such as setting a sigma threshold for masking, merging in or subtracting one or more masks or region files, filling holes, applying dilation within a defined radius of pixels, and inverting the mask.

[ascl:1412.005] BRUCE/KYLIE: Pulsating star spectra synthesizer

BRUCE and KYLIE, written in Fortran 77, synthesize the spectra of pulsating stars. BRUCE constructs a point-sampled model for the surface of a rotating, gravity-darkened star, and then subjects this model to perturbations arising from one or more non-radial pulsation modes. Departures from adiabaticity can be taken into account, as can the Coriolis force through adoption of the so-called traditional approximation. BRUCE writes out a time-sequence of perturbed surface models. This sequence is read in by KYLIE, which synthesizes disk-integrated spectra for the models by co-adding the specific intensity emanating from each visible point toward the observer. The specific intensity is calculated by interpolation in a large temperature-gravity-wavelength-angle grid of pre-calculated intensity spectra.

[ascl:1407.016] Brut: Automatic bubble classifier

Brut, written in Python, identifies bubbles in infrared images of the Galactic midplane; it uses a database of known bubbles from the Milky Way Project and Spitzer images to build an automatic bubble classifier. The classifier is based on the Random Forest algorithm, and uses the WiseRF implementation of this algorithm.

[ascl:1903.004] brutifus: Python module to post-process datacubes from integral field spectrographs

brutifus aids in post-processing datacubes from integral field spectrographs. The set of Python routines in the package handle generic tasks, such as the registration of a datacube WCS solution with the Gaia catalogue, the correction of Galactic reddening, or the subtraction of the nebular/stellar continuum on a spaxel-per-spaxel basis, with as little user interaction as possible. brutifus is modular, in that the order in which the post-processing routines are run is entirely customizable.

[submitted] BSAVI: Bayesian Sample Visualizer for Cosmological Likelihoods

BSAVI (Bayesian Sample Visualizer) is a tool to aid likelihood analysis of model parameters where samples from a distribution in the parameter space are used as inputs to calculate a given observable. For example, selecting a range of samples will allow you to easily see how the observables change as you traverse the sample distribution. At the core of BSAVI is the Observable object, which contains the data for a given observable and instructions for plotting it. It is modular, so you can write your own function that takes the parameter values as inputs, and BSAVI will use it to compute observables on the fly. It also accepts tabular data, so if you have pre-computed observables, simply import them alongside the dataset containing the sample distribution to start visualizing.

[ascl:1303.014] BSE: Binary Star Evolution

BSE is a rapid binary star evolution code. It can model circularization of eccentric orbits and synchronization of stellar rotation with the orbital motion owing to tidal interaction in detail. Angular momentum loss mechanisms, such as gravitational radiation and magnetic braking, are also modelled. Wind accretion, where the secondary may accrete some of the material lost from the primary in a wind, is allowed with the necessary adjustments made to the orbital parameters in the event of any mass variations. Mass transfer occurs if either star fills its Roche lobe and may proceed on a nuclear, thermal or dynamical time-scale. In the latter regime, the radius of the primary increases in response to mass-loss at a faster rate than the Roche-lobe of the star. Prescriptions to determine the type and rate of mass transfer, the response of the secondary to accretion and the outcome of any merger events are in place in BSE.

[ascl:9904.001] BSGMODEL: The Bahcall-Soneira Galaxy Model

BSGMODEL is used to construct the disk and spheroid components of the Galaxy, from which the distribution of visible stars and mass in the Galaxy is calculated. The computer files accessible here are available for export use. The modifications are described in comment lines in the software. The Galaxy model software has been installed and used by different people for a large variety of purposes (see, e.g., the review "Star Counts and Galactic Structure," Ann. Rev. Astron. Ap. 24, 577, 1986).

[ascl:2309.015] bskit: Bispectra from cosmological simulation snapshots

bskit, built upon the nbodykit (ascl:1904.027) simulation analysis package, measures density bispectra from snapshots of cosmological N-body or hydrodynamical simulations. It can measure auto or cross bispectra in a user-specified set of triangle bins (that is, triplets of 3-vector wavenumbers). Several common sets of bins are also implemented, including all triangle bins for specified k_min and k_max, equilateral triangles between specified k_min and k_max, isosceles triangles, and squeezed isosceles triangles.

[ascl:2001.007] BTS: Behind The Spectrum

Behind The Spectrum (BTS) is a fully-automated multiple-component fitter for optically-thin spectra. Written as a Python module, the routine uses the first, second, and third derivatives to determine the number of components in the spectrum. A least-squares fitting routine then determines the best fit with that number of components, checking for over-fitting and overlapping velocity centroids.
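
A toy version of the derivative test (a concept sketch only, not the BTS interface): peaks appear where the first derivative of the smoothed spectrum changes sign while the second derivative is negative:

    import numpy as np

    def count_components(velocity, intensity, noise):
        """Estimate the number of spectral components from derivatives."""
        d1 = np.gradient(intensity, velocity)
        d2 = np.gradient(d1, velocity)
        # sign change of the first derivative (+ to -) with downward curvature
        peaks = (d1[:-1] > 0) & (d1[1:] <= 0) & (d2[:-1] < 0)
        # keep only peaks that stand well above the noise
        significant = peaks & (intensity[:-1] > 5 * noise)
        return int(np.sum(significant))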

[ascl:2403.004] BTSbot: Automated identification of supernovae with multi-modal deep learning

BTSbot automates real-time identification of bright extragalactic transients in Zwicky Transient Facility (ZTF) data. A multi-modal convolutional neural network, BTSbot provides a bright transient score to individual ZTF detections using their image data and 25 extracted features. The package eliminates the need for daily visual inspection of new transients by automatically identifying and requesting spectroscopic follow-up observations of new bright transient candidates. BTSbot recovers all bright transients in our test split and performs on par with human experts in terms of identification speed (on average, ∼1 hour quicker than scanners).

[ascl:1204.003] BUDDA: BUlge/Disk Decomposition Analysis

Budda is a Fortran code developed to perform a detailed structural analysis on galaxy images. It is simple to use and gives reliable estimates of the galaxy structural parameters, which can be used, for instance, in Fundamental Plane studies. Moreover, it has a powerful ability to reveal hidden sub-structures, like inner disks, secondary bars and nuclear rings.

[ascl:2312.003] BUQO: Bayesian Uncertainty Quantification by Optimization

BUQO solves large-scale imaging inverse problems. It leverages probability concentration phenomena and the underlying convex geometry to formulate the Bayesian hypothesis test as a convex problem that is then efficiently solved by using scalable optimization algorithms. This allows scaling to high-resolution and high-sensitivity imaging problems that are computationally unaffordable for other Bayesian computation approaches.

[ascl:2212.024] Burning Arrow: Black hole massive particles orbit degradation

Burning Arrow determines the destabilization of massive particle circular orbits due to thermal radiation, emitted in X-rays, from the hot accretion disk material. This code requires the radiation forces exerted on the material at the point of interest, found by running the code Infinity (ascl:2212.021). Burning Arrow begins by assuming a target particle in the disk that moves in a circular orbit. It then introduces the radiation forces recorded by the Infinity code for the target region. These forces are subsequently introduced into the target particle's equations of motion and the trajectory is recalculated. Burning Arrow then produces images of the black hole - accretion disk system that include the degraded particle trajectories obeying the assorted velocity profiles.

[ascl:1610.010] BurnMan: Lower mantle mineral physics toolkit

BurnMan determines seismic velocities for the lower mantle. Written in Python, BurnMan calculates the isotropic thermoelastic moduli by solving the equations of state for a mixture of minerals defined by the user. The user may select from an included list of minerals applicable to the lower mantle or define their own. BurnMan provides choices in methodology, both for the EoS and for the multiphase averaging scheme, and the results can be visually or quantitatively compared to observed seismic models.

[ascl:2306.030] Butterpy: Stellar butterfly diagram and rotational light curve simulator

Butterpy simulates star spot emergence, evolution, decay, and stellar rotational light curves. It tests the recovery of stellar rotation periods using different frequency analysis techniques. Butterpy can simulate light curves of stars with variable activity level, rotation period, spot lifetime, magnetic cycle duration and overlap, spot emergence latitudes, and latitudinal differential rotation shear.

[ascl:1806.026] BWED: Brane-world extra dimensions

Braneworld-extra-dimensions places constraints on the size of the AdS5 radius of curvature within the Randall-Sundrum brane-world model in light of the near-simultaneous detection of the gravitational wave event GW170817 and its optical counterpart, the short γ-ray burst event GRB170817A. The code requires a (supplied) patch to the Montepython cosmological MCMC sampler (ascl:1805.027) to sample the posterior distribution of the 4-dimensional parameter space in VBV17 and obtain constraints on the parameters.

[ascl:1610.011] BXA: Bayesian X-ray Analysis

BXA connects the nested sampling algorithm MultiNest (ascl:1109.006) to the X-ray spectral analysis environments Xspec (ascl:9910.005) and Sherpa (ascl:1107.005) for Bayesian parameter estimation and model comparison. It provides parameter estimation in arbitrary dimensions and plotting of spectral model vs. the data for best fit, posterior samples, or each component. BXA allows for model selection; it computes the evidence for the considered model, ready for use in computing Bayes factors and is not limited to nested models. It also visualizes deviations between model and data with Quantile-Quantile (QQ) plots, which do not require binning and are more comprehensive than residuals.

[ascl:1211.005] C-m Emu: Concentration-mass relation emulator

The concentration-mass relation for dark matter-dominated halos is one of the essential results expected from a theory of structure formation. C-m Emu is a simple numerical code that emulates the c-M relation as a function of cosmological parameters for wCDM models; it generates the best-fit power-law model for each redshift separately and then interpolates between redshifts. This produces a more accurate answer at each redshift at the minimal cost of running a fast code for every c-M prediction instead of using one fitting formula. The emulator is constructed from 37 individual models, with three nested N-body gravity-only simulations carried out for each model. The mass range covered by the emulator is 2 x 10^12 M_sun < M < 10^15 M_sun, with a corresponding redshift range of z = 0-1. Over this range of mass and redshift, as well as the variation of cosmological parameters studied, the mean halo concentration varies from c ~ 2 to c ~ 8. The distribution of the concentration at fixed mass is Gaussian with a standard deviation of one-third of the mean value, almost independent of cosmology, mass, and redshift over the ranges probed by the simulations.
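
Schematically, the emulation amounts to evaluating a per-redshift power law and interpolating its coefficients between redshifts; a sketch of that functional form, with hypothetical coefficient tables:

    import numpy as np

    def concentration(M, z, z_grid, A_grid, B_grid, M_pivot=1e13):
        """c(M, z) from per-redshift power-law fits c = A * (M / M_pivot)**B.

        z_grid, A_grid, B_grid stand in for the emulator's tabulated
        best-fit coefficients at each simulation redshift (hypothetical here).
        """
        A = np.interp(z, z_grid, A_grid)  # interpolate coefficients in redshift
        B = np.interp(z, z_grid, B_grid)
        return A * (M / M_pivot) ** B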

[ascl:2312.022] C2-Ray: Time-dependent photo-ionization calculations

C2-Ray calculates spherically symmetric time-dependent photo-ionization in 1D with the source at the origin, for hydrogen only. The code is explicitly photon-conserving and uses an analytical relaxation solution for the ionization rate equations for each time step, thus enabling integration of the equation of transfer along a ray with fewer cells and time steps than previous methods. It is suitable for coupling radiative transfer to gas and N-body dynamics methods on fixed or adaptive grids. C2-Ray is not parallelized but contains an MPI module for compatibility with the 3D version (C2-Ray3Dm).

[ascl:2312.023] C2-Ray3Dm: 3D version of C2-Ray for multiple sources, hydrogen only

C2-Ray3Dm performs time-dependent photo-ionization calculations for 3D multiple sources, and for hydrogen only. Based on C2-Ray (ascl:2312.022), it runs under both MPI and OpenMP. The length of subroutines has been reduced to make the code more manageable and easier to read.

[ascl:2312.024] C2-Ray3Dm1D_Helium: Hydrogen + helium version of C2-Ray

C2-Ray3Dm1D_Helium is the hydrogen + helium version of the radiative transfer photo-ionization code C2-Ray. It combines the 1D and 3D versions of the code.

[ascl:1610.006] C3: Command-line Catalogue Crossmatch for modern astronomical surveys

The Command-line Catalogue Cross-matching (C3) software efficiently performs the positional cross-match between massive catalogues from modern astronomical surveys, whose sizes have rapidly increased in the current data-driven science era. Based on a multi-core parallel processing paradigm, it is executed as a stand-alone command-line process or integrated within any generic data reduction/analysis pipeline. C3 provides its users with flexibility in portability, parameter configuration, catalogue formats, angular resolution, region shapes, coordinate units and cross-matching types.

[ascl:1102.013] Cactus: HPC infrastructure and programming tools

Cactus provides computational scientists and engineers with a collaborative, modular and portable programming environment for parallel high performance computing. Cactus can make use of many other technologies for HPC, such as Samrai, HDF5, PETSc and PAPI, and several application domains such as numerical relativity, computational fluid dynamics and quantum gravity are developing open community toolkits for Cactus.

[ascl:2306.037] CADET: X-ray cavity detection tool

The machine learning pipeline CADET (CAvity DEtection Tool) finds and estimates the sizes of arbitrary surface brightness depressions (X-ray cavities) in noisy Chandra images of galaxies. The pipeline is a self-standing Python script and accepts either raw Chandra images in units of counts (numbers of captured photons) or normalized background-subtracted and/or exposure-corrected images. CADET saves pixel-wise as well as decomposed cavity predictions in FITS format, preserving the WCS coordinates; it also outputs a PNG file showing decomposed predictions for individual scales.

[ascl:1303.017] CADRE: CArma Data REduction pipeline

CADRE, the Combined Array for Millimeter-wave Astronomy (CARMA) data reduction pipeline, gives investigators a first look at a fully reduced set of their data. It runs automatically on all data produced by the telescope as they arrive in the data archive. The pipeline is written in python and uses python wrappers for MIRIAD subroutines for direct access to the data. It applies passband, gain and flux calibration to the data sets and produces a set of continuum and spectral line maps in both MIRIAD and FITS format.

[ascl:2108.009] caesar-rest: Web service for the caesar source extractor

caesar-rest is a REST-ful web service for astronomical source extraction and classification with the caesar source extractor [ascl:1807.015]. The software is developed in python and consists of containerized microservices, deployable on standalone servers or on a distributed cloud infrastructure. The core component is the REST web application, based on the Flask framework and providing APIs for managing the input data (e.g. data upload/download/removal) and source finding jobs (e.g. submit, get status, get outputs) with different job management systems (Kubernetes, Slurm, Celery). Additional services (AAI, user DB, log storage, job monitor, accounting) enable the user authentication, the storage and retrieval of user data and job information, the monitoring of submitted jobs, and the aggregation of service logs and user data/job stats.

[ascl:1807.015] CAESAR: Compact And Extended Source Automated Recognition

CAESAR extracts and parameterizes both compact and extended sources from astronomical radio interferometric maps. The processing pipeline is a series of stages that can run on multiple cores and processors. After local background and rms map computation, compact sources are extracted with flood-fill and blob finder algorithms, processed (selection + deblending), and fitted using a 2D gaussian mixture model. Extended source search is based on a pre-filtering stage, allowing image denoising, compact source removal and enhancement of diffuse emission, followed by a final segmentation. Different algorithms are available for image filtering and segmentation. The outputs delivered to the user include source fitted and shape parameters, regions and contours. Written in C++, CAESAR is designed to handle the large-scale surveys planned with the Square Kilometer Array (SKA) and its precursors.

[ascl:1505.001] CALCEPH: Planetary ephemeris files access code

CALCEPH accesses binary planetary ephemeris files, including INPOPxx, JPL DExxx, and SPICE ephemeris files. It provides a C Application Programming Interface (API) and, optionally, a Fortran 77 or 2003 interface to be called by the application. Two groups of functions enable access to the ephemeris files: single-file access functions, provided to ease the transition to this library from JPL functions such as PLEPH, and functions for accessing many ephemeris files at the same time. Although computers have different endianness (the order in which integers are stored as bytes in computer memory), CALCEPH can handle binary ephemeris files of any endianness by automatically swapping the bytes when it performs read operations on the ephemeris file.

[ascl:1210.010] CALCLENS: Curved-sky grAvitational Lensing for Cosmological Light conE simulatioNS

CALCLENS, written in C and employing widely available software libraries, efficiently computes weak gravitational lensing shear signals from large N-body light cone simulations over a curved sky. The algorithm properly accounts for the sky curvature and boundary conditions, is able to produce redshift-dependent shear signals including corrections to the Born approximation by using multiple-plane ray tracing, and properly computes the lensed images of source galaxies in the light cone. The key feature of this algorithm is a new, computationally efficient Poisson solver for the sphere that combines spherical harmonic transform and multigrid methods. As a result, large areas of sky (~10,000 square degrees) can be ray traced efficiently at high-resolution using only a few hundred cores on widely available machines. Coupled with realistic galaxy populations placed in large N-body light cone simulations, CALCLENS is ideally suited for the construction of synthetic weak lensing shear catalogs to be used to test for systematic effects in data analysis procedures for upcoming large-area sky surveys.

[ascl:2106.035] CalPriorSNIa: Effective calibration prior on the absolute magnitude of Type Ia supernovae

CalPriorSNIa quickly computes the effective calibration prior on the absolute magnitude MB of Type Ia supernovae that corresponds to a given determination of H0.
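
One common form of this conversion uses the intercept of the SN Ia magnitude-redshift relation, M_B = 5 log10(H0) - 5 a_B - 25; a minimal sketch with illustrative numbers (not necessarily the values or interface CalPriorSNIa uses):

    import numpy as np

    def MB_prior(H0, sigma_H0, aB=0.71273, sigma_aB=0.00176):
        """Effective Gaussian prior on M_B from an H0 determination.

        Uses M_B = 5 log10(H0) - 5 aB - 25, where aB is the intercept of
        the SN Ia magnitude-redshift relation (values here illustrative).
        H0 in km/s/Mpc.
        """
        MB = 5 * np.log10(H0) - 5 * aB - 25
        sigma_MB = np.sqrt((5 / (H0 * np.log(10))) ** 2 * sigma_H0 ** 2
                           + 25 * sigma_aB ** 2)
        return MB, sigma_MB

    print(MB_prior(73.2, 1.3))   # an H0 ~ 73 km/s/Mpc gives M_B ~ -19.24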

[ascl:2301.001] CALSAGOS: Select cluster members and search, find, and identify substructures

CALSAGOS (Clustering ALgorithmS Applied to Galaxies in Overdense Systems) selects cluster members and searches, finds, and identifies substructures and galaxy groups in and around galaxy clusters using the redshift and position in the sky of the galaxies. The package offers two ways to determine cluster members, ISOMER and CLUMBERI. The ISOMER (Identifier of SpectrOscopic MembERs) function selects the spectroscopic cluster members by defining cluster members as those galaxies with a peculiar velocity lower than the escape velocity of the cluster. The CLUMBERI (CLUster MemBER Identifier) function selects the cluster members using 3D Gaussian Mixture Models (GMM). Both functions remove the field interlopers by using a 3-sigma clipping algorithm. CALSAGOS uses the function LAGASU (LAbeller of GAlaxies within SUbstructures) to search, find, and identify substructures and groups in and around a galaxy cluster; this function is based on clustering algorithms (GMM and DBSCAN), which search areas with high density to define a substructure or groups.

[ascl:2207.015] calviacat: Calibrate star photometry by catalog comparison

calviacat calibrates star photometry by comparison to a catalog, including PanSTARRS 1, ATLAS-RefCat2, and SkyMapper catalogs. Catalog queries are cached so that subsequent calibrations of the same or similar fields can be more quickly executed.
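
The core calibration step is essentially a robust fit of a photometric zero-point (and optionally a color term) between instrumental and catalog magnitudes; a schematic version of that fit (not calviacat's API):

    import numpy as np

    def fit_zeropoint(m_inst, m_cat, color, clip=3.0):
        """Least-squares zero-point + color term, with one sigma-clip pass."""
        A = np.vstack([np.ones_like(color), color]).T
        dm = m_cat - m_inst
        (zp, c_term), *_ = np.linalg.lstsq(A, dm, rcond=None)
        resid = dm - A @ np.array([zp, c_term])
        keep = np.abs(resid) < clip * np.std(resid)   # reject outliers
        (zp, c_term), *_ = np.linalg.lstsq(A[keep], dm[keep], rcond=None)
        return zp, c_term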

[ascl:1105.013] CAMB Sources: Number Counts, Lensing & Dark-age 21cm Power Spectra

We relate the observable number of sources per solid angle and redshift to the underlying proper source density and velocity, background evolution and line-of-sight potentials. We give an exact result in the case of linearized perturbations assuming general relativity. This consistently includes contributions of the source density perturbations and redshift distortions, magnification, radial displacement, and various additional linear terms that are small on sub-horizon scales. In addition we calculate the effect on observed luminosities, and hence the result for sources observed as a function of flux, including magnification bias and radial-displacement effects. We give the corresponding linear result for a magnitude-limited survey at low redshift, and discuss the angular power spectrum of the total count distribution. We also calculate the cross-correlation with the CMB polarization and temperature including Doppler source terms, magnification, redshift distortions and other velocity effects for the sources, and discuss why the contribution of redshift distortions is generally small. Finally we relate the result for source number counts to that for the brightness of line radiation, for example 21-cm radiation, from the sources.

[ascl:1102.026] CAMB: Code for Anisotropies in the Microwave Background

We present a fully covariant and gauge-invariant calculation of the evolution of anisotropies in the cosmic microwave background (CMB) radiation. We use the physically appealing covariant approach to cosmological perturbations, which ensures that all variables are gauge-invariant and have a clear physical interpretation. We derive the complete set of frame-independent, linearised equations describing the (Boltzmann) evolution of anisotropy and inhomogeneity in an almost Friedmann-Robertson-Walker (FRW) cold dark matter (CDM) universe. These equations include the contributions of scalar, vector and tensor modes in a unified manner. Frame-independent equations for scalar and tensor perturbations, which are valid for any value of the background curvature, are obtained straightforwardly from the complete set of equations. We discuss the scalar equations in detail, including the integral solution and relation with the line of sight approach, analytic solutions in the early radiation dominated era, and the numerical solution in the standard CDM model. Our results confirm those obtained by other groups, who have worked carefully with non-covariant methods in specific gauges, but are derived here in a completely transparent fashion.

[ascl:1801.007] cambmag: Magnetic Fields in CAMB

cambmag is a modification to CAMB (ascl:1102.026) that calculates the compensated magnetic mode in the scalar, vector and tensor case. Previously CAMB included code only for the vectors. It also corrects for tight-coupling issues and adds in the ability to include massive neutrinos when calculating vector modes.

[ascl:1605.006] CAMELOT: Cloud Archive for MEtadata, Library and Online Toolkit

CAMELOT facilitates the comparison of observational data and simulations of molecular clouds and/or star-forming regions. The central component of CAMELOT is a database summarizing the properties of observational data and simulations in the literature through pertinent metadata. The core functionality allows users to upload metadata, search and visualize the contents of the database to find and match observations/simulations over any range of parameter space.

To bridge the fundamental disconnect between inherently 2D observational data and 3D simulations, the code uses key physical properties that, in principle, are straightforward for both observers and simulators to measure — the surface density (Sigma), velocity dispersion (sigma) and radius (R). By determining these in a self-consistent way for all entries in the database, it should be possible to make robust comparisons.

[ascl:1502.015] Camelus: Counts of Amplified Mass Elevations from Lensing with Ultrafast Simulations

Camelus provides a prediction on weak lensing peak counts from input cosmological parameters. Written in C, it samples halos from a mass function and assigns a profile, carries out ray-tracing simulations, and then counts peaks from ray-tracing maps. The creation of the ray-tracing simulations requires less computing time than N-body runs and the results are in good agreement with full N-body simulations.

[ascl:1505.030] CANDID: Companion Analysis and Non-Detection in Interferometric Data

CANDID finds faint companions around stars in interferometric data in the OIFITS format. It allows systematically searching for faint companions in OIFITS data, and if none is found, estimates the detection limit. The tool is based on model fitting and Chi2 minimization, with a grid for the starting points of the companion position. It ensures all positions are explored by estimating a posteriori whether the grid is dense enough, and provides an estimate of the optimum grid density.

[ascl:2406.004] candl: Differentiable likelihood framework for analyzing CMB power spectrum measurements

candl (CMB Analysis With A Differentiable Likelihood) analyzes CMB power spectrum measurements using a differentiable likelihood framework. It is compatible with JAX (ascl:2111.002), though JAX is optional, allowing for fast and easy computation of gradients and Hessians of the likelihoods, and candl provides interface tools for working with other cosmology software packages, including Cobaya (ascl:1910.019) and MontePython (ascl:1805.027). The package also provides auxiliary tools for common analysis tasks, such as generating mock data, and supports the analysis of primary CMB and lensing power spectrum data.
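
The differentiability is what JAX provides: given a log-likelihood written with jax.numpy, gradients and Hessians come essentially for free. A generic sketch with a hypothetical Gaussian likelihood (not candl's own interface):

    import jax
    import jax.numpy as jnp

    # Hypothetical Gaussian log-likelihood of band powers given a toy model.
    data = jnp.array([1.0, 2.0, 3.0])
    cov_inv = jnp.eye(3)

    def loglike(theta):
        model = theta[0] * jnp.arange(1.0, 4.0)  # toy one-parameter model
        r = data - model
        return -0.5 * r @ cov_inv @ r

    grad = jax.grad(loglike)(jnp.array([1.0]))     # dL/dtheta
    hess = jax.hessian(loglike)(jnp.array([1.0]))  # for Fisher-style forecasts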

[ascl:1106.017] CAOS: Code for Adaptive Optics Systems

The CAOS "system" (where CAOS stands for Code for Adaptive Optics Systems) is properly said a Problem Solving Environment (PSE). It is essentially composed of a graphical programming interface (the CAOS Application Builder) which can load different packages (set of modules). Current publicly distributed packages are the Software Package CAOS (the original adaptive optics package), the Software Package AIRY (an image-reconstruction-oriented package - AIRY stands for Astronomical Image Restoration with interferometrY), the Software Package PAOLAC (a simple CAOS interface for the analytic IDL code PAOLA developed by Laurent Jolissaint - PAOLAC stands for PAOLA within Caos), and a couple of private packages (not publicly distributed but restricted to the corresponding consortia): SPHERE (especially developed for the VLT planet finder SPHERE), and AIRY-LN (a specialized version of AIRY for the LBT instrument LINC-NIRVANA). Another package is also being developed: MAOS (that stands for Multiconjugate Adaptive Optics Simulations), developed for multi-reference multiconjugate AO studies purpose but still in a beta-version form.

[ascl:1404.011] CAP_LOESS_1D & CAP_LOESS_2D: Recover mean trends from noisy data

CAP_LOESS_1D and CAP_LOESS_2D provide improved implementations of the one-dimensional (Cleveland 1979) and two-dimensional (Cleveland & Devlin 1988) Locally Weighted Regression (LOESS) methods to recover the mean trends of the population from noisy data in one or two dimensions. They include a robust approach to deal with outliers (bad data). The software is available in both IDL and Python versions.

[ascl:2011.002] CAPTURE: Interferometric pipeline for image creation from GMRT data

CAPTURE (CAsa Pipeline-cum-Toolkit for Upgraded Giant Metrewave Radio Telescope data REduction) produces continuum images from radio interferometric data. Written in Python, it uses CASA (ascl:1107.013) tasks to analyze data obtained by the GMRT. It can produce self-calibrated images in a fully automatic mode or can run in steps to allow the data to be inspected throughout processing.

[ascl:2308.009] caput: Utilities for building radio astronomy data analysis pipelines

Caput (Cluster Astronomical Python Utilities) contains utilities for handling large datasets on computer clusters. Written with radio astronomy in mind, the package provides an infrastructure for building, managing and configuring pipelines for data processing. It includes modules for dynamically importing and utilizing mpi4py, in-memory mock-ups of h5py objects, and infrastructure for running data analysis pipelines on computer clusters. Caput features a generic container for holding self-documenting datasets in memory with straightforward syncing to h5py files, and offers specialization for holding time stream data. Caput also includes tools for MPI-parallel analysis and routines for converting between different time representations, dealing with leap seconds, and calculating celestial times.

[ascl:2006.014] CARACal: Containerized Automated Radio Astronomy Calibration pipeline

CARACal (Containerized Automated Radio Astronomy Calibration, formerly MeerKATHI) reduces radio-interferometric data. Developed originally as an end-to-end continuum- and line imaging pipeline for MeerKAT, it can also be used with other radio telescopes. CARACal reduces large data sets and produces high-dynamic-range continuum images and spectroscopic data cubes. The pipeline is platform-independent and delivers imaging quality metrics to efficiently assess the data quality.

[ascl:2406.007] CARDiAC: Anisotropic Redshift Distributions in Angular Clustering

CARDiAC (Code for Anisotropic Redshift Distributions in Angular Clustering) computes the impact of anisotropic redshift distributions on a wide class of angular clustering observables. It supports auto- and cross-correlations of galaxy samples and cosmic shear maps, including galaxy-galaxy lensing. The anisotropy can be present in the mean redshift and/or width of Gaussian distributions, as well as in the fraction of galaxies in each component of multi-modal distributions. Templates of these variations can be provided by the user or simulated internally within the code.

[ascl:1505.003] caret: Classification and Regression Training

caret (Classification And REgression Training) provides functions for training and plotting classification and regression models. It contains tools for data splitting, pre-processing, feature selection, model tuning using resampling, and variable importance estimation, as well as other functionality.

[ascl:1404.009] carma_pack: MCMC sampler for Bayesian inference

carma_pack is an MCMC sampler for performing Bayesian inference on continuous time autoregressive moving average models. These models may be used to model time series with irregular sampling. The MCMC sampler utilizes an adaptive Metropolis algorithm combined with parallel tempering.

[ascl:1611.016] Carpet: Adaptive Mesh Refinement for the Cactus Framework

Carpet is an adaptive mesh refinement and multi-patch driver for the Cactus Framework (ascl:1102.013). Cactus is a software framework for solving time-dependent partial differential equations on block-structured grids, and Carpet acts as driver layer providing adaptive mesh refinement, multi-patch capability, as well as parallelization and efficient I/O.

[ascl:2005.007] Carpyncho: VVV Catalog browser toolkit

Carpyncho browses catalogs to search for and characterize time variable data of the Vista Variables in the Via Lactea (VVV) Survey. The stacked pawprint data from the Cambridge Astronomical Science Unit's (CASU) Vista Data Flow System (VDFS) v>=1.3 catalogs have been cross-matched with the VDFS CASU v1.3 tile catalogs into Parquet files, allowing detection and classification of periodic variables within this dataset.

[ascl:2103.021] Carsus: Atomic database for astronomy

Carsus manages atomic datasets. It requires Chianti (ascl:9911.004), and can read data from a variety of sources and output them to file formats readable by radiative transfer codes such as TARDIS (ascl:1402.018).

[ascl:2103.031] CARTA: Cube Analysis and Rendering Tool for Astronomy

CARTA (Cube Analysis and Rendering Tool for Astronomy) is an image visualization and analysis tool designed for ALMA, the VLA, the SKA pathfinders, and the ngVLA. It offers catalog support, shared region analytics, profile smoothing, spectral line query, and more. CARTA adopts a client-server architecture suitable for visualizing images with large file sizes (GB to TB) easily obtained from ALMA, VLA, or SKA pathfinder observations; computation and data storage are handled by remote enterprise-class servers or clusters with high performance storage, while processed products are sent to clients only for visualization with modern web features, such as GPU-accelerated rendering. This architecture also enables users to interact with the ALMA and VLA science archives by using CARTA as an interface. CARTA provides a desktop version and a server version. The former is suitable for single-user usage with a laptop, a desktop, or a remote server in the "remote" execution mode. The latter is suitable for institution-wide deployment to support multiple users with user authentication and additional server-side features.

[ascl:2207.025] casa_cube: Display and analyze astronomical data cubes

casa_cube provides an interface to data cubes generated by CASA (ascl:1107.013) or Gildas (ascl:1305.010). It performs simple tasks such as plotting channel maps, moment maps, and line profiles in various units, and also corrects for cloud extinction, reconvolves with a beam taper, and permits quick and easy comparisons with models.

[ascl:1107.013] CASA: Common Astronomy Software Applications

CASA, the Common Astronomy Software Applications package, is being developed with the primary goal of supporting the data post-processing needs of the next generation of radio astronomical telescopes such as ALMA and EVLA. The package can process both interferometric and single dish data. The CASA infrastructure consists of a set of C++ tools bundled together under an iPython interface as a set of data reduction tasks. This structure provides flexibility to process the data via task interface or as a python script. In addition to the data reduction tasks, many post-processing tools are available for even more flexibility and special purpose reduction needs.

[ascl:1912.002] casacore: Suite of C++ libraries for radio astronomy data processing

The casacore package contains the core libraries of the old AIPS++/CASA (ascl:1107.013) package. This split was made to get a better separation of core libraries and applications. CASA is now built on top of Casacore. The system consists of a set of layered libraries (packages) and includes a library (using Boost-Python) that converts the basic Casacore types (e.g., Array, Record) to and from Python. Casacore includes the casa package for core functionality and data types like Array and Record; a scimath package for N-dim functions with auto-differentiation and linear or non-linear fitting; and a tables package for the table data system supporting N-dim arrays with advanced querying. It also includes the measures package to manage values in astronomical reference frames using physical units (Quanta) and the MeasurementSets for storing data in the UV-domain, and also the images package for N-dim images in world coordinates with various analysis operations.

[ascl:1905.023] CASI-2D: Convolutional Approach to Shell Identification - 2D

CASI-2D (Convolutional Approach to Shell Identification) identifies stellar feedback signatures using data from magneto-hydrodynamic simulations of turbulent molecular clouds with embedded stellar sources and deep learning techniques. Specifically, a deep neural network is applied to dense regression and segmentation on simulated density and synthetic 12 CO observations to identify shells, sometimes referred to as "bubbles," and other structures of interest in molecular cloud data.

[ascl:2009.005] CASI-3D: Convolutional Approach to Structure Identification-3D

CASI-3D identifies signatures of stellar feedback in molecular line spectra, such as 12CO and 13CO, using deep learning. The code is developed from CASI-2D (ascl:1905.023) and exploits the full 3D spectral information.

[ascl:1402.013] CASSIS: Interactive spectrum analysis

CASSIS (Centre d'Analyse Scientifique de Spectres Infrarouges et Submillimetriques), written in Java, is suited for broad-band spectral surveys to speed up the scientific analysis of high spectral resolution observations. It uses a local spectroscopic database made of the two molecular spectroscopic databases JPL and CDMS, as well as the atomic spectroscopic database NIST. Its tools include a LTE model and the RADEX (ascl:1010.075) model connected to the LAMDA (ascl:1010.077) molecular collisional database. CASSIS can build a line list fitting the various transitions of a given species and directly produce rotational diagrams from these lists. CASSIS is fully integrated into HIPE (ascl:1111.001), the Herschel Interactive Processing Environment, as a plug-in.

[ascl:1105.010] CASTRO: Multi-dimensional Eulerian AMR Radiation-hydrodynamics Code

CASTRO is a multi-dimensional Eulerian AMR radiation-hydrodynamics code that includes stellar equations of state, nuclear reaction networks, and self-gravity. Initial target applications for CASTRO include Type Ia and Type II supernovae. CASTRO supports calculations in 1-d, 2-d and 3-d Cartesian coordinates, as well as 1-d spherical and 2-d cylindrical (r-z) coordinate systems. Time integration of the hydrodynamics equations is based on an unsplit version of the piecewise parabolic method (PPM) with new limiters that avoid reducing the accuracy of the scheme at smooth extrema. CASTRO can follow an arbitrary number of isotopes or elements. The atomic weights and amounts of these elements are used to calculate the mean molecular weight of the gas required by the equation of state. CASTRO supports several different approaches to solving for self-gravity. The most general is a full Poisson solve for the gravitational potential. CASTRO also supports a monopole approximation for gravity, and a constant gravity option is also available. The CASTRO software is written in C++ and Fortran, and is based on the BoxLib software framework developed by CCSE.

[ascl:1804.013] CAT-PUMA: CME Arrival Time Prediction Using Machine learning Algorithms

CAT-PUMA (CME Arrival Time Prediction Using Machine learning Algorithms) quickly and accurately predicts the arrival time of Coronal Mass Ejections (CMEs). The software was trained via detailed analysis of CME features and solar wind parameters using 182 previously observed geo-effective partial-/full-halo CMEs, and uses Support Vector Machine (SVM) algorithms to make its predictions, which can be produced within minutes of providing the necessary input parameters of a CME.

[ascl:2108.008] CatBoost: High performance gradient boosting on decision trees library

CatBoost is a machine learning method based on gradient boosting over decision trees; it can be used for ranking, classification, regression, and other machine learning tasks in Python, R, Java, and C++. It supports both numerical and categorical features and computation on CPU and GPU, and is fast and scalable. Visualization tools are also included in CatBoost.
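
A minimal Python sketch with toy data; the feature values and column layout are invented for illustration:

    from catboost import CatBoostClassifier

    # Two numerical features plus one categorical feature (column 2)
    X = [[1.0, 2.0, "spiral"], [3.0, 1.0, "elliptical"],
         [2.5, 0.5, "spiral"], [0.5, 3.0, "elliptical"]]
    y = [0, 1, 0, 1]

    model = CatBoostClassifier(iterations=50, depth=3, verbose=False)
    model.fit(X, y, cat_features=[2])   # categorical columns given by index
    print(model.predict([[1.2, 1.8, "spiral"]]))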

[ascl:1206.008] Catena: Ensemble of stars orbit integration

Catena integrates the orbits of an ensemble of stars using the chain-regularization method (Mikkola & Aarseth) with an embedded Runge-Kutta integration method of 9(8)th order (Prince & Dormand).

[ascl:2007.024] CaTffs: Calcium triplet indexes

CaTffs predicts the strength of calcium triplet indices (CaT*, PaT and CaT) on the basis of empirical fitting functions and performs the required interpolations between the different local functions. Together with the index predictions, the program also computes the random errors associated with such predictions, resulting from the covariance matrices of the fits (for the indices CaT* and PaT). This ensures a reliable error estimate for any combination of input atmospheric parameters.

[ascl:1810.013] catsHTM: Catalog cross-matching tool

The catsHTM package quickly accesses and cross-matches large astronomical catalogs that have been reformatted into the HDF5-based file format. It performs efficient cone searches at resolutions from a few arcseconds to degrees within a few milliseconds, cross-matches numerous catalogs, and can perform general searches.
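
A minimal cone-search sketch with the Python bindings, assuming the documented convention of coordinates in radians and the search radius in arcseconds; the catalog name is an example:

    import math
    import catsHTM

    # 10 arcsec cone search around RA = 180 deg, Dec = -30 deg
    cat, colcell, colunits = catsHTM.cone_search(
        'GAIADR2', math.radians(180.0), math.radians(-30.0), 10.0)
    print(colcell)   # column names of the returned array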

[ascl:2108.007] catwoman: Transit modeling Python package for asymmetric light curves

catwoman models asymmetric transit light curves. Written in Python, it calculates light curves for any radially symmetric stellar limb-darkening law, modeling planets as two semi-circles of different radii. catwoman is built on the batman library (ascl:1510.002) and uses its integration algorithm.
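
A minimal sketch following the batman-style interface, where rp and rp2 are the radii of the two semi-circles and phi their rotation angle; the parameter names are recalled from the catwoman documentation and should be treated as assumptions:

    import numpy as np
    import catwoman

    params = catwoman.TransitParams()
    params.t0 = 0.0                    # mid-transit epoch [days]
    params.per = 3.0                   # orbital period [days]
    params.rp = 0.10                   # radius of semi-circle 1 [R_*]
    params.rp2 = 0.105                 # radius of semi-circle 2 [R_*]
    params.phi = 90.0                  # rotation angle [deg]
    params.a = 8.0                     # scaled semi-major axis [R_*]
    params.inc = 87.0                  # inclination [deg]
    params.ecc = 0.0
    params.w = 90.0
    params.limb_dark = "quadratic"
    params.u = [0.1, 0.3]

    t = np.linspace(-0.05, 0.05, 500)
    model = catwoman.TransitModel(params, t)
    flux = model.light_curve(params)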

[submitted] Caustic Mass Estimator for Galaxy Clusters

The caustic technique is a powerful method to infer cluster mass profiles to clustrocentric distances well beyond the virial radius. It relies on measuring the escape velocity of the system using only galaxy redshift information. The method was introduced by Diaferio & Geller (1997) and Diaferio (1999). This code performs caustic mass estimation for galaxy clusters, with outlier identification as a side effect. However, pre-cleaning of interlopers is recommended, using, e.g., the shifting-gapper technique.

[ascl:1904.012] CausticFrog: 1D Lagrangian Simulation Package

CausticFrog models the reaction of a system of orbiting particles to instantaneous mass loss. It applies to any spherically symmetric potential and follows the radial evolution of shells of mass. CausticFrog tracks the inner and outer edge of each shell, whose radii evolve as test particles. The amount of mass in each shell is fixed, but multiple shells can overlap, leading to higher densities.

[ascl:2404.001] cbeam: Coupled-mode propagator for slowly-varying waveguides

cbeam models the propagation of guided light through slowly-varying few-mode waveguides using the coupled-mode theory (CMT). When compared with more general numerical methods for waveguide simulation, such as the finite-differences beam propagation method (FD-BPM), numerical implementations of the CMT can be much more computationally efficient. Written in Python and Julia, the package provides a Pythonic class structure to define waveguides, with simple classes for directional couplers and photonic lanterns already provided. cbeam also doubles as a finite-element eigenmode solver.

[ascl:2406.009] CBiRd: Bias tracers In Redshift space

CBiRd (Code for Bias tracers In Redshift space) provides correlators in the Effective Field Theory of Large-Scale Structure (EFTofLSS) in a ready-to-use pipeline for cosmological analysis of galaxy-redshift survey data. It provides a core calculation package (C++BiRd), a Python implementation of a Taylor expansion of the power spectrum around a reference cosmology for efficient evaluation (TBiRd), and libraries to correct for observational systematics. CBiRd also provides MCMC samplers (MCBiRd) for a power spectrum and bispectrum analysis of galaxy-redshift survey data based on emcee (ascl:1303.002), and can provide an earlybird pass to explore the cosmos with LSS surveys.

[ascl:2402.004] CCBH-Numerics: Cosmologically-coupled-black-holes formation mass numerics

CCBH-Numerics (previously called CCBH-PLPP) computes the probability of the existence of a single cosmologically coupled black hole (BH) with a formation mass below a specified threshold for given observational data of binary black holes (BBHs) from gravitational waves. The code uses the unbiased population of BBHs, as given by the power-law-plus-peak (PLPP) profile, as the observational input, and assumes that the detected BBHs are formed from stellar evolution, not primordial BHs. CCBH-Numerics also works with individual data from BBHs and for NSBH pairs as well.

[ascl:2206.020] CCDLAB: FITS image viewer and data reducer

CCDLAB provides graphical user interface functionality for FITS image viewing and data reduction based on the JPFITS FITS-file interface. It can view, manipulate, and save FITS primary image data and image extensions, view and manipulate FITS image headers, and view FITS Bintable extensions. The code enables batch processing, viewing, and saving of FITS images and searching FITS files on disk. CCDLAB also provides general image reduction techniques, source detection and characterization, and can create World Coordinate Solutions automatically or manually for FITS images.

[ascl:1403.021] CCDPACK: CCD Data Reduction Package

CCDPACK contains programs to debias, remove dark current, flatfield, register, resample and normalize data from single- or multiple-CCD instruments. The basic reduction stages can be set up using an X-based GUI that controls an automated reduction system, so one can start working without any detailed knowledge of the package (or indeed of CCD reduction). Registration is performed using graphical, script-based or automated techniques that keep the amount of work to a minimum. CCDPACK uses the Starlink environment (ascl:1110.012).

[ascl:1510.007] ccdproc: CCD data reduction software

Ccdproc is an Astropy-affiliated package for basic data reduction of CCD images. It provides many of the necessary tools for processing CCD images, built on a framework that provides error propagation and bad pixel tracking throughout the reduction process.
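
A minimal reduction sketch; the file names are placeholders and the raw frames are assumed to be in ADU:

    import ccdproc
    from astropy.nddata import CCDData

    raw = CCDData.read("science.fits", unit="adu")
    bias = CCDData.read("master_bias.fits", unit="adu")
    flat = CCDData.read("master_flat.fits", unit="adu")

    # Uncertainty and mask frames propagate through each step
    debiased = ccdproc.subtract_bias(raw, bias)
    reduced = ccdproc.flat_correct(debiased, flat)
    reduced.write("science_reduced.fits", overwrite=True)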

[ascl:1511.013] CCDtoRGB: RGB image production from three-band astronomical images

CCDtoRGB produces red-green-blue (RGB) composites from three-band astronomical images, ensuring an object with a specified astronomical color has a unique color in the RGB image rather than burnt-out white stars. Use of an arcsinh stretch shows faint objects while simultaneously preserving the structure of brighter objects in the field, such as the spiral arms of large galaxies.

[ascl:1707.004] CCFpams: Atmospheric stellar parameters from cross-correlation functions

CCFpams allows the measurement of stellar temperature, metallicity and gravity within a few seconds and in a completely automated fashion. Rather than performing comparisons with spectral libraries, the technique is based on the determination of several cross-correlation functions (CCFs) obtained by including spectral features with different sensitivity to the photospheric parameters. Literature stellar parameters of high signal-to-noise (SNR) and high-resolution HARPS spectra of FGK Main Sequence stars are used to calibrate the stellar parameters as a function of CCF areas.

[ascl:1901.003] CCL: Core Cosmology Library

The Core Cosmology Library (CCL) computes basic cosmological observables and provides predictions for many cosmological quantities, including distances, angular power spectra, correlation functions, halo bias and the halo mass function through state-of-the-art modeling prescriptions. Fiducial specifications for the expected galaxy distributions for the Large Synoptic Survey Telescope (LSST) are also included, together with the capability of computing redshift distributions for a user-defined photometric redshift model. Predictions for correlation functions of galaxy clustering, galaxy-galaxy lensing and cosmic shear are within a fraction of the expected statistical uncertainty of the observables for the models and in the range of scales of interest to LSST. CCL is written in C and has a python interface.
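
A minimal sketch of the Python interface (pyccl), computing a comoving distance and a linear matter power spectrum for an illustrative flat LambdaCDM cosmology:

    import numpy as np
    import pyccl as ccl

    cosmo = ccl.Cosmology(Omega_c=0.25, Omega_b=0.05, h=0.7,
                          sigma8=0.8, n_s=0.96)

    a = 1.0 / (1.0 + 1.0)                           # scale factor at z = 1
    print(ccl.comoving_radial_distance(cosmo, a))   # Mpc
    k = np.logspace(-3, 0, 50)                      # wavenumbers [1/Mpc]
    pk = ccl.linear_matter_power(cosmo, k, a)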

[ascl:1208.006] ccogs: Cosmological Calculations on the GPU

This suite contains two packages for computing cosmological quantities on the GPU: aperture_mass, which calculates the aperture mass map for a given dataset using the filter proposed by Schirmer et al (2007) (an NFW profile with exponential cut-offs at zero and large radii), and angular_correlation, which calculates the 2-pt angular correlation function using data and a flat distribution of randomly generated galaxies. A particular estimator is chosen, but the user has the flexibility to explore other estimators.

[ascl:1604.009] CCSNMultivar: Core-Collapse Supernova Gravitational Waves

CCSNMultivar aids the analysis of core-collapse supernova gravitational waves. It includes multivariate regression of Fourier transformed or time domain waveforms, hypothesis testing for measuring the influence of physical parameters, and the Abdikamalov et al. catalog for example use. CCSNMultivar can optionally incorporate additional uncertainty due to detector noise and approximate waveforms from anywhere within the parameter space.

[ascl:1904.006] CDAWeb: Coordinated Data Analysis Web

CDAWeb (Coordinated Data Analysis Workshop Web) enables viewing essentially any data produced in Common Data Format/CDF with the ISTP/IACG Guidelines and supports interactive plotting of variables from multiple instruments on multiple investigations simultaneously on arbitrary, user-defined time-scales. It also supports data retrieval in either CDF or ASCII format. NASA's GSFC Space Physics Data Facility maintains a publicly available database that includes approximately 600 data variables from Geotail, Wind, Interball, Polar, SOHO, ancillary spacecraft and ground-based investigations. CDAWeb includes high resolution digital data products that support event correlative science. The system combines the client-server user interface technology of the Web with a powerful set of customized routines based in the COTS Interactive Data Language (IDL) package to leverage the data format standards.

[ascl:2005.017] cdetools: Tools for Conditional Density Estimates

cdetools provides tools for evaluating conditional density estimates and has applications to photometric redshift estimation and likelihood-free cosmological inference. Available in R and Python, it provides functions for computing a so-called CDE loss function for tuning and assessing the quality of individual probability density functions (PDFs) and diagnostic functions that probe the population-level performance of the PDFs.
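
A minimal NumPy sketch of the CDE loss itself (up to a constant), not the package's own implementation:

    import numpy as np

    def cde_loss(cde, z_grid, z_true):
        """cde: (n_obj, n_grid) estimated densities evaluated on z_grid;
        z_true: (n_obj,) observed values. Smaller loss is better."""
        term1 = np.mean(np.trapz(cde ** 2, z_grid, axis=1))
        # nearest grid index of each observed value (approximate)
        idx = np.clip(np.searchsorted(z_grid, z_true), 0, len(z_grid) - 1)
        term2 = np.mean(cde[np.arange(len(z_true)), idx])
        return term1 - 2.0 * term2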

[ascl:2305.025] CELEBI: Precision localizations and polarimetric data for fast radio bursts

The Australian Square Kilometre Array Pathfinder (ASKAP) has been enabled by the Commensal Real-time ASKAP Fast Transients Collaboration (CRAFT) to detect Fast Radio Bursts (FRBs) in real-time and save raw antenna voltages containing FRB detections. CELEBI, the CRAFT Effortless Localization and Enhanced Burst Inspection pipeline, extends CRAFT’s existing software to process ASKAP voltages to produce sub-arcsecond precision localizations and polarimetric data at time resolutions as fine as 3 ns of FRB events. CELEBI uses Nextflow (ascl:2305.024) to link together Bash and Python code to perform software correlation, interferometric imaging, and beamforming, thereby making use of common astronomical software packages.

[ascl:1709.008] celerite: Scalable 1D Gaussian Processes in C++, Python, and Julia

celerite provides fast and scalable Gaussian Process (GP) Regression in one dimension and is implemented in C++, Python, and Julia. The celerite API is designed to be familiar to users of george and, like george, celerite is designed to efficiently evaluate the marginalized likelihood of a dataset under a GP model. This can then be used alongside a non-linear optimization or posterior inference library for the best results.

celerite has been superseded by celerite2 (ascl:2310.001).

[ascl:2310.001] celerite2: Fast and scalable Gaussian Processes in one dimension

celerite2 is a re-write of celerite (ascl:1709.008), an algorithm for fast and scalable Gaussian Process (GP) Regression in one dimension. celerite2 improves numerical stability and integration with various machine learning frameworks. The implementation includes interfaces in Python and C++, with full support for PyMC (ascl:1610.016) and JAX (ascl:2111.002).
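
A minimal Python sketch that evaluates a GP likelihood with a stochastically driven simple harmonic oscillator kernel, one of the standard celerite2 terms:

    import numpy as np
    import celerite2
    from celerite2 import terms

    t = np.sort(np.random.uniform(0, 10, 100))
    yerr = 0.1 * np.ones_like(t)
    y = np.sin(t) + yerr * np.random.randn(len(t))

    kernel = terms.SHOTerm(sigma=1.0, rho=5.0, Q=0.3)
    gp = celerite2.GaussianProcess(kernel, mean=0.0)
    gp.compute(t, yerr=yerr)          # O(N) factorization
    print(gp.log_likelihood(y))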

[ascl:1602.011] Celestial: Common astronomical conversion routines and functions

The R package Celestial contains common astronomy conversion routines, particularly the HMS and degrees schemes, and a large range of functions for calculating properties of different cosmologies (as used by the cosmocalc website). This includes distances, ages, growth rate/factor and densities (e.g., Omega evolution and critical energy density). It also includes functions for calculating thermal properties of the CMB and Planck's equations and virial properties of halos in different cosmologies, and standard NFW and weak-lensing formulas and low level orbital routines for calculating Roche properties, Vis-Viva and free-fall times.

[ascl:1612.016] CELib: Software library for simulations of chemical evolution

CELib (Chemical Evolution Library) simulates the chemical evolution of galaxy formation under the simple stellar population (SSP) approximation and can be used by any simulation code that uses the SSP approximation, such as particle-based and mesh codes as well as semi-analytical models. Initial mass functions, stellar lifetimes, yields from Type II and Type Ia supernovae, asymptotic giant branch stars, and neutron star merger components are included, and a variety of models are available for use. The library allows comparisons of the impact of individual models on the chemical evolution of galaxies by changing control flags and parameters of the library.

[ascl:2302.005] celmech: Sandbox for celestial mechanics calculations

celmech provides a variety of analytical and semianalytical tools for celestial mechanics and dynamical astronomy. The package interfaces closely with the REBOUND N-body integrator (ascl:1110.016), thus facilitating comparisons between calculation results and direct N-body integrations. celmech can isolate the contribution of particular resonances to a system's dynamical evolution, and can develop simple analytical models with the minimum number of terms required to capture a particular dynamical phenomenon.

[ascl:1906.021] centerRadon: Center determination code in stellar images

centerRadon finds the centers of stars based on the Radon Transform to sub-pixel precision. For a coronagraphic image of a star, it starts from a given location, then for each sub-pixel position, it interpolates the image and sums the pixels along different angles, creating a cost function. The center of the star is expected to correspond with where the cost function is maximized. The default values are set for the STIS coronagraphic images of the Hubble Space Telescope by summing over the diagonals (i.e., 45° and 135°), but it can be generally applied to other high-contrast imaging instruments with or without Adaptive Optics systems such as HST-NICMOS, P1640, or GPI.

[ascl:1308.015] Ceph_code: Cepheid light-curves fitting

Ceph_code fits multi-band Cepheid light-curves using templates derived from OGLE observations. The templates include short period stars (<10 days) and overtone stars.

[ascl:1610.002] CERES: Collection of Extraction Routines for Echelle Spectra

The Collection of Extraction Routines for Echelle Spectra (CERES) constructs automated pipelines for the reduction, extraction, and analysis of echelle spectrograph data. This modular code includes tools for handling the different steps of the processing: CCD reductions, tracing of the echelle orders, optimal and simple extraction, computation of the wavelength solution, estimation of radial velocities, and rough and fast estimation of the atmospheric parameters. The standard output of pipelines constructed with CERES is a FITS cube with the optimally extracted, wavelength calibrated and instrumental drift-corrected spectrum for each of the science images. Additionally, CERES includes routines for the computation of precise radial velocities and bisector spans via the cross-correlation method, and an automated algorithm to obtain an estimate of the atmospheric parameters of the observed star.

[ascl:1010.059] CESAM: A Free Code for Stellar Evolution Calculations

The Cesam code is a consistent set of programs and routines which perform calculations of 1D quasi-hydrostatic stellar evolution including microscopic diffusion of chemical species and diffusion of angular momentum. The solution of the quasi-static equilibrium is performed by a collocation method based on piecewise polynomial approximations projected on a B-spline basis; this allows stable and robust calculations and exact restitution of the solution, not only at grid points, even for discontinuous variables. Other advantages are control of the accuracy by a single parameter and its improvement by super-convergence. An automatic mesh refinement adjusts the locations of grid points according to the changes in the unknowns. For standard models, the evolution of the chemical composition is solved by stiffly stable schemes of orders up to four; in the convection zones, mixing and evolution of chemicals are simultaneous. The solution of the diffusion equation employs the Galerkin finite-element scheme; the mixing of chemicals is then performed by a strong turbulent diffusion. A precise restoration of the atmosphere is also provided.

[ascl:2111.005] CEvNS: Calculate Coherent Elastic Neutrino-Nucleus Scattering cross sections and recoil spectra

CEvNS calculates Coherent Elastic Neutrino-Nucleus Scattering (CEvNS) cross sections and recoil spectra. It includes (among other things) the Standard Model contribution to the CEvNS cross section, along with the contribution from Simplified Models with new vector or scalar mediators. It also covers neutrino magnetic moments and non-standard contact neutrino interactions (NSI).

[ascl:1901.001] cFE: Core Flight Executive

The Core Flight Executive is a portable, platform-independent embedded system framework that is the basis for flight software for satellite data systems and instruments; cFE can be used on other embedded systems as well. The Core Flight Executive is written in C and depends on the software library Operating System Abstraction Layer (OSAL), which is available at https://sourceforge.net/projects/osal/.

[ascl:1010.001] CFITSIO: A FITS File Subroutine Library

CFITSIO is a library of C and Fortran subroutines for reading and writing data files in FITS (Flexible Image Transport System) data format. CFITSIO provides simple high-level routines for reading and writing FITS files that insulate the programmer from the internal complexities of the FITS format. CFITSIO also provides many advanced features for manipulating and filtering the information in FITS files.

[ascl:2101.009] cFS: core Flight System

cFS is a platform and project independent reusable software framework and set of reusable applications developed by NASA Goddard Space Flight Center. There are three key aspects to the cFS architecture: a dynamic run-time environment, layered software, and a component based design, making it suitable for reuse on NASA flight projects and/or embedded software systems. This framework is used as the basis for the flight software for satellite data systems and instruments, but can also be used on other embedded systems. Modules of this package are used in NICER (Neutron star Interior Composition Explorer). The modules are available as separate downloads from SourceForge through the NASA cFS website.

[ascl:1904.003] CGS: Collisionless Galactic Simulator

CGS (Collisionless Galactic Simulator) uses Fourier techniques to solve the Poisson equation ∇²Φ = 4πGρ, relating the mean potential Φ of a system to the mass density ρ. The angular dependence of the force is treated exactly in terms of the single-particle Legendre polynomials, which preserves accuracy and avoids systematic errors. The density is assigned to a radial grid by means of a cloud-in-cell scheme with a linear kernel, i.e., a particle contributes to the density of the two closest cells with a weight depending linearly on the distance from the center of the cell considered. The same kernel is then used to assign the force from the grid to the particle. The time step is chosen adaptively in such a way that particles are not allowed to cross more than one radial cell during one step. CGS is based on van Albada's code (1982) and is distributed in the NEMO (ascl:1010.051) Stellar Dynamics Toolbox.
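
An illustrative NumPy sketch of the cloud-in-cell assignment with a linear kernel described above (not the CGS source itself):

    import numpy as np

    def cic_density(r, mass, r_edges):
        """Assign particle masses at radii r to a uniform radial grid."""
        centers = 0.5 * (r_edges[:-1] + r_edges[1:])
        dr = r_edges[1] - r_edges[0]
        # index of the nearest cell center below each particle
        i = np.clip(((r - centers[0]) / dr).astype(int), 0, len(centers) - 2)
        w = np.clip((r - centers[i]) / dr, 0.0, 1.0)  # linear weight to cell i+1
        rho = np.zeros(len(centers))
        np.add.at(rho, i, mass * (1.0 - w))
        np.add.at(rho, i + 1, mass * w)
        shells = 4.0 / 3.0 * np.pi * (r_edges[1:]**3 - r_edges[:-1]**3)
        return rho / shells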

[ascl:1411.024] CGS3DR: UKIRT CGS3 data reduction software

CGS3DR is data reduction software for the UKIRT CGS3 mid-infrared grating spectrometer instrument. It includes a command-line interface and a GUI. The software, originally on VMS, was ported to Unix. It uses Starlink (ascl:1110.012) infrastructure libraries.

[ascl:1406.013] CGS4DR: Automated reduction of data from CGS4

CGS4DR is data reduction software for the CGS4 instrument at UKIRT. The software can be used offline to reprocess CGS4 data. CGS4DR allows a wide variety of data reduction configurations, and can interlace oversampled data frames; reduce known bias, dark, flat, arc, object and sky frames; remove the sky, residual sky OH-lines (λ < 2.3 μm) and thermal emission (λ ≥ 2.3 μm) from data; and add data into groups for improved signal-to-noise. It can also extract and de-ripple a spectrum and offers a variety of ways to plot data, in addition to other useful features. CGS4DR is distributed as part of the Starlink software collection (ascl:1110.012).

[ascl:1910.017] ChainConsumer: Corner plots, LaTeX tables and plotting walks

ChainConsumer consumes the chains output from Monte Carlo processes such as MCMC to produce plots of the posterior surface inferred from the chain distributions, to plot the chains as walks to check for mixing and convergence, and to output parameter summaries in the form of LaTeX tables. It handles multiple models (chains), allowing for model comparison using AIC, BIC or DIC metrics.
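
A minimal sketch, assuming the pre-1.0 ChainConsumer API (the interface changed in later releases); the chain here is synthetic:

    import numpy as np
    from chainconsumer import ChainConsumer

    # Fake MCMC samples of two correlated parameters
    chain = np.random.multivariate_normal([0.0, 1.0],
                                          [[1.0, 0.7], [0.7, 2.0]], size=10000)

    c = ChainConsumer()
    c.add_chain(chain, parameters=[r"$\Omega_m$", r"$\sigma_8$"], name="model A")
    fig = c.plotter.plot()                  # corner plot of the posterior
    print(c.analysis.get_latex_table())     # LaTeX parameter summary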

[ascl:1105.005] ChaNGa: Charm N-body GrAvity solver

ChaNGa (Charm N-body GrAvity solver) performs collisionless N-body simulations. It can perform cosmological simulations with periodic boundary conditions in comoving coordinates or simulations of isolated stellar systems. It also can include hydrodynamics using the Smooth Particle Hydrodynamics (SPH) technique. It uses a Barnes-Hut tree to calculate gravity, with hexadecapole expansion of nodes and Ewald summation for periodic forces. Timestepping is done with a leapfrog integrator with individual timesteps for each particle.
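
For orientation, a generic kick-drift-kick leapfrog step of the kind described above (illustrative Python, not ChaNGa's implementation):

    import numpy as np

    def leapfrog_step(x, v, accel, dt):
        """Advance positions x and velocities v by one step dt."""
        v_half = v + 0.5 * dt * accel(x)           # half kick
        x_new = x + dt * v_half                    # drift
        v_new = v_half + 0.5 * dt * accel(x_new)   # half kick
        return x_new, v_new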

[ascl:1703.015] Charm: Cosmic history agnostic reconstruction method

Charm (cosmic history agnostic reconstruction method) reconstructs the cosmic expansion history in the framework of Information Field Theory. The reconstruction is performed via the iterative Wiener filter from an agnostic or from an informative prior. The charm code allows one to test the compatibility of several different data sets with the LambdaCDM model in a non-parametric way.

[ascl:2309.017] ChEAP: Chemical Evolution Analytic Package

ChEAP (Chemical Evolution Analytic Package) implements an analytic solution for the chemical evolution model of the Galaxy that extends the instantaneous recycling approximation with the contribution of Type Ia SNe. The code works for different prescriptions of the delay time distributions (DTDs), including the single and double degenerate scenarios, and allows the inclusion of an arbitrary number of pristine gas infalls. The required functions are contained in the CheapTools.py file, which is imported as a Python library. ChEAP also includes code to illustrate, with a random-parameter chemical evolution model, the accuracy of this analytic solution compared to one using numerical integration.

[ascl:1412.002] Cheetah: Starspot modeling code

Cheetah models starspots in photometric data (lightcurves) by calculating the modulation of a light curve due to starspots. The main parameters of the program are the linear and quadratic limb darkening coefficients, stellar inclination, spot locations and sizes, and the intensity ratio of the spots to the stellar photosphere. Cheetah uses uniform spot contrast and the minimum number of spots needed to produce a good fit and ignores bright regions for the sake of simplicity.

[ascl:2107.020] Chem-I-Calc: Chemical Information Calculator

Chem-I-Calc evaluates the chemical information content of resolved star spectroscopy. It takes advantage of the Fisher information matrix and the Cramér-Rao inequality to quickly calculate the Cramér-Rao lower bounds (CRLBs), which give the best theoretically achievable precision from a set of observations.
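
An illustrative NumPy sketch of the underlying calculation (not the Chem-I-Calc API): build the Fisher matrix from spectral gradients and per-pixel noise, then read the CRLBs off the diagonal of its inverse:

    import numpy as np

    def crlb(gradients, sigma):
        """gradients: (n_params, n_pixels) d(flux)/d(theta);
        sigma: (n_pixels,) per-pixel noise. Returns one CRLB per parameter."""
        G = gradients / sigma              # noise-weighted gradients
        fisher = G @ G.T                   # F_ij = sum_k G_ik G_jk
        return np.sqrt(np.diag(np.linalg.inv(fisher)))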

[ascl:1702.011] Chempy: A flexible chemical evolution model for abundance fitting

Chempy models Galactic chemical evolution (GCE); it is a parametrized open one-zone model within a Bayesian framework. A Chempy model is specified by a set of 5-10 parameters that describe the effective galaxy evolution along with the stellar and star-formation physics: e.g. the star-formation history (SFH), the feedback efficiency, the stellar initial mass function (IMF) and the incidence of supernova of type Ia (SN Ia). Chempy can sample the posterior probability distribution in the full model parameter space and test data-model matches for different nucleosynthetic yield sets, performing essentially as a chemical evolution fitting tool. Chempy can be used to confront predictions from stellar nucleosynthesis with complex abundance data sets and to refine the physical processes governing the chemical evolution of stellar systems.

ChempyMulti (ascl:1909.006) is available as an update to the ChempyScoring package.

[ascl:1909.006] ChempyMulti: Multi-star Bayesian inference with Chempy

ChempyMulti is an update to Chempy (ascl:1702.011) and provides yield table scoring and multi-star Bayesian inference. This replaces the ChempyScoring package in Chempy. Chempy is a flexible one-zone open-box chemical evolution model, incorporating abundance fitting and stellar feedback calculations. It includes routines for parameter optimization for simulations and observational data and yield table scoring.

[ascl:2108.016] Chemulator: Thermochemical emulator for hydrodynamical modeling

The neural network-based emulator Chemulator advances the gas temperature and chemical abundances of a single position in an astrophysical gas. It is accurate on a single timestep and stable over many iterations, though with decreased accuracy, and it performs less well at low visual extinctions. The code is useful for applications such as large-scale ISM modeling; by retraining the emulator for a given parameter space, Chemulator could also perform more specialized applications such as planetary atmosphere modeling.

[ascl:9911.004] CHIANTI: A database for astrophysical emission line spectroscopy

CHIANTI consists of a critically evaluated set of atomic data necessary to calculate the emission line spectrum of astrophysical plasmas. The data consist of atomic energy levels, atomic radiative data such as wavelengths, weighted oscillator strengths and A values, and electron collisional excitation rates. A set of programs that use these data to calculate the spectrum in a desired wavelength range as a function of temperature and density is also provided. These programs have been written in Interactive Data Language (IDL) and descriptions of these various programs are provided on the website.

[ascl:1308.017] ChiantiPy: Python package for the CHIANTI atomic database

ChiantiPy is an object-oriented Python package for calculating astrophysical spectra using the CHIANTI atomic database for astrophysical spectroscopy. It provides access to the database and the ability to calculate various physical quantities for the interpretation of astrophysical spectra.
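
A minimal sketch; the ion class and attribute names follow the ChiantiPy documentation as best recalled and should be treated as assumptions:

    import numpy as np
    import ChiantiPy.core as ch

    # Fe XIV over a range of temperatures at fixed electron density
    temp = np.logspace(6.0, 6.6, 10)
    fe14 = ch.ion('fe_14', temperature=temp, eDensity=1.0e9)
    fe14.intensity()                      # compute line intensities
    print(fe14.Intensity['wvl'][:5])      # first few line wavelengths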

[ascl:1504.005] chimenea: Multi-epoch radio-synthesis data imaging

Chimenea implements a heuristic algorithm for automated imaging of multi-epoch radio-synthesis data. It generates a deep image via an iterative Clean subroutine performed on the concatenated visibility set and locates steady sources in the field of view. The code then uses this information to apply constrained and then unconstrained (i.e., masked/open-box) Cleans to the single-epoch observations. This obtains better results than if the single-epoch data had been processed independently without prior knowledge of the sky-model. The chimenea pipeline is built upon CASA (ascl:1107.013) subroutines, interacting with the CASA environment via the drive-casa (ascl:1504.006) interface layer.

[ascl:1403.006] CHIMERA: Core-collapse supernovae simulation code

CHIMERA simulates core-collapse supernovae; it is three-dimensional and accounts for the differing energies of neutrinos. This massively parallel multiphysics code conserves total energy (gravitational, internal, kinetic, and neutrino) to within 0.5 B, given a conservative gravitational potential. CHIMERA has three main components: a hydro component, a neutrino transport component, and a nuclear reaction network component. It also includes a Poisson solver for the gravitational potential and a sophisticated equation of state.

[ascl:1602.017] CHIP: Caltech High-res IRS Pipeline

CHIP (Caltech High-res IRS Pipeline) reduces high signal-to-noise short-high and long-high Spitzer-IRS spectra, especially data taken with dedicated background exposures. Written in IDL, it is independent of other Spitzer reduction tools except IRSFRINGE (ascl:1602.016).

[ascl:2306.046] CHIPS: Circumstellar matter and light curves of interaction-powered transients simulator

CHIPS (Complete History of Interaction-Powered Supernovae) simulates the circumstellar matter and light curves of interaction-powered transients. Coupled with MESA (ascl:1010.083), the combined codes can obtain the circumstellar matter profile and light curves of interaction-powered supernovae. CHIPS generates a realistic CSM from a model-agnostic mass eruption calculation, which can serve as a reference for observers to compare with various observations of the CSM. The code can also generate bolometric light curves from CSM interaction, which can be compared with observed light curves. The mass eruption and light curve calculations typically take half a day and half an hour, respectively, on modern CPUs.

[ascl:1104.012] CHIWEI: A Code of Goodness of Fit Tests for Weighted and Unweighed Histograms

CHIWEI is a self-contained Fortran-77 program for goodness-of-fit tests for histograms with weighted as well as unweighted entries. The code calculates the test statistic both for histograms with normalized event weights and for histograms with unnormalized weights.

[ascl:1409.008] CHLOE: A tool for automatic detection of peculiar galaxies

CHLOE is an image analysis unsupervised learning algorithm that detects peculiar galaxies in datasets of galaxy images. The algorithm first computes a large set of numerical descriptors reflecting different aspects of the visual content, and then weighs them based on the standard deviation of the values computed from the galaxy images. The weighted Euclidean distance of each galaxy image from the median is measured, and the peculiarity of each galaxy is determined based on that distance.
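
A minimal NumPy sketch of the scoring scheme described above (not the CHLOE source); the inverse-standard-deviation weighting is an assumption made so that descriptors are compared on a common scale:

    import numpy as np

    def peculiarity(features):
        """features: (n_galaxies, n_descriptors) numerical image descriptors.
        Returns the weighted Euclidean distance of each galaxy from the median."""
        w = 1.0 / (features.std(axis=0) + 1e-12)   # assumed inverse-std weights
        med = np.median(features, axis=0)
        return np.sqrt((((features - med) * w) ** 2).sum(axis=1))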

[ascl:1607.006] Cholla: 3D GPU-based hydrodynamics code for astrophysical simulation

Cholla (Computational Hydrodynamics On ParaLLel Architectures) models the Euler equations on a static mesh and evolves the fluid properties of thousands of cells simultaneously using GPUs. It can update over ten million cells per GPU-second while using an exact Riemann solver and PPM reconstruction, allowing computation of astrophysical simulations with physically interesting grid resolutions (>256^3) on a single device; calculations can be extended onto multiple devices with nearly ideal scaling beyond 64 GPUs.

[ascl:1202.008] Chombo: Adaptive Solutions of Partial Differential Equations

Chombo provides a set of tools for implementing finite difference methods for the solution of partial differential equations on block-structured adaptively refined rectangular grids. Both elliptic and time-dependent modules are included. Chombo supports calculations in complex geometries with both embedded boundaries and mapped grids, and also supports particle methods. Most parallel platforms are supported, and cross-platform self-describing file formats are included.

The Chombo package is a product of the community of Collaborators working with the Applied Numerical Algorithms Group (ANAG), part of the Computational Research Division at LBNL.

[ascl:1209.004] CHORIZOS: CHi-square cOde for parameterRized modeling and characterIZation of phOtometry and Spectrophotometry

CHORIZOS is a multi-purpose Bayesian code developed in IDL to compare photometric data with model spectral energy distributions (SEDs). The user can select the SED family (e.g. Kurucz) and choose the behavior of each parameter (e.g. Teff) to be fixed, constrained to a given range, or unconstrained. The code calculates the likelihood for the full specified parameter ranges, thus allowing for the identification of multiple solutions and the evaluation of the full correlation matrix for the derived parameters of a single solution.

[ascl:1011.018] chrom_exact: Transit of a Spherical Planet of a Stellar Chromosphere which is Geometrically Thin

Transit light curves for stellar continua have only one minimum and a "U" shape. By contrast, transit curves for optically thin chromospheric emission lines can have a "W" shape because of stellar limb-brightening. We calculate light curves for an optically thin shell of emission and fit these models to time-resolved observations of Si IV absorption by the planet HD209458b. We find that the best fit Si IV absorption model has R_p,SiIV/R_* = 0.34 (+0.07/-0.12), similar to the Roche lobe of the planet. While the large radius is only at the limit of statistical significance, we develop formulae applicable to transits of all optically thin chromospheric emission lines.

[ascl:1804.007] chroma: Chromatic effects for LSST weak lensing

Chroma investigates biases originating from two chromatic effects in the atmosphere: differential chromatic refraction (DCR), and wavelength dependence of seeing. These biases arise when using the point spread function (PSF) measured with stars to estimate the shapes of galaxies with different spectral energy distributions (SEDs) than the stars.

[ascl:1701.008] ChromaStar (formerly GrayStar): Web-based pedagogical stellar modeling

ChromaStar (formerly GrayStar) is a web-based pedagogical stellar model. It approximates stellar atmospheric and spectral line modeling in JavaScript with visualization in HTML. It is suitable for a wide range of education and public outreach levels depending on which optional plots and print-outs are turned on. All plots and renderings are pure basic HTML and the plotting module contains original HTML procedures for automatically scaling and graduating x- and y-axes.

[ascl:1701.009] ChromaStarServer (formerly GrayStarServer): Stellar atmospheric modeling and spectrum synthesis

ChromaStarServer (formerly GrayStarServer) is a stellar atmospheric modeling and spectrum synthesis code of pedagogical accuracy that is accessible in any web browser on commonplace computational devices and that runs on a timescale of a few seconds.

[ascl:2009.021] Chrono: Multi-physics simulation engine

Chrono is a physics-based modelling and simulation infrastructure implemented in C++. It can handle multibody dynamics, collision detection, and granular flows, among many other physical processes. Though the applications for which Chrono has been used most often are vehicle dynamics, robotics, and machine design, it has been used to simulate asteroid aggregation and granular systems for astrophysics research. Chrono is written in C++; a Python version, PyChrono, is also available.

[ascl:1311.006] CIAO: Chandra Interactive Analysis of Observations

CIAO is a data analysis system written for the needs of users of the Chandra X-ray Observatory. Because Chandra data is 4-dimensional (2 spatial, time, energy) and each dimension has many independent elements, CIAO was built to handle N-dimensional data without concern about which particular axes were being analyzed. Apart from a few Chandra instrument tools, CIAO is mission independent. CIAO tools read and write several formats, including FITS images and tables (which includes event files) and IRAF imh files. CIAO is a powerful system for the analysis of many types of data.

[ascl:1803.002] CIFOG: Cosmological Ionization Fields frOm Galaxies

CIFOG is a versatile MPI-parallelised semi-numerical tool to perform simulations of the Epoch of Reionization. From a set of evolving cosmological gas density and ionizing emissivity fields, it computes the time and spatially dependent ionization of neutral hydrogen (HI), neutral (HeI) and singly ionized helium (HeII) in the intergalactic medium (IGM). The code accounts for HII, HeII, HeIII recombinations, and provides different descriptions for the photoionization rate that are used to calculate the residual HI fraction in ionized regions. This tool has been designed to be coupled to semi-analytic galaxy formation models or hydrodynamical simulations. The modular fashion of the code allows the user to easily introduce new descriptions for recombinations and the photoionization rate.

[ascl:1111.004] CIGALE: Code Investigating GALaxy Emission

The CIGALE code has been developed to study the evolution of galaxies by comparing modelled galaxy spectral energy distributions (SEDs) to observed ones from the far ultraviolet to the far infrared. It extends the SED fitting algorithm written by Burgarella et al. (2005, MNRAS 360, 1411). While the previous code was designed to fit SEDs in the optical and near infrared, CIGALE is able to fit SEDs up to the far infrared using Dale & Helou (2002, ApJ 576, 159). CIGALE Bayesian and CIGALE Monte Carlo Markov Chain are available.

[ascl:1708.002] CINE: Comet INfrared Excitation

CINE calculates infrared pumping efficiencies that can be applied to the most common molecules found in cometary comae such as water, hydrogen cyanide or methanol. One of the main mechanisms for molecular excitation in comets is the fluorescence by the solar radiation followed by radiative decay to the ground vibrational state. This command-line tool calculates the effective pumping rates for rotational levels in the ground vibrational state scaled by the heliocentric distance of the comet. Fluorescence coefficients are useful for modeling rotational emission lines observed in cometary spectra at sub-millimeter wavelengths. Combined with computational methods to solve the radiative transfer equations based, e.g., on the Monte Carlo algorithm, this model can retrieve production rates and rotational temperatures from the observed emission spectrum.

[ascl:2206.007] CircleCraters: Crater-counting plugin for QGIS

CircleCraters is a projection independent crater counting plugin for QGIS. It has the flexibility to crater count in a GIS environment on Windows, OS X, or Linux, and uses three-click input to define crater rims as a circle.

[ascl:1202.001] CISM_DX: Visualization and analysis tool

CISM_DX is a community-developed suite of integrated data, models, and data and model explorers, for research and education. The data and model explorers are based on code written for OpenDX and Octave; OpenDX provides the visualization infrastructures as well as the process for creating user interfaces to the model and data, and Octave allows for extensive data manipulation and reduction operations. The CISM-DX package extends the capabilities of the core software programs to meet the needs of space physics researchers.

[ascl:2202.014] Citlalicue: Create and manipulate stellar light curves

Citlalicue allows you to create synthetic stellar light curves (transits, stellar variability and white noise) and detrend light curves using Gaussian Processes (GPs). Transits are implemented using PyTransit (ascl:1505.024). Python notebooks are provided to demonstrate using Citlalicue for both functions.

[ascl:1312.013] CJAM: First and second velocity moments calculations

CJAM calculates first and second velocity moments using the Jeans Anisotropic MGE (JAM) models of Cappellari (2008) and Cappellari (2012). These models have been extended to calculate all three (x, y, z) first moments and all six (xx, yy, zz, xy, xz, yz) second moments. CJAM, written in C, is based on the IDL implementation of the line-of-sight calculations by Michele Cappellari.

[ascl:2210.028] CK: Cloud modeling and removal

Cloud Killer recovers surface albedo maps by using reflected light photometry to map the clouds and surface of unresolved exoplanets. For light curves with negligible photometric uncertainties, the minimal top-of-atmosphere albedo at a location is a good estimate of its surface albedo. On synthetic data, it shows little bias, good precision, and accuracy, but slightly underestimated uncertainties; exoplanets with large, changing cloud structures observed near quadrature phases are good candidates for Cloud Killer cloud removal.

[ascl:2105.018] ClaRAN: Classifying Radio sources Automatically with Neural networks

ClaRAN (Classifying Radio sources Automatically with Neural networks) classifies radio source morphology based upon the Faster Region-based Convolutional Neural Network (Faster R-CNN). It is capable of associating discrete and extended components of radio sources in an automated fashion. ClaRAN demonstrates the feasibility of applying deep learning methods for cross-matching complex radio sources of multiple components with infrared maps. The promising results from ClaRAN have implications for the further development of efficient cross-wavelength source identification, matching, and morphology classifications for future radio surveys.

[ascl:2403.015] CLASS-PT: Nonlinear perturbation theory extension of the Boltzmann code CLASS

CLASS-PT modifies the CLASS (ascl:1106.020) code to compute the non-linear power spectra of dark matter and biased tracers in one-loop cosmological perturbation theory, for both Gaussian and non-Gaussian initial conditions. CLASS-PT can be interfaced with the MCMC sampler MontePython (ascl:1805.027) using new, improved custom-built likelihoods.

[ascl:1106.020] CLASS: Cosmic Linear Anisotropy Solving System

Boltzmann codes are used extensively by several groups for constraining cosmological parameters with Cosmic Microwave Background and Large Scale Structure data. This activity is computationally expensive, since a typical project requires from 10,000 to 100,000 Boltzmann code executions. The code CLASS (Cosmic Linear Anisotropy Solving System) incorporates improved approximation schemes leading to a simultaneous gain in speed and precision. We describe here the three approximations used by CLASS for basic LambdaCDM models, namely: a baryon-photon tight-coupling approximation which can be set to first order, second order or to a compromise between the two; an ultra-relativistic fluid approximation which had not been implemented in public distributions before; and finally a radiation streaming approximation taking reionisation into account.

[ascl:1807.013] CLASSgal: Relativistic cosmological large scale structure code

CLASSgal computes large scale structure observables; it includes all relativistic corrections and computes both the power spectrum Cl(z1,z2) and the corresponding correlation function ξ(θ, z1, z2) of the matter density and the galaxy number fluctuations in linear perturbation theory. These quantities contain the full information encoded in the large scale matter distribution at the level of linear perturbation theory for Gaussian initial perturbations. CLASSgal is a modified version of CLASS (ascl:1106.020).

[ascl:2409.011] ClassiPyGRB: Swift/BAT GRB visualizer and classifier

ClassiPyGRB downloads, processes, visualizes, and classifies GRBs in the Swift/BAT database. Users can query light curves for any GRB and use tools to preprocess the data, including noise/duration reduction and interpolation. The package provides a set of facilities and tutorials for classifying GRBs based on their light curves using a method based on a dimensionality reduction of the data using t-Distributed Stochastic Neighbour Embedding (TSNE); results are visualized using a Graphical User Interface (GUI). ClassiPyGRB also plots and animates the results of the TSNE analysis for a deeper hyperparameter grid search.

[ascl:1407.010] CLE: Coronal line synthesis

CLE, written in Fortran 77, synthesizes Stokes profiles of forbidden lines such as Fe XIII 1074.7nm, formed in magnetic dipole transitions under coronal conditions. The lines are assumed to be optically thin, excited by (anisotropic) photospheric radiation and thermal particle collisions.

[ascl:1904.010] CLEAR: CANDELS Ly-alpha Emission at Reionization processing pipeline and library

The CLEAR pipeline and library performs various tasks for the CANDELS Ly-alpha Emission at Reionization (CLEAR) experiment of deep Hubble grism observations of high-z galaxies. It interlaces images, models contamination of overlapping grism spectra, extracts source spectra, stacks the extracted source spectra, and estimates fits for source redshifts and emission lines.

[ascl:2310.008] clfd: Clean folded data

clfd (clean folded data) implements GPU-accelerated smart interference removal algorithms to be used on folded pulsar search and pulsar timing data. The code converts each source profile to a small set of representative features, flagging outliers in the resulting feature space. clfd further visualizes the outlier flagging process, as well as the resulting two-dimensional time-frequency mask that is applied to the clean archive. The code provides access to cleaning algorithms that were initially developed for the High Time Resolution Universe (HTRU) survey which found several pulsars.

[ascl:1602.019] CLOC: Cluster Luminosity Order-Statistic Code

CLOC computes cluster order statistics, i.e. the luminosity distribution of the Nth most luminous cluster in a population. It is flexible and requires few assumptions, allowing for parametrized variations in the initial cluster mass function and its upper and lower cutoffs, variations in the cluster age distribution, stellar evolution and dust extinction, as well as observational uncertainties in both the properties of star clusters and their underlying host galaxies. It uses Markov chain Monte Carlo methods to search parameter space to find best-fitting values for the parameters describing cluster formation and disruption, and to obtain rigorous confidence intervals on the inferred values.

[ascl:2410.014] CloudCovErr.jl: Debias and improve error bar estimates for photometry

CloudCovErr.jl debiases and improves error bar estimates for photometry on top of structured filamentary backgrounds. It first estimates the covariance matrix of the residuals from a previous photometric model and then computes corrections to the estimated flux and flux uncertainties. Using an infilling technique to estimate the background and its uncertainty dramatically improves flux and flux uncertainty estimates for stars in images of fields with significant nebulosity.

[ascl:2312.026] CloudFlex: Small-scale structure observational signatures modeling

CloudFlex models observational signatures associated with the small-scale structure of the circumgalactic medium. It populates cool gas structures in the CGM as a complex of cloudlets using a Monte Carlo method. Various parameters can be set to describe the structure of the cloudlet complexes, including cloudlet mass, density, velocity, and size. Functionality exists for generating the observational signatures of sightlines piercing these cloudlet complexes, borrowing heavily from the Trident code (ascl:1612.019).

[ascl:1103.015] Cloudy_3D: Quick Pseudo-3D Photoionization Code

We developed a new quick pseudo-3D photoionization code based on Cloudy (G. Ferland) and IDL (RSI) tools. The code runs the 1D photoionization code Cloudy multiple times, changing the input parameters at each run (e.g., inner radius, density law) according to an angular law describing the morphology of the object. A cube is then generated by interpolating the outputs of Cloudy. In each cell of the cube, the physical conditions (electron temperature and density, ionic fractions) and the emissivities of lines are determined. Associated tools (VISNEB and VELNEB_3D) are used to rotate the nebula and to compute surface brightness maps and emission line profiles, given a velocity law and taking into account the effects of thermal broadening and, optionally, turbulence. Integrated emission line profiles are computed for given aperture shapes and positions (seeing and instrumental width effects are included). The main advantage of this tool is the short time needed to compute a model (a few tens of minutes).

Cloudy_3D has been superseded by pycloudy (ascl:1304.020).

[ascl:9910.001] Cloudy: Numerical simulation of plasmas and their spectra

Cloudy is a large-scale spectral synthesis code designed to simulate fully physical conditions within an astronomical plasma and then predict the emitted spectrum. The code is freely available and is widely used in the analysis and interpretation of emission-line spectra.

[ascl:2409.008] cloudyfsps: Python interface between FSPS and Cloudy

cloudyfsps is a Python interface between FSPS (ascl:1010.043) and Cloudy (ascl:9910.001). It compiles FSPS models for use as ionizing sources (Stellar SED grids) within Cloudy and generates Cloudy input files, single-parameter or grids of parameters. It runs Cloudy models in parallel and formats the output, which is nebular continuum and nebular line emission, for FSPS input and for explorative manipulation and plotting within Python. cloudyfsps includes pre-packaged plots for BPT diagrams (NII, SII, OI, OII) with observed data from HII regions and SDSS galaxies, and also provides comparisons with MAPPINGS III (ascl:1306.008) models.

[ascl:1909.009] CLOVER: Convolutional neural network spectra identifier and kinematics predictor

CLOVER (Convnet Line-fitting Of Velocities in Emission-line Regions) is a convolutional neural network (ConvNet) trained to identify spectra with two velocity components along the line of sight and predict their kinematics. It works with Gaussian emission lines (e.g., CO) and lines with hyperfine structure (e.g., NH3). CLOVER has two prediction steps, classification and parameter prediction. For the first step, CLOVER segments the pixels in an input data cube into one of three classes: noise (i.e., no emission), one-component (emission line with single velocity component), and two-component (emission line with two velocity components). For the pixels identified as two-components in the first step, a second regression ConvNet is used to predict centroid velocity, velocity dispersion, and peak intensity for each velocity component.

[ascl:1107.014] Clumpfind: Determining Structure in Molecular Clouds

We describe an automatic, objective routine for analyzing the clumpy structure in a spectral line position-position-velocity data cube. The algorithm works by first contouring the data at a multiple of the rms noise of the observations, then searching for peaks of emission which locate the clumps, and then following them down to lower intensities. No a priori clump profile is assumed. By creating simulated data, we test the performance of the algorithm and show that a contour map most accurately depicts internal structure at a contouring interval equal to twice the rms noise of the map. Blending of clump emission leads to small errors in mass and size determinations and in severe cases can result in a number of clumps being misidentified as a single unit, flattening the measured clump mass spectrum. The algorithm is applied to two real data sets as an example of its use. The Rosette molecular cloud is a 'typical' star-forming cloud, but in the Maddalena molecular cloud high-mass star formation is completely absent. Comparison of the two clump lists generated by the algorithm show that on a one-to-one basis the clumps in the star-forming cloud have higher peak temperatures, higher average densities, and are more gravitationally bound than in the non-star-forming cloud. Collective properties of the clumps, such as temperature-size-line-width-mass relations, appear very similar, however. Contrary to the initial results reported in a previous paper (Williams & Blitz 1993), we find that the current, more thoroughly tested analysis finds no significant difference in the clump mass spectrum of the two clouds.

[ascl:1201.012] CLUMPY: A code for gamma-ray signals from dark matter structures

CLUMPY is a public code for semi-analytical calculation of the gamma-ray flux astrophysical J-factor from dark matter annihilation/decay in the Galaxy, including dark matter substructures. The core of the code is the calculation of the line of sight integral of the dark matter density squared (for annihilations) or density (for decaying dark matter). The code can be used in three modes: i) to draw skymaps from the Galactic smooth component and/or the substructure contributions, ii) to calculate the flux from a specific halo (that is not the Galactic halo, e.g. dwarf spheroidal galaxies) or iii) to perform simple statistical operations from a list of allowed DM profiles for a given object. Extragalactic contributions and other tracers of DM annihilation (e.g. positrons, antiprotons) will be included in a second release.

[ascl:1711.008] clustep: Initial conditions for galaxy cluster halo simulations

clustep generates a snapshot in GADGET-2 (ascl:0003.001) format containing a galaxy cluster halo in equilibrium; this snapshot can also be read in RAMSES (ascl:1011.007) using the DICE patch. The halo is made of a dark matter component and a gas component, with the latter representing the ICM. Each of these components follows a Dehnen density profile, with gamma=0 or gamma=1. If gamma=1, then the profile corresponds to a Hernquist profile.

[ascl:2209.004] Cluster Toolkit: Tools for analyzing galaxy clusters

Cluster Toolkit calculates weak lensing signals from galaxy clusters and cluster cosmology quantities. It offers 3D density and correlation functions, halo bias models, projected density and differential profiles, and radially averaged profiles. It also calculates halo mass functions, mass-concentration relations, Sunyaev-Zel’dovich (SZ) cluster signals, and cluster magnification. Cluster Toolkit consists of a Python front end wrapped around a well-optimized back end in C.

[ascl:1610.008] cluster-in-a-box: Statistical model of sub-millimeter emission from embedded protostellar clusters

Cluster-in-a-box provides a statistical model of sub-millimeter emission from embedded protostellar clusters and consists of three modules grouped in two scripts. The first (cluster_distribution) generates the cluster based on the number of stars, input initial mass function, spatial distribution and age distribution. The second (cluster_emission) takes an input file of observations, determines the mass-intensity correlation and generates outflow emission for all low-mass Class 0 and I sources. The output is stored as a FITS image where the flux density is determined by the desired resolution, pixel scale and cluster distance.

[ascl:1605.002] cluster-lensing: Tools for calculating properties and weak lensing profiles of galaxy clusters

The cluster-lensing package calculates properties and weak lensing profiles of galaxy clusters. Implemented in Python, it includes cluster mass-richness and mass-concentration scaling relations, and NFW halo profiles for weak lensing shear, the differential surface mass density ΔΣ(r), and for magnification, Σ(r). Optionally the calculation will include the effects of cluster miscentering offsets.

[ascl:1911.016] CLUSTEREASY: Lattice simulator for evolving interacting scalar fields in an expanding universe on parallel computing clusters

CLUSTEREASY is a parallel programming extension of the simulation program LATTICEEASY (ascl:1911.015); running the program in parallel greatly extends the range of scales and times that can be simulated. The program is particularly useful for the study of reheating and thermalization after inflation.

[ascl:2011.018] Clustering: Code for clustering single pulse events

Clustering is a modified version of the single-pulse sifting algorithm RRATrap (ascl:2011.017) combined with DBSCAN codes to cluster single pulse events.

[ascl:1905.022] ClusterPyXT: Galaxy cluster pipeline for X-ray temperature maps

ClusterPyXT (Cluster Pypeline for X-ray Temperature maps) creates X-ray temperature maps, pressure maps, surface brightness maps, and density maps from X-ray observations of galaxy clusters to show turbulence, shock fronts, nonthermal phenomena, and the overall dynamics of cluster mergers. It requires CIAO (ascl:1311.006) and CALDB. The code analyzes archival data and provides capability for integrating additional observations into the analysis. The ClusterPyXT code is general enough to analyze data from other sources, such as galaxies, active galactic nuclei, and supernovae, though minor modifications may be necessary.

[ascl:1802.003] CMacIonize: Monte Carlo photoionisation and moving-mesh radiation hydrodynamics

CMacIonize simulates the self-consistent evolution of HII regions surrounding young O and B stars, or other sources of ionizing radiation. The code combines a Monte Carlo photoionization algorithm that uses a complex mix of hydrogen, helium and several coolants in order to self-consistently solve for the ionization and temperature balance at any given time, with a standard first order hydrodynamics scheme. The code can be run as a post-processing tool to get the line emission from an existing simulation snapshot, but can also be used to run full radiation hydrodynamical simulations. Both the radiation transfer and the hydrodynamics are implemented in a general way that is independent of the grid structure that is used to discretize the system, allowing it to be run both as a standard fixed grid code and also as a moving-mesh code.

[ascl:2102.008] CMasher: Scientific colormaps for making accessible, informative plots

CMasher provides a curated collection of scientific colormaps that are perceptually uniform and sequential, developed using the viscm package (ascl:2102.007). Most of them are color-vision deficiency friendly, and they cover a wide range of color combinations to accommodate most applications. The package provides several alternatives to commonly used colormaps, such as chroma and rainforest for jet, sunburst for hot, neutral for binary, and fusion and redshift for coolwarm.
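
A minimal usage sketch, assuming CMasher's documented pattern of exposing colormaps as module attributes (importing the package also registers them with matplotlib under 'cmr.'-prefixed names):

    import numpy as np
    import matplotlib.pyplot as plt
    import cmasher as cmr

    data = np.random.rand(64, 64)
    plt.imshow(data, cmap=cmr.rainforest)   # or cmap='cmr.rainforest'
    plt.colorbar()
    plt.show()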

[ascl:1106.018] CMB B-modes from Faraday Rotation

This code is a quick and exact calculator of B-mode angular spectrum due to Faraday rotation by stochastic magnetic fields. Faraday rotation induced B-modes can provide a distinctive signature of primordial magnetic fields because of their characteristic frequency dependence and because they are only weakly damped on small scales, allowing them to dominate B-modes from other sources. By numerically solving the full CMB radiative transport equations, we study the B-mode power spectrum induced by stochastic magnetic fields that have significant power on scales smaller than the thickness of the last scattering surface. Constraints on the magnetic field energy density and inertial scale are derived from WMAP 7-year data, and are stronger than the big bang nucleosynthesis (BBN) bound for a range of parameters. Observations of the CMB polarization at smaller angular scales are crucial to provide tighter constraints or a detection.

[ascl:1106.023] CMBACT: CMB from ACTive sources

This code is based on the cosmic string model described by Pogosian and Vachaspati, as well as on the CMBFAST code (ascl:9909.004) created by Uros Seljak and Matias Zaldarriaga. It contains an integrator for the vector contribution to the CMB temperature and polarization. The code is reconfigured to make it easier to use with or without active sources. To produce inflationary CMB spectra one simply sets the string tension to zero (gmu=0.0d0); for a non-zero value of tension, only the string contribution is calculated.

An option has been added to randomize the directions of velocities of consolidated segments as they evolve in time. In the original segment model, which is still the default version (irandomv=0), each segment is given a random velocity initially, but then continues to move in a straight line for the rest of its life. The new option (irandomv=1) additionally randomizes the velocity of each segment at roughly each Hubble time. However, the merits of this new option are still under investigation; the default version (irandomv=0) is strongly recommended, since it gives reasonable unequal-time correlators. For each Fourier mode, k, the string stress-energy components are now evaluated on a time grid sufficiently fine for that k.

[ascl:1007.004] CMBEASY: An object-oriented code for the cosmic microwave background

CMBEASY is a software package for calculating the evolution of density fluctuations in the universe, most notably the Cosmic Microwave Background temperature anisotropies. It features a Markov Chain Monte Carlo driver and many routines to compute likelihoods of any given model. It is based on the CMBFAST package by Uros Seljak and Matias Zaldarriaga.

[ascl:9909.004] CMBFAST: A microwave anisotropy code

CMBFAST is the most extensively used code for computing cosmic microwave background anisotropy, polarization and matter power spectra. This package contains cosmological linear perturbation theory code to compute the evolution of various cosmological matter and radiation components, both today and at high redshift. The code has been tested over a wide range of cosmological parameters.

This code is no longer supported; please investigate using CAMB (ascl:1102.026) instead.

[ascl:2104.021] cmblensplus: Cosmic microwave background tools

cmblensplus reconstructs lensing potential, cosmic bi-refringence, and patchy reionization from cosmic microwave background anisotropies (CMB) in full and flat sky. This Fortran wrapper for Python also includes modules for delensing and bi-spectrum calculations. cmblensplus contains a module of basic routines such as analytic calculation of delensed B-mode spectrum and lensing bispectrum. Two additional main modules are for curved sky and flat sky analyses, and measure lensing, bi-refringence, patchy tau, bias-hardening, bi-spectrum, delensing and analytic reconstruction normalization. The package also contains simple Python utility and demonstration scripts. cmblensplus uses FFTW (ascl:1201.015), HEALPix (ascl:1107.018), LAPACK (ascl:2104.020), CFITSIO (ascl:1010.001), and LensPix (ascl:1102.025).

[ascl:1109.009] CMBquick: Spectrum and Bispectrum of Cosmic Microwave Background (CMB)

CMBquick is a package for Mathematica that provides tools to compute the spectrum and bispectrum of the Cosmic Microwave Background (CMB). It is unavoidably slow, but the main goal is not to design a tool for systematic exploration of cosmological parameters, but rather a toy CMB code which is transparent and easily modified. Accordingly, the name is a joke referring to the widely used packages CMBFAST, CAMB, and CMBeasy (ascl:1007.004), which should be used for serious, heavy first-order CMB computations, and which are indeed very fast.

The package CMBquick is unavoidably slow when computing the multipoles Cl, mostly because of the access time for variables, which in Mathematica is approximately ten times slower than in C or Fortran. CMBquick is thus approximately ten times slower than CAMB and cannot be used for the same purposes. It uses the same method as CAMB for computing the CMB spectrum, based on the line-of-sight approach; however, the integration is performed in a different gauge with different time steps and k-spacing. It benefits from the power of Mathematica for the numerical solution of stiff differential systems, and the transfer functions can be obtained with exquisite accuracy.

The purpose of CMBquick is thus twofold. First, it is a slow but precise and pedagogical tool which can be used to explore and modify the physical content of the linear and non-linear dynamics. Second, it is a tool which can help develop templates for non-linear computations, which could then be hard-coded once their correctness is checked. The number of equations for non-linear dynamics is quite sizable, and CMBquick makes it easy (but slow) to manipulate the non-linear equations, solve them precisely, and plot them.

[ascl:1112.011] CMBview: A Mac OS X program for viewing HEALPix-format sky map data on a sphere

CMBview is a viewer for FITS files containing HEALPix sky maps. Sky maps are projected onto a 3D sphere which can be rotated and zoomed interactively with the mouse. Features include:

  • rendering of the field of Stokes vectors
  • ray-tracing mode in which each screen pixel is projected onto the sphere for high quality rendering
  • control over sphere lighting
  • export an arbitrarily large rendered texture
  • variety of preset colormaps

[ascl:2108.023] CMC-COSMIC: Cluster Monte Carlo code

CMC-COSMIC models dense star clusters using Hénon's orbit-averaged Monte Carlo method for collisional stellar dynamics. It includes all the relevant physics for modeling dense spherical star clusters, such as strong dynamical encounters, single and binary stellar evolution, central massive black holes, three-body binary formation, and relativistic dynamics, among others. CMC is parallelized using the Message Passing Interface (MPI) and is pinned to the COSMIC (ascl:2108.022) package for binary population synthesis, which itself was originally based on a version of BSE (ascl:1303.014). COSMIC is currently a submodule within CMC, ensuring that any cluster simulations and binary populations are evolved with the same physics.

[ascl:1611.020] CMCIRSED: Far-infrared spectral energy distribution fitting for galaxies near and far

The Caitlin M. Casey Infra Red Spectral Energy Distribution model (CMCIRSED) provides a simple SED fitting technique suitable for a wide range of IR data, from sources with only three IR photometric points to sources with >10 photometric points. These SED fits produce accurate estimates of a source's integrated IR luminosity, dust temperature, and dust mass. CMCIRSED is based on a single dust temperature greybody fit linked to a MIR power law, fitted simultaneously to data across ∼5–2000 μm.
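
As a rough illustration of the greybody component at the heart of such a model (an optically thin modified blackbody sketch, not CMCIRSED itself; the linked MIR power-law term is omitted):

    import numpy as np

    h, k, c = 6.626e-34, 1.381e-23, 2.998e8     # SI constants

    def greybody(wavelength_um, T_dust, beta=1.5):
        """S_nu ~ nu^beta * B_nu(T_dust), arbitrary normalization."""
        nu = c / (wavelength_um * 1e-6)          # Hz
        b_nu = 2 * h * nu**3 / c**2 / np.expm1(h * nu / (k * T_dust))
        return nu**beta * b_nu

    wl = np.logspace(np.log10(5), np.log10(2000), 200)   # ~5-2000 micron
    sed = greybody(wl, T_dust=35.0)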

[ascl:1907.022] CMDPT: Color Magnitude Diagrams Plot Tool

CMD Plot Tool calculates and plots Color Magnitude Diagrams (CMDs) from astronomical photometric data, e.g. of a star cluster observed in two filter bandpasses. It handles multiple file formats (plain text, DAOPHOT .mag files, ACS Survey of Galactic Globular Clusters .zpt files) to generate professional and customized plots without a steep learning curve. It works “out of the box” and does not require any installation of development environments, additional libraries, or resetting of system paths. The tool is available as a single application/executable file with the source code. Sample data is also bundled for demonstration. CMD Plot Tool can also convert DAOPHOT magnitude files to CSV format.

[ascl:2008.015] CMEchaser: Coronal Mass Ejection line-of-sight occultation detector

CMEchaser looks for the occultation of background astronomical sources by CMEs to enable measurement of effects such as variations in the ionized content and the associated Faraday rotation of polarized signals along the line of sight to the background source. The code transforms a given Galactic coordinate to its concordant point in the Helioprojective, Sun-centered plane and estimates the point at which the line of sight from the source to the Earth passes through it. It then searches a user-selected database to detect whether any CMEs which launched before the observation date would have crossed the line of sight at the epoch of observation, and produces a number of useful plots. CMEchaser can be run as a flat script or installed as a package.

[ascl:1109.020] CMFGEN: Probing the Universe through Spectroscopy

CMFGEN is a radiative transfer code designed to solve the radiative transfer and statistical equilibrium equations in spherical geometry. It has been designed for application to W-R stars, O stars, and Luminous Blue Variables. CMFGEN allows fundamental parameters such as effective temperatures, stellar radii and stellar luminosities to be determined. It can provide constraints on mass-loss rates, and allows abundance determinations for a wide range of atomic species. Further, it can provide accurate energy distributions, and hence ionizing fluxes, which can be used as input for codes that model the spectra of HII regions and ring nebulae.

[ascl:1101.005] CMHOG: Code for Ideal Compressible Hydrodynamics

CMHOG (Connection Machine Higher Order Godunov) is a code for ideal compressible hydrodynamics based on the Lagrange-plus-remap version of the piecewise parabolic method (PPM) of Colella & Woodward (1984, J. Comput. Phys., 54, 174). It works in one-, two- or three-dimensional Cartesian coordinates with either an adiabatic or isothermal equation of state. A limited amount of extra physics has been added using operator splitting, including optically-thin radiative cooling, and chemistry for combustion simulations.

[ascl:1011.014] CO5BOLD: COnservative COde for the COmputation of COmpressible COnvection in a BOx of L Dimensions with l=2,3

CO5BOLD - nickname COBOLD - is the short form of "COnservative COde for the COmputation of COmpressible COnvection in a BOx of L Dimensions with l=2,3".

It is used to model solar and stellar surface convection. For solar-type stars only a small fraction of the stellar surface layers are included in the computational domain. In the case of red supergiants the computational box contains the entire star. Recently, the model range has been extended to sub-stellar objects (brown dwarfs).

CO5BOLD solves the coupled non-linear equations of compressible hydrodynamics in an external gravity field together with non-local frequency-dependent radiation transport. Operator splitting is applied to solve the equations of hydrodynamics (including gravity), the radiative energy transfer (with a long-characteristics or a short-characteristics ray scheme), and possibly additional 3D (turbulent) diffusion in individual sub steps. The 3D hydrodynamics step is further simplified with directional splitting (usually). The 1D sub steps are performed with a Roe solver, accounting for an external gravity field and an arbitrary equation of state from a table.

The radiation transport is computed with either one of three modules:

  • MSrad module: uses long characteristics. The lateral boundaries have to be periodic; top and bottom can be closed or open ("solar module").
  • LHDrad module: uses long characteristics and is restricted to an equidistant grid and open boundaries at all surfaces (old "supergiant module").
  • SHORTrad module: uses short characteristics and is restricted to an equidistant grid and open boundaries at all surfaces (new "supergiant module").

The code has been supplemented with an optional MHD version [Schaffenberger et al. 2005] that can treat magnetic fields. Modules for the formation and advection of dust are also available. The current version also contains the treatment of chemical reaction networks, mostly used for the formation of molecules [Wedemeyer-Böhm et al. 2005], and of hydrogen ionization [Leenaarts & Wedemeyer-Böhm 2005].

CO5BOLD is written in Fortran90. The parallelization is done with OpenMP directives.

[ascl:2003.008] CoastGuard: Automated timing data reduction pipeline

CoastGuard reduces Effelsberg data; it is written in Python and based on PSRCHIVE (ascl:1105.014). Though primarily designed for Effelsberg PSRIX data, it contains components sufficiently general for use with PSRCHIVE-compatible data files from other observing systems. In particular, the radio frequency interference (RFI) removal algorithm has been applied to data from the Parkes Telescope and has also been adopted by the LOFAR pulsar timing data reduction pipeline.

[ascl:1910.019] Cobaya: Bayesian analysis in cosmology

Cobaya (Code for BAYesian Analysis) provides a framework for sampling and statistical modeling and enables exploration of an arbitrary prior or posterior using a range of Monte Carlo samplers, including the advanced MCMC sampler from CosmoMC (ascl:1106.025) and the advanced nested sampler PolyChord (ascl:1502.011). The results of the sampling can be analyzed with GetDist (ascl:1910.018). It supports MPI parallelization and is highly extensible, allowing the user to define priors and likelihoods and create new parameters as functions of other parameters.

It includes interfaces to the cosmological theory codes CAMB (ascl:1102.026) and CLASS (ascl:1106.020) and likelihoods of cosmological experiments, such as Planck, Bicep-Keck, and SDSS. Automatic installers are included for those external modules. Cobaya can also be used as a wrapper for cosmological models and likelihoods, and integrated into other samplers and pipelines. The interfaces to most cosmological likelihoods are agnostic as to which theory code is used to compute the observables, which facilitates comparison between those codes. Those interfaces are also parameter-agnostic, allowing use of modified versions of theory codes and likelihoods without additional editing of Cobaya’s source.

[ascl:2002.016] Cobra: Bayesian pulsar searching

Cobra uses single pulse time series data to search for and time pulsars, performing a fully phase-coherent timing analysis. The GPU-accelerated Bayesian analysis package, written in Python, incorporates models for both isolated and accelerated systems, as well as both Keplerian and relativistic binaries. Cobra builds a model pulse train that incorporates effects such as aliasing, scattering, and binary motion together with a simple Gaussian profile, and compares this directly to the data; the software can thus combine data over multiple frequencies, epochs, or even across telescopes.

[ascl:1505.010] COBS: COnstrained B-Splines

COBS (COnstrained B-Splines), written in R, creates constrained regression smoothing splines via linear programming and sparse matrices. The method has two important features: the number and location of knots for the spline fit are established using the likelihood-based Akaike Information Criterion (rather than a heuristic procedure); and fits can be made for quantiles (e.g. 25% and 75% as well as the usual 50%) in the response variable, which is valuable when the scatter is asymmetrical or non-Gaussian. This code is useful for, for example, estimating cluster ages when there is a wide spread in stellar ages at a chosen absorption, as a standard regression line does not give an effective measure of this relationship.

[ascl:1406.017] COCO: Conversion of Celestial Coordinates

The COCO program converts star coordinates from one system to another. Both the improved IAU system, post-1976, and the old pre-1976 system are supported. COCO can perform accurate transformations between multiple coordinate systems. COCO’s user-interface is spartan but efficient and the program offers control over report resolution. All input is free-format, and defaults are provided where this is meaningful. COCO uses SLALIB (ascl:1403.025) and is distributed as part of the Starlink software collection (ascl:1110.012).

[ascl:1703.002] COCOA: Simulating Observations of Star Cluster Simulations

COCOA (Cluster simulatiOn Comparison with ObservAtions) creates idealized mock photometric observations using results from numerical simulations of star cluster evolution. COCOA is able to present the output of realistic numerical simulations of star clusters carried out using Monte Carlo or N-body codes in a way that is useful for direct comparison with photometric observations. The code can simulate optical observations from simulation snapshots in which positions and magnitudes of objects are known. The parameters for simulating the observations can be adjusted to mimic telescopes of various sizes. COCOA also has a photometry pipeline that can use standalone versions of DAOPHOT (ascl:1104.011) and ALLSTAR to produce photometric catalogs for all observed stars.

[ascl:1202.012] CoCoNuT: General relativistic hydrodynamics code with dynamical space-time evolution

CoCoNuT is a general relativistic hydrodynamics code with dynamical space-time evolution. The main aim of this numerical code is the study of several astrophysical scenarios in which general relativity can play an important role, namely the collapse of rapidly rotating stellar cores and the evolution of isolated neutron stars. The code has two flavors: CoCoA, the axisymmetric (2D) magnetized version, and CoCoNuT, the 3D non-magnetized version.

[ascl:2111.008] COCOPLOT: COlor COllapsed PLOTting software

The COCOPLOT (COlor COllapsed PLOTting) quick-look and context image code conveys spectral profile information from all of the spatial pixels in a 3D datacube as a single image using color. It can also identify and expose temporal behavior and display and highlight solar features. COCOPLOT thus aids in identifying regions of interest quickly. The software is available in Python and IDL, and can be used as a standalone package or integrated into other software.

[ascl:2306.041] COFFE: COrrelation Function Full-sky Estimator

COFFE (COrrelation Function Full-sky Estimator) computes quantities in linear perturbation theory. It computes the full-sky and flat-sky 2-point correlation function (2PCF) of galaxy number counts, taking into account all of the effects, including density, RSD, and lensing. It also determines the full-sky, flat-sky, and redshift-averaged multipoles of the 2PCF, and the flat-sky Gaussian covariance matrix of the multipoles of the 2PCF.

[ascl:2407.013] cola_halo: Parallel cosmological N-body simulator

cola_halo generates hundreds of realizations on the fly. This parallel cosmological N-body simulation code generates random Gaussian initial conditions using 2LPTIC (ascl:1201.005), evolves the N-body particles in time with COLAcode (ascl:1602.021), and finds dark-matter halos with the Friends-of-Friends code (ascl:2407.012).

[ascl:1602.021] COLAcode: COmoving Lagrangian Acceleration code

COLAcode is a serial particle mesh-based N-body code illustrating the COLA (COmoving Lagrangian Acceleration) method; it solves for Large Scale Structure (LSS) in a frame that is comoving with observers following trajectories calculated in Lagrangian Perturbation Theory (LPT). It differs from standard N-body code by trading accuracy at small-scales to gain computational speed without sacrificing accuracy at large scales. This is useful for generating large ensembles of accurate mock halo catalogs required to study galaxy clustering and weak lensing; such catalogs are needed to perform detailed error analysis for ongoing and future surveys of LSS.

[ascl:2306.047] COLASolver: Particle-Mesh N-body code

COLASolver creates Particle-Mesh (PM) N-body simulations; the code is fast and very flexible, and can compute a wide range of models. For models with complex dynamics (screened models), it provides several options, ranging from solving the dynamics exactly, to fast approximate schemes, to simply simulating the linear theory equations. Every time-consuming operation is parallelized over MPI and OpenMP. It uses a slab-based parallelization that works well for fast approximate (COLA) simulations but does not perform as well for high-resolution simulations. COLASolver can also be used as an analysis code for results from other simulations.

[ascl:2309.006] CoLFI: Cosmological Likelihood-Free Inference

CoLFI (Cosmological Likelihood-Free Inference) estimates parameters directly from observational data sets using neural density estimators (NDEs); it is a fully ANN-based framework that differs from traditional Bayesian inference. The package contains three NDEs for parameter estimation: an artificial neural network (ANN), a mixture density network (MDN), and a mixture neural network (MNN). CoLFI can learn the conditional probability density using samples generated by models, and the posterior distribution can be obtained for given observational data.

[ascl:2305.021] COLIBRI: Cosmological libraries in Python

COLIBRÌ (which roughly stands for “Cosmological Libraries”) computes cosmological quantities such as ages, distances, power spectra, and correlation functions. It supports Lambda-CDM cosmologies plus extensions including massive neutrinos, non-flat geometries, evolving dark energy (w0-wa) models, and numerical recipes for f(R) gravity. COLIBRÌ is built especially for large-scale structure purposes and can interact with the Boltzmann solvers CAMB (ascl:1102.026) and CLASS (ascl:1106.020).

[ascl:1802.014] collapse: Spherical-collapse model code

collapse calculates the spherical collapse for standard cosmological models as well as for dark energy models when the dark energy can be taken to be spatially homogeneous. The calculation is valid on sub-horizon scales; it takes a top-hat perturbation to exist in an otherwise featureless cosmos and follows its evolution into the non-linear regime, where it reaches a maximum size and then recollapses. collapse provides the user with the linear-collapse threshold (delta_c) and the virial overdensity (Delta_v) for the collapsed halo over a range of cosmic scale factors.
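
For orientation, the well-known Einstein-de Sitter limits that such calculations generalize can be checked in a couple of lines:

    import numpy as np

    delta_c = 3.0 / 20.0 * (12.0 * np.pi) ** (2.0 / 3.0)   # linear-collapse threshold, ~1.686
    Delta_v = 18.0 * np.pi ** 2                            # virial overdensity, ~177.7
    print(f"delta_c = {delta_c:.4f}, Delta_v = {Delta_v:.1f}")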

[ascl:2111.009] CoLoRe: Cosmological Lofty Realization

CoLoRe (Cosmological Lofty Realization) generates fast mock realizations of a given galaxy sample using a lognormal model or LPT for the matter density. It can simulate a variety of cosmological tracers, including photometric and spectroscopic galaxies, weak lensing, and intensity mapping. CoLoRe is a parallel C code, and its behavior is controlled primarily by the input param file.

[ascl:1508.005] ColorPro: PSF-corrected aperture-matched photometry

ColorPro automatically obtains robust colors across images of varied PSF. To correct for the flux lost in images with poorer PSF, the "detection image" is blurred to match the PSF of these other images, allowing observation of how much flux is lost. All photometry is performed in the highest resolution frame (images being aligned given WCS information in the FITS headers), and identical apertures are used in every image. Usually isophotal apertures are used, as determined by SExtractor (ascl:1010.064). Using SExSeg (ascl:1508.006), object aperture definitions can be pre-defined and object detections from different image filters can be combined automatically into a single comprehensive "segmentation map." After producing the final photometric catalog, ColorPro can automatically run BPZ (ascl:1108.011) to obtain Bayesian Photometric Redshifts.

[ascl:1501.016] Colossus: COsmology, haLO, and large-Scale StrUcture toolS

Colossus is a collection of Python modules for cosmology and dark matter halo calculations. It performs cosmological calculations with an emphasis on structure formation applications, implements general and specific density profiles, and provides a large range of models for the concentration-mass relation, including a conversion to arbitrary mass definitions.
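
A minimal usage sketch following Colossus's documented interface (halo masses are in Msun/h; the cosmology and concentration model names are among the library's built-in choices):

    from colossus.cosmology import cosmology
    from colossus.halo import concentration

    cosmology.setCosmology('planck15')
    # concentration of a 1e14 Msun/h halo at z = 0 in the 200c mass definition
    c200c = concentration.concentration(1e14, '200c', z=0.0, model='diemer19')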

[ascl:2306.034] COLT: Monte Carlo radiative transfer and simulation analysis toolkit

COLT (Cosmic Lyman-alpha Transfer) is a Monte Carlo radiative transfer (MCRT) solver for post-processing hydrodynamical simulations on arbitrary grids. These include plane-parallel slabs, spherical geometry, 3D Cartesian grids, adaptive resolution octrees, unstructured Voronoi tessellations, and secondary outputs. COLT also includes several visualization and analysis tools that exploit the underlying ray-tracing algorithms or otherwise benefit from an efficient hybrid MPI + OpenMP parallelization strategy within a flexible C++ framework.

[ascl:1606.007] COMB: Compact embedded object simulations

COMB supports the simulation on the sphere of compact objects embedded in a stochastic background process of specified power spectrum. Support is provided to add additional white noise and convolve with beam functions. Functionality to support functions defined on the sphere is provided by the S2 code (ascl:1606.008); HEALPix (ascl:1107.018) and CFITSIO (ascl:1010.001) are also required.

[ascl:1911.024] comb: Spectral line data reduction and analysis package

comb is a single-dish radio astronomy spectral line data reduction and analysis package developed at AT&T Bell Labs. It was used for data reduction for many single-dish telescopes, including the Bell Labs 7-m, NRAO 12-m, DSN network, FCRAO 14-m, Arecibo, AST/RO, SEST, BIMA, and, in 2011-2012, the Stratospheric Terahertz Observatory. A cookbook for the code is available.

[ascl:1708.024] ComEst: Completeness Estimator

ComEst calculates the completeness of astronomical CCD images saved in the FITS format. It estimates the completeness of the source finder SExtractor (ascl:1010.064) on optical and near-infrared (NIR) imaging of point sources or galaxies as a function of flux (or magnitude) directly from the image itself. It uses PyFITS (ascl:1207.009) and GalSim (ascl:1402.009) to perform the end-to-end estimation of the completeness and can also estimate the purity of the source detection.

[ascl:2210.007] COMET: Emulated predictions of large-scale structure observables

COMET (Clustering Observables Modelled by Emulated perturbation Theory) provides emulated predictions of large-scale structure observables from models that are based on perturbation theory. It substantially speeds up these analytic computations without any relevant sacrifice in accuracy, enabling an extremely efficient exploration of large-scale structure likelihoods. At its core, COMET exploits an evolution mapping approach which gives it a high degree of flexibility and allows it to cover a wide cosmology parameter space at continuous redshifts up to z ∼ 3. Among others, COMET supports parameters for the cold dark matter density (ω_c), baryon density (ω_b), scalar spectral index (n_s), Hubble expansion rate (h), and curvature density (Ω_K). The code can obtain the real-space galaxy power spectrum at one-loop order, multipoles (monopole, quadrupole, hexadecapole) of the redshift-space power spectrum at one-loop order, the linear matter power spectrum (with and without infrared resummation), Gaussian covariance matrices for the real-space power spectrum and redshift-space multipoles, and χ² values for arbitrary combinations of multipoles. COMET provides an easy-to-use interface for all of these computations.

[ascl:1404.008] Comet: Multifunction VOEvent broker

Comet is a Python implementation of the VOEvent Transport Protocol (VTP). VOEvent is the IVOA system for describing transient celestial events. Details of transients detected by many projects, including Fermi, Swift, and the Catalina Sky Survey, are currently made available as VOEvents, which is also the standard alert format for future facilities such as LSST and SKA. The core of Comet is a multifunction VOEvent broker, capable of receiving events either by subscribing to one or more remote brokers or by direct connection from authors; it can then both process those events locally and forward them to its own subscribers. In addition, Comet provides a tool for publishing VOEvents to the global VOEvent backbone.

[ascl:1402.028] Commander 2: Bayesian CMB component separation and analysis

Commander 2 is a Gibbs sampling code for joint CMB estimation and component separation. The Commander framework uses a parametrized physical model of the sky to perform statistically-rigorous analyses of multi-frequency, multi-resolution CMB data on the full and partial (flat) sky, as well as cross-correlation analyses with large-scale structure datasets.

[ascl:2106.007] CoMover: Bayesian probability of co-moving stars

CoMover determines the probability that two stars are co-moving and thus gravitationally bound. It uses the sky position, proper motion, parallax and, optionally, the heliocentric radial velocity of a host star (with their respective measurement errors), and compares them to the observables of a potential companion (with their respective measurement errors). The sky position and proper motion of the potential companion star are required; its heliocentric radial velocity and parallax are optional inputs that refine its co-moving probability.

If all kinematic observables of the host star are provided, a single spatial-kinematic model is built, consisting of a single 6-dimensional multivariate Gaussian in Galactic coordinates (XYZ) and space velocities (UVW). The observables of the potential companion are then compared to this model and a given field-stars model with Bayes' theorem by marginalizing over any missing kinematic observables of the companion star with analytical integral solutions. The field stars are modeled using a 10-component multivariate Gaussian, accurate for stars within a few hundred parsecs of the Sun. In the case where a heliocentric radial velocity is missing for the host star, the single host-star multivariate Gaussian model is replaced with a series of host star models and numerically marginalized over by taking the numerical sum of the host-star model probabilities.
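A schematic of this Bayesian comparison (a sketch only, with stand-in Gaussians; CoMover's field model is a 10-component mixture, and it marginalizes analytically over missing observables):

    import numpy as np
    from scipy.stats import multivariate_normal

    # stand-in 6D (XYZ + UVW) host-star and field-star models
    host = multivariate_normal(mean=np.zeros(6), cov=np.eye(6))
    field = multivariate_normal(mean=np.zeros(6), cov=100.0 * np.eye(6))

    candidate = np.array([0.5, -0.2, 0.1, 0.3, -0.1, 0.2])
    prior_cm = 0.5                      # prior probability of being co-moving

    L_cm, L_field = host.pdf(candidate), field.pdf(candidate)
    p_comoving = L_cm * prior_cm / (L_cm * prior_cm + L_field * (1.0 - prior_cm))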

[submitted] Compact Binary Chebyshev Polynomial Representation Ephemeris Kernel

This software transforms the tabular USNO/AE98 asteroid ephemerides into Chebyshev polynomial representations and evaluates them at an arbitrary time. The USNO/AE98 consisted of the ephemerides of fifteen of the largest asteroids and was used in The Astronomical Almanac from 2000 through 2015. These ephemerides are outdated and no longer available, but the software used to store and evaluate them is still available and provides a robust method for storing compact ephemerides of solar system bodies.

The object of the software is to provide a compact binary representation of solar system bodies with eccentric orbits, which can produce the body's position and velocity at an arbitrary instant within the ephemeris' time span. It uses a modification of the Newhall (1989) algorithm to achieve this objective. The Newhall algorithm is used to store both the Jet Propulsion Laboratory DE and the Institut de mécanique céleste et de calcul des éphémérides INPOP high-accuracy planetary ephemerides. The Newhall algorithm breaks an ephemeris into a number of time-contiguous segments, and each segment is stored as a set of Chebyshev polynomial coefficients. The length of the time segments and the maximum degree of the Chebyshev polynomials are fixed for each body. This works well for bodies with small eccentricities, but becomes inefficient for a body in a highly eccentric orbit: the segment length and maximum polynomial degree must be chosen to accommodate the strong curvature and fast motion near pericenter, while the body spends most of its time either moving slowly near apocenter or in the lower-curvature mid-anomaly portions of its orbit. The solution is to vary the time segment length and maximum polynomial degree with the body's position. The portion of the software that converts tabular ephemerides into a Chebyshev polynomial representation (CPR) performs this compaction automatically, and the portion that evaluates that representation requires only a modest increase in evaluation time.
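
The evaluation step can be sketched as follows (a hypothetical data layout for illustration only; the real software stores variable-length binary segments per coordinate):

    import numpy as np
    from numpy.polynomial import chebyshev

    # each segment: (t_start, t_end, Chebyshev coefficients for one coordinate)
    segments = [
        (0.0, 8.0, [1.00, 0.30, -0.02]),          # slow apocenter arc: long segment, low degree
        (8.0, 10.0, [0.70, 0.45, 0.08, -0.01]),   # fast pericenter arc: short segment, higher degree
    ]

    def evaluate(t):
        for t0, t1, coeffs in segments:
            if t0 <= t <= t1:
                x = 2.0 * (t - t0) / (t1 - t0) - 1.0   # map time onto the Chebyshev domain [-1, 1]
                return chebyshev.chebval(x, coeffs)
        raise ValueError("time outside the ephemeris span")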

The software also allows the user to choose the required tolerance of the CPR. Thus, if less accuracy is required, a more compact and somewhat quicker-to-evaluate CPR can be manufactured and evaluated. Numerical tests show that a fractional precision of 4e-16 may be achieved, only a factor of 4 greater than the 1e-16 precision of a 64-bit IEEE (2019) compliant floating point number.

The software is written in C and designed to work with the C edition of the Naval Observatory Vector Astrometry Software (NOVAS). The programs may be used to convert tabular ephemerides of other solar system bodies as well. The included READ.ME file provides the details of the software and how to use it.

REFERENCES

IEEE Computer Society 2019, IEEE Standard for Floating-Point Arithmetic. IEEE STD 754-2019, IEEE, pp. 1–84

Newhall, X X 1989, 'Numerical Representation of Planetary Ephemerides,' Celest. Mech., 45, 305-310

[ascl:1606.009] Companion-Finder: Planets and binary companions in time series spectra

Companion-Finder looks for planets and binary companions in time series spectra by searching for the spectral lines of stellar companions to other stars observed with high-precision radial-velocity surveys.

[ascl:2105.005] COMPAS: Rapid binary population synthesis code

COMPAS (Compact Object Mergers: Population Astrophysics & Statistics) draws properties for a binary star system from a set of initial distributions and evolves it from zero-age main sequence to the end of its life as two compact remnants. Evolution prescriptions and model parameters are easily adjustable in the software. COMPAS has been used for inference from observations of gravitational-wave mergers, Galactic neutron stars, X-ray binaries, and luminous red novae.

[ascl:2312.008] CompressedFisher: Library for testing Fisher forecasts

The CompressedFisher library tests whether Fisher forecasts using simulated components are converged. The library contains tools to compute standard Fisher estimates, estimate the level of bias due to the finite number of simulations, and compute the compressed Fisher information. Typical usage of CompressedFisher requires two ensembles of simulations: one set of simulations at the fiducial parameters (𝜃) to estimate the covariance matrix, and a second set of simulated derivatives, either in the form of realizations of the derivatives themselves or simulations evaluated at a set of points in the neighborhood of the fiducial point from which the code can estimate the derivatives.
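
A minimal sketch of the standard simulation-based Fisher estimate that the library is built to stress-test (illustrative synthetic inputs):

    import numpy as np

    rng = np.random.default_rng(0)
    fid_sims = rng.normal(size=(500, 10))       # 500 fiducial realizations of a 10-bin data vector
    derivs = rng.normal(size=(2, 10))           # derivatives of the mean w.r.t. 2 parameters

    C = np.cov(fid_sims, rowvar=False)          # covariance estimated from the fiducial ensemble
    F = derivs @ np.linalg.solve(C, derivs.T)   # Fisher matrix F_ij = d_i C^{-1} d_j
    sigma = np.sqrt(np.diag(np.linalg.inv(F)))  # forecast 1-sigma parameter errors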

[ascl:1403.015] computePk: Power spectrum computation

computePk computes the power spectrum in cosmological simulations. It is MPI-parallel and has been tested up to a 4096^3 mesh. It uses the FFTW library. It can read Gadget-3 and GOTPM outputs, and computes the dark matter component. The user may choose among NGP, CIC, and TSC for the mass assignment scheme.
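
The measurement itself can be sketched in a few lines (a toy NGP version for illustration; CIC and TSC spread each particle's mass over 2 and 3 cells per axis, and the final shell-averaging over |k| bins is omitted):

    import numpy as np

    boxsize, nmesh = 100.0, 64
    pos = np.random.rand(10000, 3) * boxsize        # particle positions

    # NGP mass assignment onto the mesh
    idx = (pos / boxsize * nmesh).astype(int) % nmesh
    rho = np.zeros((nmesh, nmesh, nmesh))
    np.add.at(rho, (idx[:, 0], idx[:, 1], idx[:, 2]), 1.0)
    delta = rho / rho.mean() - 1.0                  # overdensity field

    # FFT; P(k) = |delta_k|^2 * V / Ncells^2, before binning in |k| shells
    dk = np.fft.rfftn(delta)
    pk_grid = np.abs(dk) ** 2 * boxsize**3 / nmesh**6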

[ascl:2306.035] CONCEPT: COsmological N-body CodE in PyThon

CONCEPT (COsmological N-body CodE in PyThon) simulates cosmological structure formation. It can simulate matter particles evolving under self-gravity in an expanding background. The code offers multiple gravitational solvers and has adaptive time integration built in. In addition to particles, CONCEPT also evolves fluids at various levels of non-linearity, providing the means for the inclusion of more exotic species such as massive neutrinos, as well as for simulations consistent with general relativistic perturbation theory. Various non-standard species, such as decaying cold dark matter, are fully supported. CONCEPT includes a sophisticated initial condition generator and can output snapshots, power spectra, bispectra, and several kinds of renders.

[ascl:2306.042] CONDUCT: Electron transport coefficients of magnetized stellar plasmas

CONDUCT calculates all components of kinetic tensors in fully ionized electron-ion plasmas at arbitrary magnetic field. It employs a thermal averaging with the Fermi distribution function and can be used when electrons are partially degenerate; it provides, along with the electrical and thermal conductivities, also thermopower (thermoelectric coefficient). CONDUCT takes into account collisions of the electrons with ions and (in solid phase) charged impurities as well as quantum effects on ionic motion in the solid phase. The code's outputs are the longitudinal, transverse, and off-diagonal (Hall) components of electrical and thermal conductivity tensors as well as the components of thermoelectric tensor.

[ascl:2207.027] ConeRot: Velocity perturbations extractor

ConeRot extracts velocity perturbations in protoplanetary disks from observed line centroid maps ν∘ by creating axially symmetric centroid maps. It also derives 3D rotation curves in disk-centered cylindrical coordinates, and can estimate the disk orientation from line data alone. It approximates the unit-opacity surface of an axially symmetric disc by a series of cones whose orientations are fit to the observed velocity centroid in concentric radial domains, or regions, with the disc orientation and the rotation curve both optimized to fit ν∘ in each region. ConeRot extracts the perturbations directly from observations without strong assumptions about the underlying disk model and employs a reduced number of free parameters.

[submitted] Coniferest: Python package for active anomaly detection

Coniferest is a Python package designed for implementing anomaly detection algorithms and interactive active learning tools. The centerpiece of the package is an Isolation Forest algorithm, known for its superior scoring performance and multi-threading evaluation. This robust anomaly detection algorithm operates by constructing random decision trees.

In addition to the Isolation Forest algorithm, Coniferest offers two modified versions for active learning: AAD Forest and Pineforest. The AAD Forest modifies the Isolation Forest by reweighting its leaves based on responses from human experts, providing a faster alternative to the ad_examples package. Pineforest, developed by the SNAD team, employs a filtering algorithm that builds and dismantles trees with each new human-machine iteration step.

Coniferest provides a user-friendly interface for conducting interactive human-machine sessions, facilitating the use of these active anomaly detection algorithms. The SNAD team maintains and utilizes this package primarily for anomaly detection in the field of astronomy, with a particular focus on light-curve data from large time-domain surveys.
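
Coniferest's own API is not reproduced here; as a plain illustration of the isolation-forest scoring at its core, the equivalent step with scikit-learn looks like this (anomalies receive the most negative scores):

    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(1)
    features = np.vstack([rng.normal(0, 1, (1000, 4)),   # bulk of the light-curve features
                          rng.normal(6, 1, (5, 4))])     # a handful of outliers

    scores = IsolationForest(random_state=0).fit(features).score_samples(features)
    candidates = np.argsort(scores)[:10]                 # most anomalous objects first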

[ascl:2307.061] connect: COsmological Neural Network Emulator of CLASS using TensorFlow

connect (COsmological Neural Network Emulator of CLASS using TensorFlow) emulates cosmological parameters using neural networks. This includes both sampling of training data and training of the actual networks using the TensorFlow library. connect aids in cosmological parameter inference by immensely speeding up the process, which is achieved by substituting the cosmological Einstein-Boltzmann solver codes, needed for every evaluation of the likelihood, with a neural network with a 10² to 10³ times faster evaluation time. The code requires CLASS (ascl:1106.020) and Monte Python (ascl:1307.002) if iterative sampling is used.

[ascl:1210.011] Consistent Trees: Gravitationally Consistent Halo Catalogs and Merger Trees for Precision Cosmology

Consistent Trees generates merger trees and halo catalogs which explicitly ensure consistency of halo properties (mass, position, velocity, radius) across timesteps. It has demonstrated the ability to improve both the completeness (through detecting and inserting otherwise missing halos) and purity (through detecting and removing spurious objects) of both merger trees and halo catalogs. Consistent Trees is able to robustly measure the self-consistency of halo finders and to directly measure the uncertainties in halo positions, halo velocities, and the halo mass function for a given halo finder based on consistency between snapshots in cosmological simulations.

[ascl:9905.001] CONSKY: A Sky CCD Integration Simulation

This program addresses the question of what resources are needed to produce a continuous data record of the entire sky down to a given limiting visual magnitude. Toward this end, the program simulates a small camera/telescope or group of small camera/telescopes collecting light from a large portion of the sky. From a given stellar density derived from a Bahcall-Soneira Galaxy model, the program first converts star densities at visual magnitudes between 5 and 20 to the number of sky pixels needed to monitor each star simultaneously. From pixels, the program converts input CCD parameters to needed telescope attributes, needed data storage space, and the length of time needed to accumulate data of photometric quality for stars of each limiting visual magnitude over the whole sky. The program steps through photometric integrations one second at a time and includes the contribution from a bright background, read noise, dark current, and atmospheric absorption.

[ascl:2202.019] Contaminante: Identify blended targets in Kepler, TESS, and K2 data

contaminante helps find the contaminant transiting source in NASA's Kepler, K2 or TESS data. When hunting for transiting planets, sometimes signals come from neighboring contaminants. This package helps users identify where the transiting signal comes from in their data. The code uses pixel-level modeling of the TargetPixelFile data from NASA's astrophysics missions that are processed with the Kepler pipeline. The output of contaminante is a Python dictionary containing the source location and transit depth, and a contaminant location and depth. It can also output a figure showing where the main target is centered in all available TPFs and its phase curve, and, if a transiting source is located outside the main target, where that source is centered in all available TPFs and its phase curve.

[ascl:1609.023] contbin: Contour binning and accumulative smoothing

Contbin bins X-ray data using contours on an adaptively smoothed map. The generated bins closely follow the surface brightness, and are ideal where the surface brightness distribution is not smooth, or the spectral properties are expected to follow surface brightness. Color maps can be used instead of surface brightness maps.

[ascl:2212.025] CONTROL: Colorado Ultraviolet Transit Experiment data reduction pipeline

CONTROL (CUTE autONomous daTa ReductiOn pipeLine) produces science-quality output with a single command line and zero user intervention for CUTE (Colorado Ultraviolet Transit Experiment) data. It can be used for any single-order spectral data at any wavelength without modification. The pipeline is governed by a parameter file, which is available with this distribution. CONTROL is fully automated and works in a series of steps following standard CCD reduction techniques. It creates a reduction log to track the processes carried out and any parameters used.

[ascl:1401.006] convolve_image.pro: Common-Resolution Convolution Kernels for Space- and Ground-Based Telescopes

The IDL package convolve_image.pro transforms images between different instrumental point spread functions (PSFs). It can load an image file and corresponding kernel and return the convolved image, thus preserving the colors of the astronomical sources. Convolution kernels are available for images from Spitzer (IRAC, MIPS), Herschel (PACS, SPIRE), GALEX (FUV, NUV), WISE (W1-W4), optical PSFs (multi-Gaussian and Moffat functions), and Gaussian PSFs; they allow the study of the Spectral Energy Distribution (SED) of extended objects and preserve the characteristic SED in each pixel.

[ascl:1210.013] ConvPhot: A profile-matching algorithm for precision photometry

ConvPhot measures colors between two images having different resolutions. ConvPhot is designed to work especially for faint galaxies, accurately measuring colors in relatively crowded fields. It makes full use of the spatial and morphological information contained in the highest quality images to analyze multiwavelength data with inhomogeneous image quality.
Written in 2007, ConvPhot has been superseded by T-PHOT (ascl:1609.001).

[ascl:2306.024] COpops: Compute CO sizes and fluxes

COpops computes semi-analytically the CO flux of a disc (given its initial conditions and age) under the assumption of LTE and optically thick emission. It then runs disc population synthesis using observationally-informed initial conditions. CO flux is one of the most easily accessible observables for studying disc evolution; COpops is a faster alternative to running computationally-expensive thermochemical models for hundreds of discs and is accurate, recovering agreement within a factor of three.

[ascl:1304.022] Copter: Cosmological perturbation theory

Copter is a software package for doing calculations in cosmological perturbation theory. Specifically, Copter includes code for computing statistical observables in the large-scale structure of matter using various forms of perturbation theory, including linear theory, standard perturbation theory, renormalized perturbation theory, and many others. Copter is written in C++ and makes use of the Boost C++ library headers.

[ascl:1112.012] CORA: Emission Line Fitting with Maximum Likelihood

CORA analyzes emission line spectra with low count numbers and fits them to a line using the maximum likelihood technique. CORA uses a rigorous application of Poisson statistics. From the assumption of Poissonian noise, the software derives the probability for a model of the emission line spectrum to represent the measured spectrum. The likelihood function is used as a criterion for optimizing the parameters of the theoretical spectrum and a fixed point equation is derived allowing an efficient way to obtain line fluxes. CORA has been applied to an X-ray spectrum with the Low Energy Transmission Grating Spectrometer (LETGS) on board the Chandra observatory.
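
A schematic of this kind of Poisson maximum-likelihood line fit (a generic Cash-style statistic for a Gaussian line plus constant background; this is a sketch, not CORA's implementation):

    import numpy as np
    from scipy.optimize import minimize

    channels = np.arange(100)
    truth = 0.5 + 3.0 * np.exp(-0.5 * ((channels - 50) / 3.0) ** 2)
    counts = np.random.poisson(truth)                  # low-count spectrum

    def neg2lnL(params):
        bkg, amp, center, width = params
        model = bkg + amp * np.exp(-0.5 * ((channels - center) / width) ** 2)
        model = np.clip(model, 1e-9, None)             # keep the log well-defined
        return 2.0 * np.sum(model - counts * np.log(model))   # Poisson -2 ln L (up to a constant)

    fit = minimize(neg2lnL, x0=[0.4, 2.0, 49.0, 4.0], method='Nelder-Mead')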

[ascl:1603.002] CORBITS: Efficient Geometric Probabilities of Multi-Transiting Exoplanetary Systems

CORBITS (Computed Occurrence of Revolving Bodies for the Investigation of Transiting Systems) computes the probability that any particular group of exoplanets can be observed to transit from a collection of conjectured exoplanets orbiting a star. The efficient, semi-analytical code computes the areas bounded by circular curves on the surface of a sphere by applying elementary differential geometry. CORBITS is faster than previous algorithms, based on comparisons with Monte Carlo simulations, and tests show that it is extremely accurate even for highly eccentric planets.

[ascl:1406.003] CoREAS: CORSIKA-based Radio Emission from Air Showers simulator

CoREAS is a Monte Carlo code for the simulation of radio emission from extensive air showers; it is an update of and successor code to REAS3 (ascl:1107.009). It implements the endpoint formalism for the calculation of electromagnetic radiation directly in CORSIKA (ascl:1202.006). As such, it is parameter-free, makes no assumptions on the emission mechanism for the radio signals, and takes into account the complete complexity of the electron and positron distributions as simulated by CORSIKA.

[ascl:1702.002] corner.py: Corner plots

corner.py uses matplotlib to visualize multidimensional samples using a scatterplot matrix. In these visualizations, each one- and two-dimensional projection of the sample is plotted to reveal covariances. corner.py was originally conceived to display the results of Markov Chain Monte Carlo simulations and the defaults are chosen with this application in mind but it can be used for displaying many qualitatively different samples. An earlier version of corner.py was known as triangle.py.
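
A minimal usage sketch following corner.py's documented interface:

    import numpy as np
    import corner

    samples = np.random.randn(5000, 3)     # e.g., a flattened MCMC chain
    fig = corner.corner(samples, labels=["a", "b", "c"], show_titles=True)
    fig.savefig("corner.png")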

[ascl:2405.018] coronagraph_noise: Coronagraph noise modeling routines

coronagraph_noise simulates coronagraph noise. Written in IDL, the code includes a generalized coronagraph routine and simulators for the WFIRST Shaped Pupil Coronagraph in both spectroscopy and imaging modes. Functions available include stellar and planetary flux functions, planet photon and zodiacal light count rates, planet-star flux ratio, and clock-induced-charge count rate, among others. coronagraph_noise also includes routines to smooth a plot by convolving with a Gaussian profile, to convolve a spectrum with a given instrument resolution, and to take a spectrum that is specified at high spectral resolution and degrade it to a lower resolution. A Python implementation of coronagraph_noise, coronagraph (ascl:2405.019), is also available.
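
The resolution-degradation step can be sketched generically (a numpy/scipy illustration, not the IDL routine itself):

    import numpy as np
    from scipy.ndimage import gaussian_filter1d

    wl = np.linspace(0.4, 1.0, 6000)                   # wavelength grid (microns)
    spectrum = 1.0 + 0.1 * np.sin(200.0 * wl)          # stand-in high-resolution spectrum

    R = 70.0                                           # target resolving power lambda/dlambda
    fwhm = wl.mean() / R                               # instrument FWHM in wavelength units
    sigma_pix = fwhm / (2.3548 * np.diff(wl).mean())   # Gaussian sigma in pixels
    low_res = gaussian_filter1d(spectrum, sigma_pix)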

[ascl:2405.019] coronagraph: Python noise model for directly imaging exoplanets

coronagraph provides a Python noise model for directly imaging exoplanets with a coronagraph-equipped telescope. Based on the original IDL code for this coronagraph model, coronagraph_noise (ascl:2405.018), the Python version has been expanded in a few key ways. Most notably, the Telescope, Planet, and Star objects used for reflected-light coronagraph noise modeling can now be used for transmission and emission spectroscopy noise modeling, making this a general-purpose exoplanet noise model for many different types of observations.

[ascl:1711.005] correlcalc: Two-point correlation function from redshift surveys

correlcalc calculates the two-point correlation function (2pCF) of galaxies/quasars using redshift surveys. It can be used for any assumed geometry or cosmological model. Using BallTree algorithms to reduce the computational effort for large datasets, it is a parallelized code suitable for running on clusters as well as personal computers. It takes redshift (z), Right Ascension (RA), and Declination (DEC) data of galaxy and random catalogs as inputs in the form of ASCII or FITS files. If a random catalog is not provided, it generates one of the desired size based on the input redshift distribution and a mangle polygon file (in .ply format) describing the survey geometry. It also calculates different realizations of the (3D) anisotropic 2pCF. Optionally, it makes HEALPix maps of the survey for visualization.
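
At the heart of any such code is the Landy-Szalay estimator, sketched here with hypothetical, already-normalized pair counts (correlcalc obtains these via BallTree searches over the data and random catalogs):

    import numpy as np

    # normalized pair counts per separation bin (hypothetical values)
    DD = np.array([1.20e-4, 4.80e-4, 1.90e-3])   # data-data
    DR = np.array([1.50e-4, 6.00e-4, 2.30e-3])   # data-random
    RR = np.array([1.80e-4, 7.20e-4, 2.75e-3])   # random-random

    xi = (DD - 2.0 * DR + RR) / RR               # Landy-Szalay 2pCF estimate per bin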

[ascl:1211.004] CORRFIT: Cross-Correlation Routines

CORRFIT is a set of routines that use the cross-correlation method to extract parameters of the line-of-sight velocity distribution from galactic spectra and stellar templates observed on the same system. It works best when the broadening function is well sampled at the spectral resolution used (e.g. 200 km/s dispersion at 2 Angstrom resolution). Results become increasingly sensitive to the spectral match between galaxy and template if the broadening function is not well sampled. CORRFIT does not work well for dispersions less than the velocity sampling interval ('delta' in the code) unless the template is perfect.

[ascl:1703.003] Corrfunc: Blazing fast correlation functions on the CPU

Corrfunc is a suite of high-performance clustering routines. The code can compute a variety of spatial correlation functions in Cartesian geometry, as well as Landy-Szalay calculations for spatial and angular correlation functions on a spherical geometry, and is useful for, for example, exploring the galaxy-halo connection. The code is written in C and can be used on the command line, through the supplied Python extensions, or through the C API.
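
A pair-counting sketch using the Python extensions; the argument names follow the Corrfunc documentation, but check them against your installed version:

    import numpy as np
    from Corrfunc.theory import DD

    # Uniform random points in a periodic box
    boxsize, npts = 250.0, 100_000
    rng = np.random.default_rng(1)
    x, y, z = rng.uniform(0.0, boxsize, size=(3, npts))

    # Autocorrelation pair counts in 14 radial bins; the result is a
    # structured array with one row per bin (rmin, rmax, npairs, ...)
    rbins = np.linspace(0.1, 20.0, 15)
    results = DD(autocorr=1, nthreads=4, binfile=rbins,
                 X1=x, Y1=y, Z1=z, periodic=True, boxsize=boxsize)
    print(results["npairs"])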

[ascl:1202.006] CORSIKA: An Air Shower Simulation Program

CORSIKA (COsmic Ray Simulations for KAscade) is a program for detailed simulation of extensive air showers initiated by high energy cosmic ray particles. Protons, light nuclei up to iron, photons, and many other particles may be treated as primaries. The particles are tracked through the atmosphere until they undergo reactions with the air nuclei or, in the case of unstable secondaries, decay. The hadronic interactions at high energies may be described by several reaction models. Hadronic interactions at lower energies are also modeled, and in particle decays all decay branches down to the 1% level are taken into account. Options for the generation of Cherenkov radiation and neutrinos exist. CORSIKA may be used up to and beyond the highest energies of 100 EeV.

[ascl:1712.008] CosApps: Simulate gravitational lensing through ray tracing and shear calculation

Cosmology Applications (CosApps) provides tools to simulate gravitational lensing using two different techniques, ray tracing and shear calculation. The tool ray_trace_ellipse calculates deflection angles on a grid for light passing a deflecting mass distribution. Using MPI, ray_trace_ellipse may calculate deflections in parallel across network-connected computers, such as a cluster. The program physcalc calculates the gravitational lensing shear using the relationship of convergence and shear, described by a set of coupled partial differential equations.

[ascl:1010.040] Cosmic String Simulations

Complicated cosmic string loops will fragment until they reach simple, non-intersecting ("stable") configurations. Through extensive numerical study, these attractor loop shapes are characterized, including their length, velocity, kink, and cusp distributions. An initial loop containing M harmonic modes will, on average, split into 3M stable loops. These stable loops are approximately described by the degenerate kinky loop, which is planar and rectangular, independently of the number of modes on the initial loop. This is confirmed by an analytic construction of a stable family of perturbed degenerate kinky loops. The average stable loop is also found to have a 40% chance of containing a cusp. This new analytic scheme explicitly solves the string constraint equations.

[ascl:2107.023] cosmic_variance: Cosmic variance calculator

cosmic_variance calculates the cosmic variance during the Epoch of Reionization (EoR) for the UV Luminosity Function (UV LF), Stellar Mass Function (SMF), and Halo Mass Function (HMF). The three functions in the package return the cosmic variance expressed as a percentage. The code is written in Python, and simple examples that show how to use the functions are provided.

[ascl:2108.018] Cosmic-CoNN: Cosmic ray detection toolkit

Cosmic-CoNN detects cosmic rays (CR) in CCD-captured astronomical images. It offers a PyTorch deep-learning framework to train generic, robust CR detection models for ground- and space-based imaging data as well as spectroscopic observations. Cosmic-CoNN also includes a suite of tools, including console commands, a web app, and Python APIs, to make deep-learning models easily accessible.

[ascl:2207.004] cosmic-kite: Auto-encoding the Cosmic Microwave Background

Cosmic-kite performs a fast estimation of the TT Cosmic Microwave Background (CMB) power spectra corresponding to a set of cosmological parameters; it can also estimate the maximum-likelihood cosmological parameters from a power spectrum. This software is an auto-encoder that was trained and calibrated using power spectra from random cosmologies computed with the CAMB code (ascl:1102.026).

[ascl:2108.022] COSMIC: Compact Object Synthesis and Monte Carlo Investigation Code

COSMIC (Compact Object Synthesis and Monte Carlo Investigation Code) generates synthetic populations with an adaptive size based on how the shape of the binary parameter distributions changes as the number of simulated binaries increases. It implements stellar evolution using SSE (ascl:1303.015) and binary interactions using BSE (ascl:1303.014). COSMIC can also be used to simulate a single binary at a time, a list of multiple binaries, a grid of binaries, or a fixed population size, as well as to restart binaries at a midpoint in their evolution. The code is included in CMC-COSMIC (ascl:2108.023).

[ascl:1010.030] CosmicEmu: Cosmic Emulator for the Dark Matter Power Spectrum

Many of the most exciting questions in astrophysics and cosmology, including the majority of observational probes of dark energy, rely on an understanding of the nonlinear regime of structure formation. In order to fully exploit the information available from this regime and to extract cosmological constraints, accurate theoretical predictions are needed. Currently such predictions can only be obtained from costly, precision numerical simulations. The "Coyote Universe" simulation suite comprises nearly 1,000 N-body simulations at different force and mass resolutions, spanning 38 wCDM cosmologies. This large simulation suite enabled the construction of a prediction scheme, or emulator, for the nonlinear matter power spectrum accurate at the percent level out to k~1 h/Mpc. This is the first cosmic emulator for the dark matter power spectrum.

[submitted] CosmicEmu: High Precision Emulator for the Nonlinear Matter Power Spectrum

Modern cosmological surveys are delivering datasets characterized by unprecedented quality and statistical completeness. In order to maximally extract cosmological information from these observations, matching theoretical predictions are needed. In the nonlinear regime of structure formation, cosmological simulations are the primary means of obtaining the required information but the computational cost of sufficiently resolved large-volume simulations makes it prohibitive to run very large ensembles. Nevertheless, precision emulators built on a tractable number of high-quality simulations can be used to build very fast prediction schemes to enable a variety of cosmological inference studies. The "Mira-Titan Universe" simulation suite covers the standard six cosmological parameters and, in addition, includes massive neutrinos and a dynamical dark energy equation of state. It is based on 111 cosmological simulations, each covering a (2.1Gpc)^3 volume and evolving 3200^3 particles, and augments these higher-resolution simulations with an additional set of 1776 lower-resolution simulations and TimeRG perturbation theory results to cover scales straddling the linear to mildly nonlinear regimes. The emulator built on this suite, the CosmicEmu, provides predictions at the two to three percent level of accuracy over a wide range of cosmological parameters. Presented in: https://arxiv.org/abs/2207.12345.

[ascl:1304.006] CosmicEmuLog: Cosmological Power Spectra Emulator

CosmicEmuLog is a simple Python emulator for cosmological power spectra. In addition to the power spectrum of the conventional overdensity field, it emulates the power spectra of the log-density as well as the Gaussianized density. It models fluctuations in the power spectrum at each k as a linear combination of contributions from fluctuations in each cosmological parameter. The data it uses for emulation consist of ASCII files of the mean power spectrum, together with derivatives of the power spectrum with respect to the five cosmological parameters in the space spanned by the Coyote Universe suite. This data can also be used for Fisher matrix analysis. At present, CosmicEmuLog is restricted to redshift 0.

[ascl:2307.027] CosmicFish: Cosmology forecasting tool

CosmicFish obtains expected bounds on cosmological parameters for a wide range of models and observables for cosmological forecasting. The package includes a Fortran library to produce Fisher matrices, a Python library that performs operations on the produced Fisher matrices, and a full set of plotting utilities. It works with many codes, including CAMB (ascl:1102.026) and MGCAMB (ascl:1106.013), and can interface with any Boltzmann solver. The user can choose within a wide range of possible cosmological observables, including cosmic microwave background, weak lensing tomography, galaxy clustering, and redshift drift. CosmicFish is easy to customize; it provides a flexible package system and users can produce their own analyses and plotting pipelines following the default Python apps.

[ascl:1601.008] CosmicPy: Interactive cosmology computations

CosmicPy performs simple and interactive cosmology computations for forecasting cosmological parameter constraints; it computes tomographic and 3D Spherical Fourier-Bessel power spectra as well as Fisher matrices for galaxy clustering. Written in Python, it relies on a fast C++ implementation of Fourier-Bessel related computations, and requires NumPy, SciPy, and Matplotlib.

[ascl:9910.004] COSMICS: Cosmological initial conditions and microwave anisotropy codes

COSMICS is a package of Fortran programs useful for computing transfer functions and microwave background anisotropy for cosmological models, and for generating Gaussian random initial conditions for nonlinear structure formation simulations of such models. Four programs are provided: linger_con and linger_syn integrate the linearized equations of general relativity, matter, and radiation in conformal Newtonian and synchronous gauge, respectively; deltat integrates the photon transfer functions computed by the linger codes to produce photon anisotropy power spectra; and grafic tabulates normalized matter power spectra and produces constrained or unconstrained samples of the matter density field.

[ascl:1505.013] cosmoabc: Likelihood-free inference for cosmology

Approximate Bayesian Computation (ABC) enables parameter inference for complex physical systems in cases where the true likelihood function is unknown, unavailable, or computationally too expensive. It relies on the forward simulation of mock data and comparison between observed and synthetic catalogs. cosmoabc is a Python ABC sampler featuring a Population Monte Carlo variation of the original ABC algorithm, which uses an adaptive importance sampling scheme. The code can be coupled to an external simulator to allow incorporation of arbitrary distance and prior functions. When coupled with the numcosmo library, it has been used to estimate posterior probability distributions over cosmological parameters based on measurements of galaxy cluster number counts without computing the likelihood function.
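
The core idea fits in a few lines; the rejection loop below illustrates plain ABC only and is not cosmoabc's interface, which uses the adaptive Population Monte Carlo variant described above:

    import numpy as np

    def abc_rejection(observed, simulate, distance, draw_prior,
                      n_keep=1000, epsilon=0.1):
        # Keep parameter draws whose forward-simulated mock data lie
        # within epsilon of the observed catalog.
        accepted = []
        while len(accepted) < n_keep:
            theta = draw_prior()
            mock = simulate(theta)
            if distance(mock, observed) < epsilon:
                accepted.append(theta)
        return np.array(accepted)

    # Toy example: infer the mean of a Gaussian from its sample mean
    post = abc_rejection(
        observed=1.3,
        simulate=lambda t: np.random.normal(t, 1.0, 100).mean(),
        distance=lambda a, b: abs(a - b),
        draw_prior=lambda: np.random.uniform(-5.0, 5.0))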

[ascl:1511.019] CosmoBolognaLib: Open source C++ libraries for cosmological calculations

CosmoBolognaLib contains numerical libraries for cosmological calculations; written in C++, it is intended to define a common numerical environment for cosmological investigations of the large-scale structure of the Universe. The software aids in handling real and simulated astronomical catalogs by measuring one-point, two-point and three-point statistics in configuration space and performing cosmological analyses. These open source libraries can be included in either C++ or Python codes.

[ascl:2006.005] CosmoCov: Configuration space covariances for projected galaxy 2-point statistics

CosmoCov computes configuration space covariances for projected galaxy 2-point statistics based on the CosmoLike (ascl:2006.006) framework. The package provides a flat sky covariance module, computed with the 2D-FFTLog (ascl:2006.004) algorithm, and a curved sky covariance module.

[ascl:2009.020] cosmoFns: Functions for observational cosmology

cosmoFns computes distances, times, luminosities, and other quantities useful in observational cosmology, including molecular line observations. Written in R and coded for a flat universe, it contains functions for rest-frame line and continuum luminosities, cosmic lookback time given z and cosmological parameters, and differential comoving volume. cosmoFns also computes comoving, luminosity, and angular diameter distances and molecular mass, among other quantities.

[ascl:2007.023] CosmoGRaPH: Cosmological General Relativity and (Perfect fluid | Particle) Hydrodynamics

CosmoGRaPH explores cosmological problems in a fully general relativistic setting. Written in C++, it implements various novel methods for numerically solving the Einstein field equations, including an N-body solver, full AMR capabilities via SAMRAI, and raytracing.

[ascl:2306.032] CosmoGraphNet: Cosmological parameters and galaxy power spectrum from galaxy catalogs

CosmoGraphNet infers cosmological parameters or the galaxy power spectrum. It creates a graph from a galaxy catalog with information on the 3D positions and intrinsic properties of the galaxies. A graph neural network is then applied to predict the cosmological parameters or the galaxy power spectrum.

[ascl:1303.003] CosmoHammer: Cosmological parameter estimation with the MCMC Hammer

CosmoHammer is a Python framework for the estimation of cosmological parameters. The software embeds the Python package emcee by Foreman-Mackey et al. (2012) and lets the user plug in modules for the computation of any desired likelihood. The major goal of the software is to reduce the complexity of extending or replacing the existing computation with modules that fit the user's needs, and to make it easy to use large-scale computing environments. CosmoHammer can efficiently distribute the MCMC sampling over thousands of cores on modern cloud computing infrastructure.
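
For orientation, the snippet below shows the bare emcee call pattern that CosmoHammer builds its module chain around; it uses emcee's public API with a toy likelihood, not CosmoHammer's own classes:

    import numpy as np
    import emcee

    # Toy Gaussian log-probability standing in for a plugged-in
    # likelihood module
    def log_prob(theta):
        return -0.5 * np.sum(theta ** 2)

    ndim, nwalkers = 2, 32
    p0 = np.random.randn(nwalkers, ndim)
    sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob)
    sampler.run_mcmc(p0, 1000)
    chain = sampler.get_chain(flat=True)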

[ascl:2311.012] CosmoLattice: Lattice simulator of scalar and gauge field dynamics in an expanding universe

CosmoLattice performs lattice simulations of field dynamics in an expanding universe. The code can simulate the dynamics of interacting scalar field theories, Abelian U(1) gauge theories, and non-Abelian SU(2) gauge theories, either in flat spacetime or an expanding FLRW background, including the case of self-consistent expansion sourced by the fields themselves. It can also compute gravitational waves sourced by U(1) Abelian Gauge fields. The CosmoLattice platform can implement any system of dynamical equations suitable for discretization on a lattice, as it introduces its own language describing fields and operations between them, and hence can implement new libraries to solve arbitrary field problems (related or not to cosmology).

[ascl:2312.007] CosmoLED: Cosmo code for Large Extra Dimension (LED) black holes

CosmoLED computes Hawking evaporation from black holes and sets constraints on the fraction of dark matter in the form of black holes. Based on ExoCLASS (ascl:1106.020), the code provides a DarkAges_LED module and C codes in class_LED to compute the evolution and energy deposition functions from LED black holes. Though CosmoLED is designed for large extra dimension black holes, it can also be used to study 4D black holes.

[ascl:2006.006] CosmoLike: Cosmological Likelihood analyses

CosmoLike analyzes cosmological data sets and forecasts future missions. It has been used in the analysis of the Dark Energy Survey and to optimize the Large Synoptic Survey Telescope and the Wide-Field Infrared Survey Telescope, and is useful for innovative theory projects that test new concepts and methods to enhance the constraining power of cosmological analyses.

[ascl:2009.017] CosmoloPy: Cosmology package for Python

CosmoloPy is a suite of cosmology routines built on NumPy/SciPy. Its capabilities include various cosmological densities, distance measures, and galaxy luminosity functions (Schechter functions). It also offers pre-defined sets of cosmological parameters (e.g., from WMAP), conversion in and out of the AB magnitude system, and the reionization of the IGM. Functions take cosmological parameters (which can be numpy arrays) as keywords and ignore any extra keywords, making it possible to build a dictionary of cosmological parameters and pass it to any function.
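
This keyword convention lets one parameter dictionary be reused across functions, as in the sketch below; parameter and function names follow the CosmoloPy documentation, but verify them against your installed version:

    import cosmolopy.distance as cd

    # One dictionary of cosmological parameters, passed everywhere;
    # functions ignore any keywords they do not need.
    cosmo = {'omega_M_0': 0.3, 'omega_lambda_0': 0.7,
             'omega_k_0': 0.0, 'h': 0.7}
    d_c = cd.comoving_distance(1.0, **cosmo)           # Mpc
    d_a = cd.angular_diameter_distance(1.0, **cosmo)   # Mpc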

[ascl:1110.024] CosmoMC SNLS: CosmoMC Plug-in to Analyze SNLS3 SN Data

This module is a plug-in for CosmoMC and requires that software. Though programmed to analyze SNLS3 SN data, it can also be used for other SN data provided the inputs are put in the right form. In fact, this is probably a good idea, since the default treatment that comes with CosmoMC is flawed. Note that this requires fitting two additional SN nuisance parameters (alpha and beta), but this is significantly faster than attempting to marginalize over them internally.

[ascl:1106.025] CosmoMC: Cosmological MonteCarlo

We present a fast Markov Chain Monte-Carlo exploration of cosmological parameter space. We perform a joint analysis of results from recent CMB experiments and provide parameter constraints, including sigma_8, from the CMB independent of other data. We next combine data from the CMB, HST Key Project, 2dF galaxy redshift survey, supernovae Ia and big-bang nucleosynthesis. The Monte Carlo method allows the rapid investigation of a large number of parameters, and we present results from 6 and 9 parameter analyses of flat models, and an 11 parameter analysis of non-flat models. Our results include constraints on the neutrino mass (m_nu < 0.3eV), equation of state of the dark energy, and the tensor amplitude, as well as demonstrating the effect of additional parameters on the base parameter constraints. In a series of appendices we describe the many uses of importance sampling, including computing results from new data and accuracy correction of results generated from an approximate method. We also discuss the different ways of converting parameter samples to parameter constraints, the effect of the prior, assess the goodness of fit and consistency, and describe the use of analytic marginalization over normalization parameters.

[ascl:1110.019] CosmoNest: Cosmological Nested Sampling

CosmoNest is an algorithm for cosmological model selection. Given a model, defined by a set of parameters to be varied and their prior ranges, and data, the algorithm computes the evidence (the marginalized likelihood of the model in light of the data). The Bayes factor, which is proportional to the relative evidence of two models, can then be used for model comparison, i.e. to decide whether a model is an adequate description of data, or whether the data require a more complex model.

For convenience, CosmoNest, programmed in Fortran, is presented here as an optional add-on to CosmoMC (ascl:1106.025), which is widely used by the cosmological community to perform parameter fitting within a model using a Markov-Chain Monte-Carlo (MCMC) engine. For this reason it can be run very easily by anyone who is able to compile and run CosmoMC. CosmoNest implements a different sampling strategy, geared for computing the evidence very accurately and efficiently. It also provides posteriors for parameter fitting as a by-product.

[ascl:2001.010] CosMOPED: Compressed Planck likelihood

CosMOPED (Cosmological MOPED) uses the MOPED (Multiple/Massively Optimised Parameter Estimation and Data compression) compression scheme to compress the Planck power spectrum. This convenient and lightweight compressed likelihood code is implemented in Python. To compute the likelihood for the LambdaCDM model using CosMOPED, one needs only six compression vectors, one for each parameter, and six numbers from compressing the Planck data using the six compression vectors. Using these, the likelihood of a theory power spectrum given the Planck data is the product of six one-dimensional Gaussians. Extended cosmological models require computing extra compression vectors.
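
Schematically, the compressed likelihood then reduces to the sketch below; this illustrates the MOPED scheme itself, not CosMOPED's actual function names:

    import numpy as np

    def moped_loglike(cl_theory, compression_vectors, y_data):
        # Compress the theory spectrum with the six vectors; MOPED
        # vectors are built so each compressed coefficient has unit
        # variance, so the log-likelihood is a sum of six 1D Gaussians.
        y_theory = compression_vectors @ cl_theory  # shape (6,)
        return -0.5 * np.sum((y_theory - y_data) ** 2)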

[ascl:1408.018] CosmoPhotoz: Photometric redshift estimation using generalized linear models

CosmoPhotoz determines photometric redshifts from galaxies using their magnitudes. The method uses generalized linear models that reproduce the physical aspects of the output distribution. The code can adopt gamma or inverse Gaussian families, from either a frequentist or a Bayesian perspective. A set of publicly available libraries and a web application are available. This software allows users to apply a set of GLMs to their own photometric catalogs and generates publication-quality plots with no involvement from the user. The code additionally provides a Shiny application with a simple user interface.
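
The underlying idea, fitting a gamma-family GLM to magnitudes and predicting redshift, can be sketched generically with statsmodels; this illustrates the method only and is not CosmoPhotoz's interface:

    import numpy as np
    import statsmodels.api as sm

    # Toy catalog: three magnitudes per galaxy, positive redshifts
    rng = np.random.default_rng(0)
    mags = rng.normal(20.0, 1.0, size=(500, 3))
    z = np.exp(0.1 * mags.sum(axis=1) - 5.5) * rng.gamma(50.0, 1 / 50.0, 500)

    # Gamma GLM with a log link: E[z] = exp(X beta)
    X = sm.add_constant(mags)
    glm = sm.GLM(z, X, family=sm.families.Gamma(link=sm.families.links.Log()))
    fit = glm.fit()
    z_pred = fit.predict(X)  # photometric redshift estimates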

[ascl:1212.006] CosmoPMC: Cosmology sampling with Population Monte Carlo

CosmoPMC is a Monte Carlo sampling method to explore the likelihood of various cosmological probes. The sampling engine, implemented with the package pmclib, is Population Monte Carlo (PMC), a novel technique to sample from the posterior. PMC is an adaptive importance sampling method which iteratively improves the proposal to approximate the posterior. This code has been introduced, tested, and applied to various cosmology data sets.

[ascl:2405.025] CosmoPower: Machine learning-accelerated Bayesian inference

CosmoPower develops Bayesian inference pipelines that leverage machine learning to solve inverse problems in science. While the emphasis is on building algorithms to accelerate Bayesian inference in cosmology, the implemented methods allow for their application across a wide range of scientific fields. CosmoPower provides neural network emulators of matter and Cosmic Microwave Background power spectra, which can replace Boltzmann codes such as CAMB (ascl:1102.026) or CLASS (ascl:1106.020) in cosmological inference pipelines, to source the power spectra needed for two-point statistics analyses. This provides orders-of-magnitude acceleration to the inference pipeline and integrates naturally with efficient techniques for sampling very high-dimensional parameter spaces.

[ascl:1304.017] CosmoRec: Cosmological Recombination code

CosmoRec solves the recombination problem including recombinations to highly excited states, corrections to the 2s-1s two-photon channel, HI Lyn-feedback, n>2 two-photon profile corrections, and n≥2 Raman-processes. The code can solve the radiative transfer equation of the Lyman-series photon field to obtain the required modifications to the rate equations of the resolved levels, and handles electron scattering, the effect of HeI intercombination transitions, and absorption of helium photons by hydrogen. It also allows accounting for dark matter annihilation and optionally includes detailed helium radiative transfer effects.

[ascl:1705.001] COSMOS: Carnegie Observatories System for MultiObject Spectroscopy

COSMOS (Carnegie Observatories System for MultiObject Spectroscopy) reduces multislit spectra obtained with the IMACS and LDSS3 spectrographs on the Magellan Telescopes. It can be used for the quick-look analysis of data at the telescope as well as for pipeline reduction of large data sets. COSMOS is based on a precise optical model of the spectrographs, which allows (after alignment and calibration) an accurate prediction of the location of spectral features. This eliminates the line search procedure which is fundamental to many spectral reduction programs, and allows a robust data pipeline to be run in an almost fully automatic mode, allowing large amounts of data to be reduced with minimal intervention.

[ascl:2401.005] CosmosCanvas: Useful color maps for different astrophysical properties

CosmosCanvas creates perception-based color maps for different astrophysical properties such as spectral index and velocity fields. Three tutorials demonstrate how to use Python code to exploit and adjust the boundaries in these divergent colour schemes. Designed to work with human physiology, each tutorial offers at least one default scheme that is monotonic in value, both as a redundancy for supporting data information and as an aid for colour-blind viewers. This library relies on Gilles Ferrand's colourspace library.

[ascl:1409.012] CosmoSIS: Cosmological parameter estimation

CosmoSIS is a cosmological parameter estimation code. It structures cosmological parameter estimation to ease re-usability, debugging, verifiability, and code sharing in the form of calculation modules. Written in Python, CosmoSIS consolidates and connects existing code for predicting cosmic observables and maps out experimental likelihoods with a range of different techniques.

[ascl:1701.004] CosmoSlik: Cosmology sampler of likelihoods

CosmoSlik quickly puts together, runs, and analyzes an MCMC chain for analysis of cosmological data. It is highly modular and comes with plugins for CAMB (ascl:1102.026), CLASS (ascl:1106.020), the Planck likelihood, the South Pole Telescope likelihood, other cosmological likelihoods, emcee (ascl:1303.002), and more. It offers ease-of-use, flexibility, and modularity.

[ascl:1311.009] CosmoTherm: Thermalization code

CosmoTherm allows precise computation of CMB spectral distortions caused by energy release in the early Universe. Different energy-release scenarios (e.g., decaying or annihilating particles) are implemented using the Green's function of the cosmological thermalization problem, allowing fast computation of the distortion signal. The full thermalization problem can be solved on a case-by-case basis for a wide range of energy-release scenarios using the full PDE solver of CosmoTherm. A simple Monte-Carlo toolkit is included for parameter estimation and forecasts using the Green's function method.

[ascl:1504.010] CosmoTransitions: Cosmological Phase Transitions

CosmoTransitions analyzes early-Universe finite-temperature phase transitions with multiple scalar fields. The code enables analysis of the phase structure of an input theory, determines the amount of supercooling at each phase transition, and finds the bubble-wall profiles of the nucleated bubbles that drive the transitions.

[ascl:1307.010] cosmoxi2d: Two-point galaxy correlation function calculation

Cosmoxi2d is written in C and computes the theoretical two-point galaxy correlation function as a function of cosmological and galaxy nuisance parameters. It numerically evaluates the model described in detail in Reid and White 2011 (arxiv:1105.4165) and Reid et al. 2012 (arxiv:1203.6641) for the multipole moments (up to ell = 4) of the observed redshift-space correlation function of biased tracers as a function of cosmological parameters (through an input linear matter power spectrum, growth rate f, and Alcock-Paczynski geometric factors alphaperp and alphapar) as well as nuisance parameters describing the tracers (bias and a small-scale additive velocity dispersion, isotropicdisp1d).

This model works best for highly biased tracers where the 2nd order bias term is small. On scales larger than 100 Mpc, the code relies on 2nd order Lagrangian Perturbation theory as detailed in Matsubara 2008 (PRD 78, 083519), and uses the analytic version of Reid and White 2011 on smaller scales.

[ascl:1512.013] CounterPoint: Zeeman-split absorption lines

CounterPoint works in concert with MoogStokes (ascl:1308.018). It applies the Zeeman effect to the atomic lines in the region of study, splits them into the correct number of Zeeman components, adjusts their relative intensities according to the predictions of quantum mechanics, and creates a Moog-readable line list for use with MoogStokes. CounterPoint can use the VALD and HITRAN line databases for both atomic and molecular lines.

[ascl:1904.028] covdisc: Disconnected covariance of 2-point functions in large-scale structure of the Universe

covdisc computes the disconnected part of the covariance matrix of 2-point functions in large-scale structure studies, accounting for the survey window effect. The method works for both the power spectrum and the correlation function, and applies to the covariances of various probes, including the multipoles and wedges of 3D clustering, the angular and projected statistics of clustering and lensing, and their cross-covariances.

[ascl:2201.011] COWS: Cosmic web filament finder

COWS implements the COsmic Web Skeleton (COWS) method for finding cosmic web filaments. Written in Python, the filament finder works on Hessian-based cosmic web identifiers (such as the V-web) and returns a catalogue of filament spines. The code identifies the medial axis, or skeleton, of cosmic web filaments and then separates this skeleton into individual filaments.

[ascl:1808.003] CPF: Corral Pipeline Framework

Corral generates astronomical pipelines. Data processing pipelines represent an important slice of the astronomical software library; they comprise chains of processes that transform raw data into valuable information via data reduction and analysis. Written in Python, Corral features a Model-View-Controller design pattern on top of an SQL relational database capable of handling custom data models, processing stages, and communication alerts. It also provides automatic quality and structural metrics based on unit testing. The Model-View-Controller pattern separates the user logic from the data models, while delivering multiprocessing and distributed computing capabilities.

[ascl:1402.010] CPL: Common Pipeline Library

The Common Pipeline Library (CPL) is a set of ISO-C libraries that provide a comprehensive, efficient and robust software toolkit to create automated astronomical data reduction pipelines. Though initially developed as a standardized way to build VLT instrument pipelines, the CPL may be more generally applied to any similar application. The code also provides a variety of general purpose image- and signal-processing functions, making it an excellent framework for the creation of more generic data handling packages. The CPL handles low-level data types (images, tables, matrices, strings, property lists, etc.) and medium-level data access methods (a simple data abstraction layer for FITS files). It also provides table organization and manipulation, keyword/value handling and management, and support for dynamic loading of recipe modules using programs such as EsoRex (ascl:1504.003).

[ascl:2205.021] CPNest: Parallel nested sampling

CPNest performs Bayesian inference using the nested sampling algorithm. It is designed to be simple for the user to provide a model via a set of parameters, their bounds and a log-likelihood function. An optional log-prior function can be given for non-uniform prior distributions. The nested sampling algorithm is then used to compute the marginal likelihood or evidence. As a by-product the algorithm produces samples from the posterior probability distribution. The implementation is based on an ensemble MCMC sampler which can use multiple cores to parallelize computation.
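
A toy model in the documented style; the class and argument names follow the CPNest documentation, though details may vary between versions:

    import cpnest
    import cpnest.model

    class UnitGaussian(cpnest.model.Model):
        # One parameter with a uniform prior in [-10, 10] and a
        # standard-normal log-likelihood
        names = ['x']
        bounds = [[-10.0, 10.0]]

        def log_likelihood(self, params):
            return -0.5 * params['x'] ** 2

    work = cpnest.CPNest(UnitGaussian(), nlive=1000)
    work.run()  # computes the evidence; posterior samples come as a by-product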

[ascl:1710.009] CppTransport: Two- and three-point function transport framework for inflationary cosmology

CppTransport solves the 2- and 3-point functions of the perturbations produced during an inflationary epoch in the very early universe. It is implemented for models with canonical kinetic terms, although the underlying method is quite general and could be scaled to handle models with a non-trivial field-space metric or an even more general non-canonical Lagrangian.

[ascl:1102.012] CPROPS: Bias-free Measurement of Giant Molecular Cloud Properties

CPROPS, written in IDL, processes FITS data cubes containing molecular line emission and returns the properties of the molecular clouds contained within them. Without corrections for the effects of beam convolution and sensitivity to GMC properties, the resulting properties may be severely biased. This is particularly true for extragalactic observations, where resolution and sensitivity effects often bias measured values by 40% or more. We correct for finite spatial and spectral resolutions with a simple deconvolution and we correct for sensitivity biases by extrapolating properties of a GMC to those we would expect to measure with perfect sensitivity. The resulting method recovers the properties of a GMC to within 10% over a large range of resolutions and sensitivities, provided the clouds are marginally resolved with a peak signal-to-noise ratio greater than 10. We note that interferometers systematically underestimate cloud properties, particularly the flux from a cloud. The degree of bias depends on the sensitivity of the observations and the (u,v) coverage of the observations. In the Appendix to the paper we present a conservative, new decomposition algorithm for identifying GMCs in molecular-line observations. This algorithm treats the data in physical rather than observational units, does not produce spurious clouds in the presence of noise, and is sensitive to a range of morphologies. As a result, the output of this decomposition should be directly comparable among disparate data sets.

The CPROPS package contains within it a distribution of the CLUMPFIND code (ascl:1107.014) written by Jonathan Williams and described in Williams, de Geus, and Blitz (1994). If you make use of the CLUMPFIND functionality in the CPROPS package for a publication, please cite Jonathan's original article.

[ascl:2002.021] CR-SISTEM: Symplectic integrator for lunar core-mantle and orbital dynamics

CR-SISTEM models lunar orbital and rotational dynamics, taking into account the effects of a liquid core. Orbits of the Moon and Earth are fully integrated, and other planets (or additional point-mass satellites) may be included in the integration. Lunar and solar tides on Earth, eccentricity and obliquity tides on the Moon, and lunar core-mantle friction are included. The integrator is one file (crsistem5.for) written in FORTRAN 90, uses seven input files (settings.in, planets.in, moons.in, tidal.in, lunar.in, precess.in and core.in), and has at least eight output files (planet101.out, moon101.out, pole.out, spin_orb.out, spin_ecl.out, cspin_ecl.out, long.out and clong.out); additional moons and planets would add more output. The input files provided with the code set up a 1 Myr simulation of a slow-spinning Moon on an orbit of 40 Earth radii, which will then dynamically relax to the lowest-energy state (in this case it is a synchronous rotation with a core spinning separately from the mantle).

[ascl:2009.018] CRAC: Cosmology R Analysis Code

CRAC (Cosmology R Analysis Code) provides R functions for cosmology. Its main functions are similar to the Python library CosmoloPy (ascl:2009.017); for example, it implements functions to compute spherical geometric quantities for cosmological research.

[ascl:1101.008] CRASH: A Block-Adaptive-Mesh Code for Radiative Shock Hydrodynamics

CRASH (Center for Radiative Shock Hydrodynamics) is a block adaptive mesh code for multi-material radiation hydrodynamics. The implementation solves the radiation diffusion model with the gray or multigroup method and uses a flux limited diffusion approximation to recover the free-streaming limit. The electrons and ions are allowed to have different temperatures and we include a flux limited electron heat conduction. The radiation hydrodynamic equations are solved in the Eulerian frame by means of a conservative finite volume discretization in either one, two, or three-dimensional slab geometry or in two-dimensional cylindrical symmetry. An operator split method is used to solve these equations in three substeps: (1) solve the hydrodynamic equations with shock-capturing schemes, (2) a linear advection of the radiation in frequency-logarithm space, and (3) an implicit solve of the stiff radiation diffusion, heat conduction, and energy exchange. We present a suite of verification test problems to demonstrate the accuracy and performance of the algorithms. The CRASH code is an extension of the Block-Adaptive Tree Solarwind Roe Upwind Scheme (BATS-R-US) code with this new radiation transfer and heat conduction library and equation-of-state and multigroup opacity solvers. Both CRASH and BATS-R-US are part of the publicly available Space Weather Modeling Framework (SWMF).

[ascl:2206.008] Craterstats2: Planetary surface dating from crater size-frequency distribution measurements

Craterstats2 plots crater counts and determines surface ages. The software plots isochrons in cumulative, differential, R-plot, and Hartmann presentations, and makes isochron fits to both cumulative and differential data. Hartmann-style piecewise production functions may also be used. A Python implementation of the software, Craterstats3 (ascl:2206.009), is also available.

[ascl:2206.009] Craterstats3: Analyze and plot crater count data for planetary surface dating

Craterstats3 analyzes and plots crater count data for planetary surface dating. It is a Python implementation of Craterstats2 (ascl:2206.008) and is designed to replicate the output of the previous version as closely as possible. As before, it produces plots in cumulative, differential, Hartmann, and R-plot styles with possible overlays of crater counts, isochrons, equilibrium functions, and epoch boundaries, as well as chronology and impact rate functions. Data can be shown with various binnings or unbinned, and age estimates can be made by cumulative fitting, differential fitting, or Poisson timing evaluation. Numerical results can be output as text for further processing elsewhere. A number of published chronology systems are already set up for use, but new ones may be added by the user. The software is designed to be easily integrated into other software, which could allow the addition of a graphical interface or the inclusion of some Craterstats functions into a GIS.

[ascl:1111.002] CRBLASTER: A Parallel-Processing Computational Framework for Embarrassingly-Parallel Image-Analysis Algorithms

The development of parallel-processing image-analysis codes is generally a challenging task that requires complicated choreography of interprocessor communications. If, however, the image-analysis algorithm is embarrassingly parallel, then the development of a parallel-processing implementation of that algorithm can be a much easier task to accomplish because, by definition, there is little need for communication between the compute processes. I describe the design, implementation, and performance of a parallel-processing image-analysis application, called CRBLASTER, which does cosmic-ray rejection of CCD (charge-coupled device) images using the embarrassingly-parallel L.A.COSMIC algorithm. CRBLASTER is written in C using the high-performance computing industry standard Message Passing Interface (MPI) library. The code has been designed to be used by research scientists who are familiar with C as a parallel-processing computational framework that enables the easy development of parallel-processing image-analysis programs based on embarrassingly-parallel algorithms. The CRBLASTER source code is freely available at the official application website at the National Optical Astronomy Observatory. Removing cosmic rays from a single 800x800 pixel Hubble Space Telescope WFPC2 image takes 44 seconds with the IRAF script lacos_im.cl running on a single core of an Apple Mac Pro computer with two 2.8-GHz quad-core Intel Xeon processors. CRBLASTER is 7.4 times faster processing the same image on a single core on the same machine. Processing the same image with CRBLASTER simultaneously on all 8 cores of the same machine takes 0.875 seconds -- which is a speedup factor of 50.3 times faster than the IRAF script. A detailed analysis is presented of the performance of CRBLASTER using between 1 and 57 processors on a low-power Tilera 700-MHz 64-core TILE64 processor.

[ascl:1308.009] CReSyPS: Stellar population synthesis code

CReSyPS (Code Rennais de Synthèse de Populations Stellaires) is a stellar population synthesis code that determines the amount of core overshooting for main sequence stars in the Magellanic Clouds.

[ascl:1612.009] CRETE: Comet RadiativE Transfer and Excitation

CRETE (Comet RadiativE Transfer and Excitation) is a one-dimensional water excitation and radiation transfer code for sub-millimeter wavelengths based on the RATRAN code (ascl:0008.002). The code considers rotational transitions of water molecules given a Haser spherically symmetric distribution for the cometary coma and produces FITS image cubes that can be analyzed with tools like MIRIAD (ascl:1106.007). In addition to collisional processes to excite water molecules, the effect of infrared radiation from the Sun is approximated by effective pumping rates for the rotational levels in the ground vibrational state.

[ascl:2103.017] CRIME: Cosmological Realizations for Intensity Mapping Experiments

CRIME (Cosmological Realizations for Intensity Mapping Experiments) generates mock realizations of intensity mapping observations of the neutral hydrogen distribution. It contains three separate tools: GetHI, ForGet, and JoinT. GetHI generates realizations of the temperature fluctuations due to the 21cm emission of neutral hydrogen. Optionally it can also generate a realization of the point-source continuum emission (for a given population) by sampling the same density distribution, though using this feature greatly affects performance. ForGet generates realizations of the different galactic and extra-galactic foregrounds relevant for intensity mapping experiments using some external datasets (e.g. the Haslam 408 MHz map) stored in the "data" folder. JoinT is provided for convenience; it joins the temperature maps generated by GetHI and ForGet and includes several instrument-dependent effects (in an overly simplistic way).

[ascl:1708.003] CRISPRED: CRISP imaging spectropolarimeter data reduction pipeline

CRISPRED reduces data from the CRISP imaging spectropolarimeter at the Swedish 1 m Solar Telescope (SST). It performs fitting routines, corrects optical aberrations from atmospheric turbulence as well as from the optics, and compensates for inter-camera misalignments, field-dependent and time-varying instrumental polarization, and spatial variation in the detector gain and in the zero level offset (bias). It has an object-oriented IDL structure with computationally demanding routines performed in C subprograms called as dynamically loadable modules (DLMs).

[ascl:1110.020] CROSS_CMBFAST: ISW-correlation Code

This code is an extension of CMBFAST 4.5.1 that computes the ISW-correlation power spectrum and the 2-point angular ISW-correlation function for a given galaxy window function. It includes dark energy models specified by a constant equation of state (w) or a linear parameterization in the scale factor (w0, wa) and a constant sound speed (c2de). The ISW computation is limited to flat geometry. Unlike the original CMBFAST 4.5 version, dark energy perturbations are implemented for a general dark energy fluid specified by w(z) and c2de in synchronous gauge. For time-varying dark energy models it is suggested not to cross the w=-1 line; as Dr. Venkman says, "never cross the streams", bad things can happen.

[ascl:2106.004] crowdsource: Crowded field photometry pipeline

crowdsource removes a rough sky (the median), finds the brighter peaks, fits these sources, computes centroids, and then computes an improved PSF. With this model of the image, the code then iteratively subtracts it and recomputes the median to get a better sky estimate, finds fainter peaks, and calculates a better PSF. crowdsource performs at least four iterations, evaluates the results, and continues until certain thresholds are met. Once the iterative passes are complete, it makes one last pass: if no sources are detected and positions do not vary, it performs photometry for the existing list of stellar positions.
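
The iterative structure can be caricatured in a few lines; the toy detection loop below makes simplified assumptions (no PSF fitting, plain local-maximum peaks) and is not the crowdsource API:

    import numpy as np
    from scipy.ndimage import maximum_filter

    def iterative_find(image, n_iter=4, nsigma=5.0):
        detected = np.zeros(image.shape, dtype=bool)
        for _ in range(n_iter):
            # Re-estimate the sky from pixels not yet claimed by sources
            sky = np.median(image[~detected])
            resid = image - sky
            sigma = 1.4826 * np.median(np.abs(resid - np.median(resid)))
            # Local maxima above threshold become (ever fainter) new peaks
            peaks = (resid == maximum_filter(resid, size=5)) \
                    & (resid > nsigma * sigma)
            detected |= peaks
        return np.argwhere(detected)

    img = np.random.default_rng(0).normal(100.0, 5.0, size=(64, 64))
    img[20, 30] += 200.0  # one bright star
    print(iterative_find(img))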

[submitted] CRPropa 3.2

The landscape of high- and ultra-high-energy astrophysics has changed in the last decade, largely due to the inflow of data collected by large-scale cosmic-ray, gamma-ray, and neutrino observatories. At the dawn of the multimessenger era, the interpretation of these observations within a consistent framework is important to elucidate the open questions in this field. CRPropa 3.2 is a Monte Carlo code for simulating the propagation of high-energy particles in the Universe. This version represents a major leap forward, significantly expanding the simulation framework and opening up the possibility for many more astrophysical applications. This includes, among others: efficient simulation of high-energy particles in diffusion-dominated domains, self-consistent and fast modelling of electromagnetic cascades with an extended set of channels for photon production, and studies of cosmic-ray diffusion tensors based on updated coherent and turbulent magnetic-field models. Furthermore, several technical updates and improvements are introduced with the new version, such as: enhanced interpolation, targeted emission of sources, and a new propagation algorithm (Boris push). The detailed description of all novel features is accompanied by a discussion and a selected number of example applications.

[ascl:1412.013] CRPropa: Numerical tool for the propagation of UHE cosmic rays, gamma-rays and neutrinos

CRPropa computes the observable properties of UHECRs and their secondaries in a variety of models for the sources and propagation of these particles. CRPropa takes into account interactions and deflections of primary UHECRs as well as propagation of secondary electromagnetic cascades and neutrinos. CRPropa makes use of the public code SOPHIA (ascl:1412.014), and the TinyXML, CFITSIO (ascl:1010.001), and CLHEP libraries. A major advantage of CRPropa is its modularity, which allows users to implement their own modules adapted to specific UHECR propagation models. An updated version, CRPropa3 (ascl:2208.016), is available.

[ascl:2208.016] CRPropa3: Simulation framework for propagating extraterrestrial ultra-high energy particles

CRPropa3, an improved version of CRPropa2 (ascl:1412.013), provides a simulation framework to study the propagation of ultra-high-energy nuclei up to iron on their voyage through an (extra)galactic environment. It takes into account pion production, photodisintegration, and energy losses by pair production of all relevant isotopes in the ambient low-energy photon fields, as well as nuclear decay. CRPropa3 can model the deflection in (inter)galactic magnetic fields, the propagation of secondary electromagnetic cascades, and neutrinos for a multitude of scenarios for different source distributions and magnetic environments. It enables the user to predict the spectra of UHECR (and of their secondaries), their composition and arrival direction distribution. Additionally, the low-energy Galactic propagation can be simulated by solving the transport equation using stochastic differential equations. CRPropa3 features a very flexible simulation setup with python steering and shared-memory parallelization.

[ascl:2401.016] CRR: Convex Ridge Regularizer

CRR (Convex Ridge Regularizer) takes the gradient of regularizers that are the sum of convex-ridge functions and parameterizes them using a neural network that has a single hidden layer with increasing and learnable activation functions. The neural network is trained within a few minutes as a multistep Gaussian denoiser, and offers improvements for denoising and image reconstruction over other methods with similar reliability.

[ascl:1202.007] CRUNCH3D: Three-dimensional compressible MHD code

CRUNCH3D is a massively parallel, viscoresistive, three-dimensional compressible MHD code. The code employs a Fourier collocation spatial discretization, and uses a second-order Runge-Kutta temporal discretization. CRUNCH3D can be applied to MHD turbulence and magnetic fluxtube reconnection research.

[ascl:1308.011] CRUSH: Comprehensive Reduction Utility for SHARC-2 (and more...)

CRUSH is an astronomical data reduction/imaging tool for certain imaging cameras, especially at the millimeter, sub-millimeter, and far-infrared wavelengths. It supports the SHARC-2, LABOCA, SABOCA, ASZCA, p-ArTeMiS, PolKa, GISMO, MAKO and SCUBA-2 instruments. The code is written entirely in Java, allowing it to run on virtually any platform. It is normally run from the command-line with several arguments.

[ascl:2205.015] CS-ROMER: Compressed Sensing ROtation MEasure Reconstruction

CS-ROMER (Compressed Sensing ROtation MEasure Reconstruction) is a compressed-sensing reconstruction framework for Faraday depth spectra. It can simulate Faraday depth sources, subtract the Galactic RM contribution, and reconstruct Faraday depth sources from linearly polarized data using compressed sensing.

[ascl:0104.002] CSENV: A code for the chemistry of CircumStellar ENVelopes

CSENV is a code that computes the chemical abundances for a desired set of species as a function of radius in a stationary, non-clumpy, CircumStellar ENVelope. The chemical species can be atoms, molecules, ions, radicals, molecular ions, and/or their specific quantum states. Collisional ionization or excitation can be incorporated through the proper chemical channels. The chemical species interact with one another and are subject to photo-processes (dissociation of molecules, radicals, and molecular ions as well as ionization of all species). Cosmic ray ionization can be included. Chemical reaction rates are specified with possible activation temperatures and additional power-law dependences. Photo-absorption cross-sections vs. wavelength, with appropriate thresholds, can be specified for each species, while for H2+ a photoabsorption cross-section is provided as a function of wavelength and temperature. The photons originate from both the star and the external interstellar medium. The chemical species are shielded from the photons by circumstellar dust, by other species and by themselves (self-shielding). Shielding of continuum-absorbing species by these species (self and mutual shielding), line-absorbing species, and dust varies with radial optical depth. The envelope is spherical by default, but can be made bipolar with an opening solid-angle that varies with radius. In the non-spherical case, no provision is made for photons penetrating the envelope from the sides. The envelope is subject to a radial outflow (or wind) with constant velocity by default, but the wind velocity can be made to vary with radius. The temperature of the envelope is specified (and thus not computed self-consistently).

[ascl:1106.019] csra: Application of Compressive Sampling to Radio Astronomy I: Deconvolution

Compressive sampling is a new paradigm for sampling, based on sparseness of signals or signal representations. It is much less restrictive than Nyquist-Shannon sampling theory and thus explains and systematises the widespread experience that methods such as the Högbom CLEAN can violate the Nyquist-Shannon sampling requirements. In this paper, a CS-based deconvolution method for extended sources is introduced. This method can reconstruct both point sources and extended sources (using the isotropic undecimated wavelet transform as a basis function for the reconstruction step). We compare this CS-based deconvolution method with two CLEAN-based deconvolution methods: the Högbom CLEAN and the multiscale CLEAN. This new method shows the best performance in deconvolving extended sources for both uniform and natural weighting of the sampled visibilities. Both visual and numerical results of the comparison are provided.

[ascl:2406.011] CTC: Color transformations calculator

Color transformations calculator determines the magnitude of a galaxy in a needed photometric band, given its color and magnitude in the original band. It supports various optical and near-infrared surveys, including SDSS, DECaLS, DELVE, UKIDSS, VHS, and VIKING, and provides conversions for both total and aperture magnitudes with apertures of 1.5", 2" or 3" diameters. The source code, useful for performing bulk calculations, is available in Python and IDL; the calculator is also offered as a web service.
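
In essence, such calculators evaluate a color term on top of the original magnitude; the sketch below uses made-up coefficients purely for illustration, not the calculator's fitted values:

    def transform_magnitude(g_src, r_src, a=0.10, b=0.02):
        # Hypothetical linear transformation between photometric
        # systems: m_target = m_src + a * (g - r) + b.
        # Coefficients a and b here are illustrative, not fitted values.
        return g_src + a * (g_src - r_src) + b

    print(transform_magnitude(g_src=19.5, r_src=19.0))  # 19.57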

[ascl:1307.015] CTI Correction Code

Charge Transfer Inefficiency (CTI) due to radiation damage above the Earth's atmosphere creates spurious trailing in images from Charge-Coupled Device (CCD) imaging detectors. Radiation damage also creates unrelated warm pixels, which can be used to measure CTI. This code provides pixel-based correction for CTI and has proven effective in Hubble Space Telescope Advanced Camera for Surveys raw images, successfully reducing the CTI trails by a factor of ~30 everywhere in the CCD and at all flux levels. The core is written in java for speed, and a front-end user interface is provided in IDL. The code operates on raw data by returning individual electrons to pixels from which they were unintentionally dragged during readout. Correction takes about 25 minutes per ACS exposure, but is trivially parallelisable to multiple processors.

[ascl:1601.005] ctools: Cherenkov Telescope Science Analysis Software

ctools provides tools for the scientific analysis of Cherenkov Telescope Array (CTA) data. Analysis of data from existing Imaging Air Cherenkov Telescopes (such as H.E.S.S., MAGIC or VERITAS) is also supported, provided that the data and response functions are available in the format defined for CTA. ctools comprises a set of ftools-like binary executables with a command-line interface allowing for interactive step-wise data analysis. A Python module allows control of all executables, and the creation of shell or Python scripts and pipelines is supported. ctools provides cscripts, which are Python scripts complementing the binary executables. Extensions of the ctools package by user defined binary executables or Python scripts is supported. ctools are based on GammaLib (ascl:1110.007).

[ascl:2104.005] CTR: Coronal Temperature Reconstruction

CTR (Coronal Temperature Reconstruction) reconstructs differential emission measures (DEMs) in the solar corona. Written in IDL, the code guarantees positivity of the recovered DEM, enforces an explicit smoothness constraint, returns a featureless (flat) solution in the absence of information, and converges quickly. The algorithm is robust and can be extended to other wavelengths where the DEM treatment is valid.

[ascl:1608.008] Cuba: Multidimensional numerical integration library

The Cuba library offers four independent routines for multidimensional numerical integration: Vegas, Suave, Divonne, and Cuhre. The four algorithms work by very different methods; all can integrate vector integrands and have very similar Fortran, C/C++, and Mathematica interfaces. Their invocation is very similar, making it easy to cross-check results by substituting one method for another. For further safeguarding, the output is supplemented by a chi-square probability which quantifies the reliability of the error estimate.

[ascl:1609.010] CuBANz: Photometric redshift estimator

CuBANz is a photometric redshift estimator code for high-redshift galaxies that uses a back-propagation neural network along with clustering of the training set, making it very efficient. The training set is divided into several self-learning clusters with galaxies having similar photometric properties and spectroscopic redshifts within a given span. The clustering algorithm uses color information (i.e., u-g, g-r, etc.) rather than the apparent magnitudes in the various photometric bands, as the photometric redshift is more sensitive to the flux differences between bands than to their actual values. The clustering method enables accurate determination of the redshifts. CuBANz considers uncertainty in the photometric measurements as well as uncertainty in the neural network training. The code is written in C.

[ascl:1805.018] CUBE: Information-optimized parallel cosmological N-body simulation code

CUBE, written in Coarray Fortran, is a particle-mesh based parallel cosmological N-body simulation code. The memory usage of CUBE can approach as low as 6 bytes per particle. A particle-particle (PP) force, cosmological neutrinos, and a spherical overdensity (SO) halo finder are included.

[ascl:2208.023] CubeFit: Regularized 3D fitting for spectro-imaging data

Cubefit is an OXY class that performs spectral fitting with spatial regularization in a spectro-imaging context. The 3D model is based on a 1D model and 2D parameter maps; the 2D maps are regularized using an L1L2 regularization by default. The estimator is a compound of a chi^2 based on the 1D model, a regularization term based on the 2D regularization of the various 2D parameter maps, and an optional decorrelation term based on the cross-correlation of specific pairs of parameter maps.
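
A schematic of that compound estimator in illustrative Python, not the OXY class itself; the L1L2 penalty here uses the common g - log(1+g) form, which is quadratic for small gradients and linear for large ones:

    import numpy as np

    def cubefit_criterion(cube, model_1d, maps, mu=1.0, delta=0.0):
        # cube: (ny, nx, nlambda) data; maps: (nparams, ny, nx)
        # chi^2 of the 1D model evaluated at every spatial position
        ny, nx = cube.shape[:2]
        chi2 = sum(np.sum((cube[j, i] - model_1d(maps[:, j, i])) ** 2)
                   for j in range(ny) for i in range(nx))
        # L1L2 smoothness penalty on each 2D parameter map
        reg = 0.0
        for m in maps:
            g = np.hypot(*np.gradient(m))
            reg += np.sum(g - np.log1p(g))
        # optional decorrelation term between a pair of parameter maps
        decorr = delta * np.corrcoef(maps[0].ravel(),
                                     maps[1].ravel())[0, 1] ** 2
        return chi2 + mu * reg + decorr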

[ascl:1512.010] CubeIndexer: Indexer for regions of interest in data cubes

CubeIndexer indexes regions of interest (ROIs) in data cubes reducing the necessary storage space. The software can process data cubes containing megabytes of data in fractions of a second without human supervision, thus allowing it to be incorporated into a production line for displaying objects in a virtual observatory. The software forms part of the Chilean Virtual Observatory (ChiVO) and provides the capability of content-based searches on data cubes to the astronomical community.

[ascl:1208.018] CUBEP3M: High performance P3M N-body code

CUBEP3M is a high-performance cosmological N-body code which has many utilities and extensions, including a runtime halo finder, a non-Gaussian initial conditions generator, tuneable accuracy, and a system of unique particle identification. CUBEP3M is fast, has a memory footprint up to three times lower than other widely used N-body codes, and has been run on up to 20,000 cores, achieving close to ideal weak scaling even at this problem size. It is well suited to, and has already been used for, a broad range of science applications that require either large samples of non-linear realizations or very large dark matter N-body simulations, including cosmological reionization, baryonic acoustic oscillations, weak lensing, and non-Gaussian statistics.

[ascl:1805.031] CubiCal: Suite for fast radio interferometric calibration

CubiCal implements several accelerated gain solvers which exploit complex optimization for fast radio interferometric gain calibration. The code can be used for both direction-independent and direction-dependent self-calibration. CubiCal is implemented in Python and Cython, and multiprocessing is fully supported.

A successor to CubiCal, QuartiCal (ascl:2305.006), is available.

[ascl:1111.007] CUBISM: CUbe Builder for IRS Spectra Maps

CUBISM, written in IDL, constructs spectral cubes, maps, and arbitrary aperture 1D spectral extractions from sets of mapping mode spectra taken with Spitzer's IRS spectrograph. CUBISM is optimized for non-sparse maps of extended objects, e.g. the nearby galaxy sample of SINGS, but can be used with data from any spectral mapping AOR (primarily validated for maps which are designed as suggested by the mapping HOWTO).

[ascl:2105.016] CUDAHM: MCMC sampling of hierarchical models with GPUs

CUDAHM accelerates Bayesian inference of Hierarchical Models using Markov Chain Monte Carlo by constructing a Metropolis-within-Gibbs MCMC sampler for a three-level hierarchical model, requiring the user to supply only a minimal amount of CUDA code. CUDAHM assumes that a set of measurements is available for a sample of objects, and that these measurements are related to an unobserved set of characteristics for each object. For example, the measurements could be the spectral energy distributions of a sample of galaxies, and the unknown characteristics could be the physical quantities of the galaxies, such as mass, distance, or age. The measured spectral energy distributions depend on the unknown physical quantities, which enables one to derive their values from the measurements. The characteristics are also assumed to be independently and identically sampled from a parent population with unknown parameters (e.g., a Normal distribution with unknown mean and variance). CUDAHM enables one to simultaneously sample the values of the characteristics and the parameters of their parent population from their joint posterior probability distribution.
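
The three-level structure can be sketched with a toy normal-normal model. The following minimal CPU version of a Metropolis-within-Gibbs sampler (illustrative only, not CUDAHM's CUDA interface; all names and priors here are assumptions) alternates object-level and population-level updates:

```python
import numpy as np

# Toy model: y_i ~ N(chi_i, e^2), chi_i ~ N(mu, sigma^2),
# with a flat prior on mu and a log-uniform prior on sigma.
rng = np.random.default_rng(0)
n, mu_true, sigma_true, e = 200, 1.0, 0.5, 0.3
chi_true = rng.normal(mu_true, sigma_true, n)
y = rng.normal(chi_true, e)

chi, mu, sigma = y.copy(), 0.0, 1.0
for step in range(5000):
    # 1) Metropolis update of each object's characteristic chi_i
    #    (embarrassingly parallel -- the part CUDAHM runs on the GPU).
    prop = chi + rng.normal(0, 0.2, n)
    logr = (-(y - prop) ** 2 / (2 * e**2) - (prop - mu) ** 2 / (2 * sigma**2)
            + (y - chi) ** 2 / (2 * e**2) + (chi - mu) ** 2 / (2 * sigma**2))
    accept = np.log(rng.random(n)) < logr
    chi[accept] = prop[accept]
    # 2) Gibbs update of mu, Metropolis update of sigma (log-space walk).
    mu = rng.normal(chi.mean(), sigma / np.sqrt(n))
    prop_s = sigma * np.exp(rng.normal(0, 0.1))
    logr = (n * np.log(sigma / prop_s)
            - 0.5 * ((chi - mu) ** 2).sum() * (1 / prop_s**2 - 1 / sigma**2))
    if np.log(rng.random()) < logr:
        sigma = prop_s
print(mu, sigma)   # should recover roughly 1.0 and 0.5
```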

[ascl:2404.021] cudisc: CUDA-accelerated 2D code for protoplanetary disc evolution simulations

cuDisc simulates the evolution of protoplanetary discs in both the radial and vertical dimensions, assuming axisymmetry. The code performs 2D dust advection-diffusion, dust coagulation/fragmentation, and radiative transfer. A 1D evolution model is also included, with the 2D gas structure calculated via vertical hydrostatic equilibrium. cuDisc requires an NVIDIA GPU.

[ascl:2408.009] Cue: Nebular emission modeling

Cue interprets nebular emission across a wide range of ionizing conditions of galaxies. The software is a neural-network emulator based on Cloudy (ascl:9910.001). It does not require a specific ionizing spectrum as a source, instead approximating the ionizing spectrum with a 4-part piecewise power law. Along with the flexible ionizing spectra, Cue allows freedom in [O/H], [N/O], [C/O], gas density, and total ionizing photon budget.

[ascl:1810.015] cuFFS: CUDA-accelerated Fast Faraday Synthesis

cuFFS (CUDA-accelerated Fast Faraday Synthesis) performs Faraday rotation measure synthesis; it is particularly well-suited for performing RM synthesis on large datasets. Compared to a fast single-threaded and vectorized CPU implementation, and depending on the structure and format of the data cubes, cuFFS achieves an increase in speed of up to two orders of magnitude. The code assumes that the pixel values are IEEE single-precision floating points (BITPIX=-32), and the input cubes must have 3 axes (2 spatial dimensions and 1 frequency axis) with the frequency axis as NAXIS1. A package is included to reformat data with individual Stokes Q and U channel maps to the required format. The code supports both the HDFITS format and the standard FITS format, and is written in C with GPU acceleration achieved using Nvidia's CUDA parallel computing platform.
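
The underlying computation is the Faraday dispersion function of Brentjens & de Bruyn (2005), F(phi) = (1/N) * sum_k P(lambda_k^2) * exp(-2i*phi*(lambda_k^2 - lambda_0^2)). A minimal numpy sketch of this step (illustrative; not cuFFS's C/CUDA implementation, and with a mock Faraday-thin source) is:

```python
import numpy as np

c = 299792458.0
freqs = np.linspace(1.0e9, 2.0e9, 512)        # channel frequencies [Hz]
lam2 = (c / freqs) ** 2                        # lambda^2 per channel
lam2_0 = lam2.mean()

# Mock complex polarization P = Q + iU for a Faraday-thin source at +100 rad/m^2.
P = np.exp(2j * 100.0 * lam2)

phi = np.linspace(-1000, 1000, 2001)           # trial Faraday depths [rad m^-2]
F = np.exp(-2j * np.outer(phi, lam2 - lam2_0)) @ P / lam2.size
print(phi[np.abs(F).argmax()])                 # recovers ~100
```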

[ascl:1109.013] CULSP: Fast Calculation of the Lomb-Scargle Periodogram Using Graphics Processing Units

I introduce a new code for fast calculation of the Lomb-Scargle periodogram that leverages the computing power of graphics processing units (GPUs). After establishing a background to the newly emergent field of GPU computing, I discuss the code's design and narrate key parts of its source. Benchmarking calculations indicate no significant differences in accuracy compared to an equivalent CPU-based code. However, the differences in performance are pronounced: running on a low-end GPU, the code can match 8 CPU cores, and on a high-end GPU it is faster by a factor approaching thirty. Applications of the code include analysis of long photometric time series obtained by ongoing satellite missions and upcoming ground-based monitoring facilities, and Monte Carlo simulation of periodogram statistical properties.
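
For reference, the same periodogram can be computed on the CPU with astropy's implementation (a generic illustration, separate from CULSP, with mock data):

```python
import numpy as np
from astropy.timeseries import LombScargle

rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 100, 500))               # irregular sampling
y = np.sin(2 * np.pi * t / 2.5) + rng.normal(0, 0.5, t.size)
frequency, power = LombScargle(t, y).autopower()
print(1 / frequency[power.argmax()])                # ~2.5, the injected period
```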

[ascl:1311.007] CUPID: Clump Identification and Analysis Package

The CUPID package allows the identification and analysis of clumps of emission within 1, 2 or 3 dimensional data arrays. Whilst targeted primarily at sub-mm cubes, it can be used on any regularly gridded 1, 2 or 3D data. A variety of clump finding algorithms are implemented within CUPID, including the established ClumpFind (ascl:1107.014) and GAUSSCLUMPS (ascl:1406.018) algorithms. In addition, two new algorithms called FellWalker and Reinhold are also provided. CUPID allows easy inter-comparison between the results of different algorithms; the catalogues produced by each algorithm contain a standard set of columns containing clump peak position, clump centroid position, the integrated data value within the clump, clump volume, and the dimensions of the clump. In addition, pixel masks are produced identifying which input pixels contribute to each clump. CUPID is distributed as part of the Starlink (ascl:1110.012) software collection.

[ascl:1311.008] CUPID: Customizable User Pipeline for IRS Data

Written in C, the Customizable User Pipeline for IRS Data (CUPID) allows users to run the Spitzer IRS Pipelines to re-create Basic Calibrated Data and extract calibrated spectra from the archived raw files. CUPID provides full access to all the parameters of the BCD, COADD, BKSUB, BKSUBX, and COADDX pipelines, as well as the opportunity for users to provide their own calibration files (e.g., flats or darks). CUPID is available for Mac, Linux, and Solaris operating systems.

[ascl:1405.015] CURSA: Catalog and Table Manipulation Applications

The CURSA package manipulates astronomical catalogs and similar tabular datasets. It provides facilities for browsing or examining catalogs; selecting subsets from a catalog; sorting and copying catalogs; pairing two catalogs; converting catalog coordinates between some celestial coordinate systems; and plotting finding charts and performing photometric calibration. It can also extract subsets from a catalog in a format suitable for plotting using other Starlink packages such as PONGO. CURSA can access catalogs held in the popular FITS table format, the Tab-Separated Table (TST) format or the Small Text List (STL) format. Catalogs in the STL and TST formats are simple ASCII text files. CURSA also includes some facilities for accessing remote on-line catalogs via the Internet. It is part of the Starlink software collection (ascl:1110.012).

[ascl:2101.013] Curvit: Create light curves from UVIT data

Curvit produces light curves from UVIT (Ultraviolet Imaging Telescope) data. It uses the events list from the official UVIT L2 pipeline (version 6.3 onwards) as input. The makecurves function of Curvit automatically detects sources from the events list and creates light curves. Curvit provides source coordinates only in the instrument coordinate system. If you already have the source coordinates, the curve function of Curvit can be used to create light curves. The package has several parameters that can be set by the user; some of these parameters have default values. Curvit is available on PyPI.

[ascl:2206.025] CuspCore: Core formation in dark matter haloes and ultra-diffuse galaxies by outflow episodes

CuspCore describes the formation of flat cores in dark matter haloes and ultra-diffuse galaxies from feedback-driven outflow episodes. The halo response is divided into an instantaneous change of potential at constant velocities followed by an energy-conserving relaxation. The core assumption of the model is that the total energy E=U+K is conserved for each shell enclosing a given dark matter mass, which is treated in the code as a least-squares minimization of the difference between the final and the initial energy of each shell.

[ascl:1505.016] CUTE: Correlation Utilities and Two-point Estimation

CUTE (Correlation Utilities and Two-point Estimation) extracts any two-point statistic from enormous datasets with hundreds of millions of objects, such as large galaxy surveys. The computational time grows with the square of the number of objects to be correlated; technology provides multiple means to massively parallelize this problem, and CUTE is specifically designed for these kinds of calculations. Two implementations are provided: one for execution on shared-memory machines using OpenMP and one that runs on graphical processing units (GPUs) using CUDA.
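
The O(N^2) core being parallelized is plain pair counting. A deliberately naive numpy sketch (illustrative only; real codes use grids or trees plus OpenMP/CUDA) is:

```python
import numpy as np

def pair_count(pos, edges):
    # Brute-force O(N^2) separation histogram over unique pairs.
    d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
    iu = np.triu_indices(len(pos), k=1)
    return np.histogram(d[iu], bins=edges)[0]

rng = np.random.default_rng(0)
data = rng.random((1000, 3))                     # mock "data" in a unit box
rand = rng.random((1000, 3))                     # matching random catalog
edges = np.linspace(0.0, 0.5, 26)
DD = pair_count(data, edges)
RR = pair_count(rand, edges)
xi = DD / RR - 1.0   # simple "natural" estimator (equal-size catalogs)
```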

[ascl:1708.018] CUTEX: CUrvature Thresholding EXtractor

CuTEx analyzes images in the infrared bands and extracts sources from complex backgrounds, particularly star-forming regions that offer the challenges of crowding, a highly spatially variable background, and sources with non-PSF profiles, such as protostars in their accreting phase. The code is composed of two main algorithms: the first for source detection, and the second for flux extraction. The code was originally written in IDL and has been exported to the license-free GDL language. CuTEx can also be used in other bands or in scientific cases different from the native one.

This software is also available as an on-line tool from the Multi-Mission Interactive Archive web pages dedicated to the Herschel Observatory.

[ascl:2210.030] cuvarbase: fast period finding utilities for GPUs

cuvarbase provides a Python library for performing period finding (Lomb-Scargle, Phase Dispersion Minimization, Conditional Entropy, Box-least squares) on astronomical time-series datasets. Speedups over CPU implementations depend on the algorithm, dataset, and GPU capabilities but are typically ~1-2 orders of magnitude and are especially high for BLS and Lomb-Scargle.

[ascl:2008.017] CVXOPT: Convex Optimization

CVXOPT makes the development of software for convex optimization applications straightforward by building on Python’s extensive standard library and on the strengths of Python as a high-level programming language. It offers efficient Python classes for dense and sparse matrices (real and complex) with Python indexing and slicing and overloaded operations for matrix arithmetic, an interface to the fast Fourier transform routines from FFTW, and an interface to most of the double-precision real and complex BLAS. It contains routines for linear, second-order cone, and semidefinite programming problems, and for nonlinear convex optimization. CVXOPT also provides an interface to LAPACK routines for solving linear equations and least-squares problems, matrix factorizations (LU, Cholesky, LDLT and QR), symmetric eigenvalue and singular value decomposition, and Schur factorization, and a modeling tool for specifying convex piecewise-linear optimization problems.
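
A small linear program in the style of the CVXOPT documentation shows the basic solver interface (matrices in CVXOPT are column-major):

```python
from cvxopt import matrix, solvers

# Minimize -4x - 5y subject to 2x + y <= 3, x + 2y <= 3, x >= 0, y >= 0.
c = matrix([-4.0, -5.0])
G = matrix([[2.0, 1.0, -1.0, 0.0],    # coefficients of x down the four rows
            [1.0, 2.0, 0.0, -1.0]])   # coefficients of y
h = matrix([3.0, 3.0, 0.0, 0.0])
sol = solvers.lp(c, G, h)
print(sol['x'])   # optimum at x = y = 1
```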

[ascl:2011.028] CWITools: Tools for Cosmic Web Imager data

CWITools analyzes integral field spectroscopy data from the Palomar and Keck Cosmic Web Imagers, and can be adapted for any three-dimensional integral field spectroscopy data. The package is modular, allowing users to construct data analysis pipelines to suit their own scientific needs, and includes tools for reducing data cubes, extracting a target signal, and making emission maps, spectra, and other products. It also fits emission line and radial profiles to obtain final scalar quantities such as size and luminosity, and contains helper functions that can, for example, obtain the wavelength axis from a 3D header or create an auto-populated list of nebular emission lines or sky lines.

[ascl:1606.003] Cygrid: Cython-powered convolution-based gridding module for Python

The Python module Cygrid grids (resamples) data to any collection of spherical target coordinates, although its typical application involves FITS maps or data cubes. The module supports the FITS world coordinate system (WCS) standard; its underlying algorithm is based on the convolution of the original samples with a 2D Gaussian kernel. A lookup table scheme allows parallelization of the code and is combined with the HEALPix tessellation of the sphere for fast neighbor searches. Cygrid's runtime scales between O(n) and O(n log n), with n being the number of input samples.
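
Typical usage follows the pattern below, a sketch based on the published cygrid examples; the header values, kernel parameters, and mock data are arbitrary assumptions and should be checked against the package documentation:

```python
import numpy as np
import cygrid

# Target WCS as a plain header dict (values here are arbitrary).
target_header = {
    'NAXIS': 2, 'NAXIS1': 100, 'NAXIS2': 100,
    'CTYPE1': 'RA---SIN', 'CTYPE2': 'DEC--SIN',
    'CRPIX1': 50.5, 'CRPIX2': 50.5,
    'CRVAL1': 180.0, 'CRVAL2': 30.0,
    'CDELT1': -0.01, 'CDELT2': 0.01,
}
rng = np.random.default_rng(0)
lons = 180.0 + rng.uniform(-0.4, 0.4, 10000)   # sample coordinates [deg]
lats = 30.0 + rng.uniform(-0.4, 0.4, 10000)
signal = rng.normal(size=10000)                 # one value per sample

gridder = cygrid.WcsGrid(target_header)
kernelsize_sigma = 0.01                         # Gaussian kernel width [deg]
gridder.set_kernel(
    'gauss1d', (kernelsize_sigma,),             # kernel type and parameters
    3 * kernelsize_sigma,                       # kernel support radius
    kernelsize_sigma / 2,                       # HEALPix lookup resolution
)
gridder.grid(lons, lats, signal)
datacube = gridder.get_datacube()
```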

[ascl:2303.001] cysgp4: Wrapper for C++ SGP4 satellite library

The cysgp4 Cython-powered package wraps the C++ SGP4 Library for computing satellite positions from two-line elements (TLEs). It provides functionality similar to the sgp4 Python package, but also works well with arrays of TLEs and/or observing times and makes use of multi-core platforms (via OpenMP) to improve processing times.
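
A usage sketch following the cysgp4 documentation is shown below; the TLE strings are placeholders (a real two-line element set must be substituted), and the exact return structure of propagate_many should be checked against the package docs:

```python
import numpy as np
from cysgp4 import PyTle, PyObserver, propagate_many

tle_line1 = "1 25544U ..."   # placeholder: real TLE line 1 required
tle_line2 = "2 25544 ..."    # placeholder: real TLE line 2 required
tle = PyTle('SAT', tle_line1, tle_line2)
observer = PyObserver(6.88375, 50.525, 0.366)   # lon [deg], lat [deg], alt [km]
mjds = 58805.5 + np.linspace(0, 1, 1440)        # one day in 1-minute steps
result = propagate_many(mjds, [tle], [observer])  # arrays of satellite positions
```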

[ascl:2203.010] D2O: Distributed Data Object

D2O acts as a layer of abstraction between algorithm code and data-distribution logic to manage cluster-distributed multi-dimensional numerical arrays; this provides usability without losing numerical performance and scalability. D2O's global interface makes the cluster node's local data directly accessible for use in customized high-performance modules. D2O is written in Python; the code is portable and easy to use and modify. Expensive operations are carried out by dedicated external libraries like numpy and mpi4py and performance scales well when moving to an MPI cluster. In combination with NIFTy, D2O enables supercomputer based astronomical imaging via RESOLVE (ascl:1505.028) and D3PO (ascl:1504.018).

[ascl:1504.018] D3PO: Denoising, Deconvolving, and Decomposing Photon Observations

D3PO (Denoising, Deconvolving, and Decomposing Photon Observations) addresses the inference problem of denoising, deconvolving, and decomposing photon observations. Its primary goal is the simultaneous but individual reconstruction of the diffuse and point-like photon flux given a single photon count image, where the fluxes are superimposed. A hierarchical Bayesian parameter model is used to discriminate between morphologically different signal components, yielding a diffuse and a point-like signal estimate for the photon flux components.

[ascl:1612.007] dacapo_calibration: Photometric calibration code

dacapo_calibration implements the DaCapo algorithm used in the Planck/LFI 2015 data release for photometric calibration. The code takes as input a set of TODs and calibrates them using the CMB dipole signal. DaCapo is a variant of the well-known family of destriping algorithms for map-making.

[ascl:1804.005] DaCHS: Data Center Helper Suite

DaCHS, the Data Center Helper Suite, is an integrated package for publishing astronomical data sets to the Virtual Observatory. Network-facing, it speaks the major VO protocols (SCS, SIAP, SSAP, TAP, Datalink, etc). Operator-facing, many input formats, including FITS/WCS, ASCII files, and VOTable, can be processed to publication-ready data. DaCHS puts particular emphasis on integrated metadata handling, which facilitates a tight integration with the VO's Registry.

[ascl:1507.015] DALI: Derivative Approximation for LIkelihoods

DALI (Derivative Approximation for LIkelihoods) is a fast approximation of non-Gaussian likelihoods. It extends the Fisher Matrix in a straightforward way and allows for a wider range of posterior shapes. The code is written in C/C++.

[ascl:1912.004] DALiuGE: Data Activated Liu Graph Engine

DALiuGE provides a distributed data management platform and a scalable pipeline execution environment to support continuous, soft real-time, data-intensive processing for producing radio astronomy data products; it originated from a prototyping activity as part of the SKA SDP Consortium called Data Flow Management System (DFMS). Though the development of DALiuGE is largely based on radio astronomy processing requirements, it has adopted a generic, data-driven framework architecture potentially applicable to many other data-intensive applications.

[ascl:1803.001] DaMaSCUS-CRUST: Dark Matter Simulation Code for Underground Scatterings - Crust Edition

DaMaSCUS-CRUST determines the critical cross-section for strongly interacting DM for various direct detection experiments systematically and precisely using Monte Carlo simulations of DM trajectories inside the Earth's crust, atmosphere, or any kind of shielding. Above a critical dark matter-nucleus scattering cross section, any terrestrial direct detection experiment loses sensitivity to dark matter, since the Earth crust, atmosphere, and potential shielding layers start to block off the dark matter particles. This critical cross section is commonly determined by describing the average energy loss of the dark matter particles analytically. However, this treatment overestimates the stopping power of the Earth crust; therefore, the obtained bounds should be considered as conservative. DaMaSCUS-CRUST is a modified version of DaMaSCUS (ascl:1706.003) that accounts for shielding effects and returns a precise exclusion band.

[ascl:2102.018] DaMaSCUS-SUN: Dark Matter Simulation Code for Underground Scatterings - Sun Edition

DaMaSCUS-SUN is a Monte Carlo tool simulating the process of solar reflection of dark matter (DM) particles. It provides precise estimates of the DM particle flux reflected by the Sun and passing through a direct detection experiment on Earth. One application is to compute exclusion limits for low DM masses based on nuclear and electron recoil experiments.

[ascl:1706.003] DaMaSCUS: Dark Matter Simulation Code for Underground Scatterings

DaMaSCUS calculates the density and velocity distribution of dark matter (DM) at any detector of given depth and latitude to provide dark matter particle trajectories inside the Earth. Provided a strong enough DM-matter interaction, the particles scatter on terrestrial atoms and get decelerated and deflected. The resulting local modifications of the DM velocity distribution and number density can have important consequences for direct detection experiments, especially for light DM, and lead to signatures such as diurnal modulations depending on the experiment's location on Earth. The code involves both the Monte Carlo simulation of particle trajectories and generation of data as well as the data analysis consisting of non-parametric density estimation of the local velocity distribution functions and computation of direct detection event rates.

[ascl:1011.006] DAME: A Web Oriented Infrastructure for Scientific Data Mining & Exploration

DAME (DAta Mining & Exploration) is an innovative, general purpose, Web-based, VObs compliant, distributed data mining infrastructure specialized in Massive Data Sets exploration with machine learning methods. Initially fine-tuned to deal with astronomical data only, DAME has evolved into a general-purpose platform that has found applications also in other domains of human endeavor.

[ascl:1412.004] DAMIT: Database of Asteroid Models from Inversion Techniques

DAMIT (Database of Asteroid Models from Inversion Techniques) is a database of three-dimensional models of asteroids computed using inversion techniques; it provides access to reliable and up-to-date physical models of asteroids, i.e., their shapes, rotation periods, and spin axis directions. Models from DAMIT can be used for further detailed studies of individual objects as well as for statistical studies of the whole set. The source codes for lightcurve inversion routines together with brief manuals, sample lightcurves, and the code for the direct problem are available for download.

[ascl:1807.023] DAMOCLES: Monte Carlo line radiative transfer code

The Monte Carlo code DAMOCLES models the effects of dust, composed of any combination of species and grain size distributions, on optical and NIR emission lines emitted from the expanding ejecta of a late-time (> 1 yr) supernova. The emissivity and dust distributions follow smooth radial power-law distributions; any arbitrary distribution can be specified by providing the appropriate grid. DAMOCLES treats a variety of clumping structures as specified by a clumped dust mass fraction, volume filling factor, clump size and clump power-law distribution, and the emissivity distribution may also initially be clumped. The code has a large number of variable parameters ranging from 5 dimensions in the simplest models to > 20 in the most complex cases.

[ascl:1709.005] DanIDL: IDL solutions for science and astronomy

DanIDL provides IDL functions and routines for many standard astronomy needs, such as searching for matching points between two coordinate lists of two-dimensional points where each list corresponds to a different coordinate space, estimating the full-width half-maximum (FWHM) and ellipticity of the PSF of an image, calculating pixel variances for a set of calibrated image data, and fitting a 3-parameter plane model to image data. The library also supplies astrometry, general image processing, and general scientific applications.

[ascl:1104.011] DAOPHOT: Crowded-field Stellar Photometry Package

The DAOPHOT program exploits the capability of photometrically linear image detectors to perform stellar photometry in crowded fields. Raw CCD images are prepared prior to analysis, and after an initial star list is obtained with the FIND program, synthetic aperture photometry is performed on the detected objects with the PHOT routine. A local sky brightness and a magnitude are computed for each star in each of the specified stellar apertures; for crowded fields, the empirical point-spread function must then be obtained for each data frame. The GROUP routine divides the star list for a given frame into optimum subgroups, and the NSTAR routine is then used to obtain photometry for all the stars in the frame by means of least-squares profile fits.

[ascl:1011.002] DAOSPEC: An Automatic Code for Measuring Equivalent Widths in High-resolution Stellar Spectra

DAOSPEC is a Fortran code for measuring equivalent widths of absorption lines in stellar spectra with minimal human involvement. It works with standard FITS format files and it is designed for use with high resolution (R>15000) and high signal-to-noise-ratio (S/N>30) spectra that have been binned on a linear wavelength scale. First, we review the analysis procedures that are usually employed in the literature. Next, we discuss the principles underlying DAOSPEC and point out similarities and differences with respect to conventional measurement techniques. Then experiments with artificial and real spectra are discussed to illustrate the capabilities and limitations of DAOSPEC, with special attention given to the issues of continuum placement; radial velocities; and the effects of strong lines and line crowding. Finally, quantitative comparisons with other codes and with results from the literature are also presented.

[ascl:2401.008] DARC: Dirac Atomic R-matrix Codes

DARC (Dirac Atomic R-matrix Codes) enables the study of continuum processes for a general atomic system. The suite of programs calculates electron-atom or electron-ion collision cross-sections and also includes code for bound-state and photoionization calculations.

[ascl:1706.004] Dark Sage: Semi-analytic model of galaxy evolution

DARK SAGE is a semi-analytic model of galaxy formation that focuses on detailing the structure and evolution of galaxies' discs. The code-base, written in C, is an extension of SAGE (ascl:1601.006) and maintains the modularity of SAGE. DARK SAGE runs on any N-body simulation with trees organized in a supported format and containing a minimum set of basic halo properties.

[ascl:2201.006] dark-photons-perturbations: Dark photon conversions in our inhomogeneous Universe

dark-photons-perturbations determines constraints from Cosmic Microwave Background photons oscillating into dark photons, and from heating of the primordial plasma due to dark photon dark matter converting into low-energy photons in an inhomogeneous universe.

[ascl:2112.011] DarkARC: Dark Matter-induced Atomic Response Code

DarkARC computes and tabulates atomic response functions for direct sub-GeV dark matter (DM) searches. The tabulation of the atomic response functions is separated into two steps: 1.) the computation and tabulation of three radial integrals, and 2.) their combination into the response function tables. The computations are performed in parallel using the multiprocessing library.

[ascl:2011.029] DarkBit: Dark matter constraints calculator

DarkBit computes dark matter constraints on extensions to the Standard Model of particle physics. Written in the GAMBIT (ascl:1708.030) framework, it seamlessly integrates with other tools in the statistical fitting framework; it is also available as a standalone tool. It offers a signal yield calculator for gamma-ray observations, provides likelihoods for arbitrary combinations of spin-independent and spin-dependent scattering processes, and provides a general solution for studying complex particle physics models that predict dark matter annihilation to a multitude of final states.

[ascl:2011.005] DarkCapPy: Dark Matter Capture and Annihilation

DarkCapPy calculates rates associated with dark matter capture in the Earth, annihilation into light mediators, and observable decay of the light mediators near the surface of the Earth. This Python/Jupyter package can calculate the Sommerfeld enhancement at the center of the Earth and the timescale for capture-annihilation equilibrium, and can be modified for other compact astronomical objects and mediator spins.

[ascl:2103.009] DarkEmulator: Cosmological emulation code for halo clustering statistics

The cosmology code DarkEmulator calculates summary statistics of large-scale structure constructed as part of the Dark Quest Project. The “dark_emulator” Python package enables fast and accurate computations of halo clustering quantities. The code supports the halo mass function, halo-matter cross-correlation, and halo auto-correlation as a function of halo mass, redshift, separation, and cosmological model.

[ascl:2204.019] DarkFlux: Dark Matter annihilation spectrum computer

DarkFlux analyzes indirect-detection signatures for next-generation models of dark matter (DM) with multiple annihilation channels. Inputs are user-generated models with 2 → 2 tree-level dark matter annihilation to pairs of Standard Model (SM) particles. The code analyzes DM annihilation to γ rays using three modules: one computes the fractional annihilation rate, the second computes the total flux at Earth due to DM annihilation, and the third compares the total flux to observational data and computes the upper limit at 95% confidence level (CL) on the total thermally averaged DM annihilation cross section.

[ascl:2007.010] DarkHistory: Modified cosmic ionization and thermal histories calculator

DarkHistory calculates the global temperature and ionization history of the universe given an exotic source of energy injection, such as dark matter annihilation or decay. The software simultaneously solves for the evolution of the free electron fraction and gas temperature, and for the cooling of annihilation/decay products and the secondary particles produced in the process. Consequently, we can self-consistently include the effects of both astrophysical and exotic sources of heating and ionization, and automatically take into account backreaction, where modifications to the ionization/temperature history in turn modify the energy-loss processes for injected particles.

[ascl:2305.011] DarkMappy: Mapping the dark universe

DarkMappy reconstructs maximum a posteriori (MAP) convergence maps by formulating an unconstrained Bayesian inference problem in order to implement hybrid Bayesian dark-matter reconstruction techniques on the plane and on the celestial sphere. These convergence maps support principled uncertainty quantification and provide hypothesis testing of structure, from which it is possible to distinguish between physical objects and artifacts of the reconstruction.

[ascl:2106.032] DarkSirensStat: Measuring modified GW propagation and the Hubble parameter

DarkSirensStat statistically measures modified gravitational wave (GW) propagation and the Hubble parameter. The package implements a hierarchical Bayesian framework for constraining the Hubble parameter and modified GW propagation with dark sirens and galaxy catalogs. The package downloads the needed data, which include the GLADE galaxy catalog, O2 and O3 skymaps from the LVC official data releases, and O2 and O3 strain sensitivities. The default options run inference for H0 on the O3 BBH events, with a flat prior between 20 and 140, mask completeness with 9 masks, interpolation between multiplicative and homogeneous completion, B-band luminosity weights, and a completeness threshold of 50%. The selection effects are computed with Monte Carlo methods.

[ascl:1110.002] DarkSUSY: Supersymmetric Dark Matter Calculations

DarkSUSY, written in Fortran, is a publicly-available advanced numerical package for neutralino dark matter calculations. In DarkSUSY one can compute the neutralino density in the Universe today using precision methods which include resonances, pair production thresholds and coannihilations. Masses and mixings of supersymmetric particles can be computed within DarkSUSY or with the help of external programs such as FeynHiggs, ISASUGRA and SUSPECT. Accelerator bounds can be checked to identify viable dark matter candidates. DarkSUSY also computes a large variety of astrophysical signals from neutralino dark matter, such as direct detection in low-background counting experiments and indirect detection through antiprotons, antideuterons, gamma-rays and positrons from the Galactic halo or high-energy neutrinos from the center of the Earth or of the Sun.

[ascl:2101.015] DarpanX: X-ray reflectivity of multilayer mirrors

DarpanX computes reflectivity and other specular optical functions of a multilayer or single-layer mirror for different energies and angles, and can also fit X-ray reflectivity (XRR) measurements of mirrors. It can be used as a standalone package. It has also been implemented as a local module for XSPEC (ascl:9910.005); this module is accessed through, and requires, PyXspec (ascl:2101.014), and can accurately fit experimentally measured X-ray reflectivity data. DarpanX is implemented as a Python 3 module and an API is provided to access the underlying algorithms.

[ascl:2409.001] DarsakX: X-ray telescope design and imaging performance analyzer

Written in Python, DarsakX is used to design and analyze the imaging performance of a multi-shell X-ray telescope with an optical configuration similar to Wolter-1 optics for astronomical purposes. It can also assess the impact of figure error on the telescope's imaging performance and optimize the optical design to improve angular resolution for wide-field telescopes. By default, DarsakX uses DarpanX (ascl:2101.015) to calculate the mirror's reflectivity.

[ascl:1402.027] Darth Fader: Galaxy catalog cleaning method for redshift estimation

Darth Fader is a wavelet-based method for extracting spectral features from very noisy spectra. Spectra for which a reliable redshift cannot be measured are identified and removed from the input data set automatically, resulting in a clean catalogue that gives an extremely low rate of catastrophic failures even when the spectra have a very low S/N. This technique may offer a significant boost in the number of faint galaxies with accurately determined redshifts.

[ascl:2002.009] DASH: Deep Automated Supernova and Host classifier

DASH classifies the type, age, redshift, and host of any supernova spectrum based on the features of each supernova's type and age learned by using a deep convolutional neural network to train a matching algorithm. The Python library allows a user to classify spectra; the software is fast and can classify thousands of spectra in seconds. A graphical interface that enables a user to view and classify a spectrum is also available.

[ascl:2009.023] DASTCOM5: JPL small-body data browser

DASTCOM5 is a portable direct-access database containing all NASA/JPL asteroid and comet orbit solutions, and the software to access it. Available data include orbital elements, orbit diagrams, physical parameters, and discovery circumstances. A JPL implementation of the software is available at http://ssd.jpl.nasa.gov/sbdb.cgi.

[submitted] Data modelling approaches to astronomical data - Mapping large spectral line data cubes to dimensional data models

As a new generation of large-scale telescopes is expected to produce single data products in the range of hundreds of GBs to multiple TBs, different approaches to I/O-efficient data interaction and extraction need to be investigated and made available to researchers. This will become increasingly important as the downloading and distribution of TB-scale data products becomes unsustainable and researchers have to take their processing and analysis to the data. We present a methodology to extract three-dimensional spatial-spectral data from dimensionally modelled tables in Parquet format on a Hadoop system. The data is loaded into the Parquet tables from FITS cube files using a dedicated process. We compare the performance of extracting data using the Apache Spark parallel compute framework on top of the Parquet-Hadoop ecosystem with data extraction from the original source files on a shared file system. We have found that the Spark-Parquet-Hadoop solution provides significant performance benefits, particularly in a multi-user environment. We present a detailed analysis of the single- and multi-user experiments conducted and also discuss the benefits and limitations of the platform used for this study.
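
A minimal PySpark sketch of the kind of subcube extraction described (the table path and column names here are hypothetical):

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("subcube-extract").getOrCreate()

# Dimensionally modelled spectral-line table, one row per voxel (assumed schema).
cube = spark.read.parquet("hdfs:///data/spectral_cube.parquet")

subcube = (cube
    .filter((F.col("ra").between(52.0, 52.5))
          & (F.col("dec").between(-28.2, -27.8))
          & (F.col("freq").between(1.41e9, 1.42e9)))
    .select("ra", "dec", "freq", "flux"))

subcube.write.parquet("hdfs:///scratch/subcube.parquet")
```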

[ascl:2307.016] DataComb: Combining data for better images

DataComb combines radio interferometric and single dish observations and obtains quantitative measures of how different techniques perform to obtain better fidelity images. The package relies on CASA (ascl:1107.013) for the combinations and on AstroPy (ascl:1304.002) for making quantitative comparisons between different images produced by different methods. Model images and simulations are also used to assess the different combination methods.

[ascl:1405.011] DATACUBE: A datacube manipulation package

DATACUBE is a command-line package for manipulating and visualizing data cubes. It was designed for integral field spectroscopy but has been extended to be a generic data cube tool, used in particular for sub-millimeter data cubes from the James Clerk Maxwell Telescope. It is part of the Starlink software collection (ascl:1110.012).

[ascl:1903.012] DAVE: Discovery And Vetting of K2 Exoplanets

DAVE implements a pipeline to find and vet planets using data from NASA's K2 mission. The pipeline contains several modules tailored to particular aspects of the vetting procedures, using photocenter analysis to rule out background eclipsing binaries and flux time-series analysis to rule out odd-even differences, secondary eclipses, low-S/N events, variability other than a transit, and the size of the transiting object.

[ascl:2108.020] DBSP_DRP: DBSP Data Reduction Pipeline

DBSP_DRP reduces data from the Palomar spectrograph DBSP. Built on top of PypeIt (ascl:1911.004), it automates the reduction, fluxing, telluric correction, and combining of the red and blue sides of one night's data. The pipeline also provides several GUIs for easier control of the reduction: one for selecting which data to reduce and for verifying the correctness of FITS headers in an editable table, and another for manually placing traces for "forced" spectroscopy (the -m option), then selecting sky regions and tweaking the FWHM of the manual traces.

[ascl:1709.006] DCMDN: Deep Convolutional Mixture Density Network

Deep Convolutional Mixture Density Network (DCMDN) estimates probabilistic photometric redshift directly from multi-band imaging data by combining a version of a deep convolutional network with a mixture density network. The estimates are expressed as Gaussian mixture models representing the probability density functions (PDFs) in the redshift space. In addition to the traditional scores, the continuous ranked probability score (CRPS) and the probability integral transform (PIT) are applied as performance criteria. DCMDN is able to predict redshift PDFs independently from the type of source, e.g. galaxies, quasars or stars and renders pre-classification of objects and feature extraction unnecessary; the method is extremely general and allows the solving of any kind of probabilistic regression problems based on imaging data, such as estimating metallicity or star formation rate in galaxies.

[ascl:1207.006] dcr: Cosmic Ray Removal

This code detects cosmic rays in single images. The algorithm is based on a simple analysis of the histogram of the image data and does not rely on any modeling of the observed object. It does not require a good signal-to-noise ratio in the image data. Identification of multiple-pixel cosmic-ray hits is realized by running the detection-and-replacement procedure iteratively. The method is very effective when applied to images with spectroscopic data, and is also very fast in comparison with other single-image algorithms found in astronomical data-processing packages. Practical implementation and examples of application are presented in the code paper.
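
A simplified sketch of iterative, histogram-driven cosmic-ray cleaning in this spirit (using a robust-sigma threshold in place of dcr's exact histogram criterion, so this is illustrative rather than the published algorithm):

```python
import numpy as np

def clean(image, nsigma=5.0, niter=4):
    # Flag pixels in the sparse upper tail of the pixel-value distribution
    # and replace them with a local median; iterate to catch multi-pixel hits.
    img = image.copy()
    for _ in range(niter):
        med = np.median(img)
        mad = 1.4826 * np.median(np.abs(img - med))   # robust width estimate
        mask = img > med + nsigma * mad
        if not mask.any():
            break
        padded = np.pad(img, 2, mode='reflect')
        for y, x in zip(*np.nonzero(mask)):
            img[y, x] = np.median(padded[y:y + 5, x:x + 5])   # 5x5 local median
    return img
```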

[ascl:2011.030] DDCalc: Dark matter direct detection phenomenology package

DDCalc performs various dark matter direct detection calculations, including signal rate predictions, constraints on light DM, and likelihoods for several experiments. It offers eighteen non-relativistic effective operators to describe velocity and momentum transfer, and elastic scattering of DM particles off nucleons, and has an extended detector interface.

[ascl:2305.008] DDFacet: Facet-based radio imaging package

DDFacet provides a wideband wide-field spectral imaging and deconvolution framework that accounts for generic direction-dependent effects (DDEs). It implements a wide-field coplanar faceting scheme and uses nontrivial facet-dependent w-kernels to correct for noncoplanarity within the facets. In the imaging and deconvolution steps, DDFacet can handle generic, spatially discrete, time-frequency-baseline-direction-dependent full polarization Jones matrices, and computes a direction-dependent PSF for use in the minor cycle of deconvolution for time-frequency-baseline dependent Mueller matrices. The code also allows for the effects of time and bandwidth averaging to be explicitly incorporated into deconvolution. DDFacet has been successfully tested with data from diverse telescopes such as LOFAR, VLA, MeerKAT AR1, and ATCA.

[ascl:1212.012] ddisk: Debris disk time-evolution

ddisk is an IDL script that calculates the time evolution of a circumstellar debris disk. It calculates dust abundances over time for a debris disk produced by a planetesimal disk that is grinding away through collisional erosion.

[ascl:1810.020] DDS: Debris Disk Radiative Transfer Simulator

DDS simulates scattered light and thermal reemission in arbitrary optically thin dust distributions with spherical, homogeneous grains, where the dust parameters (optical properties, sublimation temperature, grain size) and the SED of the illuminating/heating radiative source can be arbitrarily defined. The code is optimized for studying circumstellar debris disks, where large grains (i.e., with large size parameters) are expected to determine the far-infrared through millimeter dust reemission spectral energy distribution. The approach used to calculate dust temperatures and dust reemission spectra is only valid in the optically thin regime; the validity of this constraint is verified for each model during the runtime of the code. The relative abundances of different grains can be arbitrarily chosen but must be constant outside the dust sublimation region, i.e., the shape of the (arbitrary) radial dust density distribution outside the dust sublimation region is the same for all grain sizes and chemistries.

[ascl:0008.001] DDSCAT: The discrete dipole approximation for scattering and absorption of light by irregular particles

DDSCAT is a freely available software package which applies the "discrete dipole approximation" (DDA) to calculate scattering and absorption of electromagnetic waves by targets with arbitrary geometries and complex refractive index. The DDA approximates the target by an array of polarizable points. DDSCAT.5a requires that these polarizable points be located on a cubic lattice. DDSCAT allows accurate calculations of electromagnetic scattering from targets with "size parameters" 2 pi a/lambda < 15, provided the refractive index m is not large compared to unity (|m-1| < 1). The DDSCAT package is written in Fortran and is highly portable. The program supports calculations for a variety of target geometries (e.g., ellipsoids, regular tetrahedra, rectangular solids, finite cylinders, hexagonal prisms). Target materials may be both inhomogeneous and anisotropic. It is straightforward for the user to import arbitrary target geometries into the code, and relatively straightforward to add new target-generation capability to the package. DDSCAT automatically calculates total cross sections for absorption and scattering and selected elements of the Mueller scattering intensity matrix for a specified orientation of the target relative to the incident wave and for specified scattering directions. The accompanying User Guide explains how to use DDSCAT to carry out EM scattering calculations and describes CPU and memory requirements.

[ascl:2401.007] deal.II: Finite element library

deal.II computes solutions to partial differential equations (PDEs) using adaptive finite elements. The code provides an interface for processing PDEs accessible to both laptops and supercomputers, and has been used to investigate the local and global waveform effects of gravitational waves by numerical simulation. deal.II supports massively parallel computing of very large linear systems of equations and provides access to triangulation of various geometries of the simulation domain.

[ascl:1510.004] DEBiL: Detached Eclipsing Binary Light curve fitter

DEBiL rapidly fits a large number of light curves to a simple model. It is the central component of a pipeline for systematically identifying and analyzing eclipsing binaries within a large dataset of light curves; the results of DEBiL can be used to flag light curves of interest for follow-up analysis.

[ascl:2001.008] DebrisDiskFM: Debris Disk Forward Modeling

DebrisDiskFM provides forward modeling for circumstellar debris disks in scattered light. It uses the MCFOST disk modeling software to generate disk model images from given input parameters and emcee (ascl:1303.002) to obtain the posterior distributions for these parameters.

[ascl:1501.005] DECA: Decomposition of images of galaxies

DECA performs photometric analysis of images of disk and elliptical galaxies having a regular structure. It is written in Python and combines the capabilities of several widely used packages for astronomical data processing such as IRAF (ascl:9911.002), SExtractor (ascl:1010.064), and the GALFIT (ascl:1104.010) code to perform two-dimensional decomposition of galaxy images into several photometric components (bulge+disk). DECA can be applied to large samples of galaxies with different orientations with respect to the line of sight (including edge-on galaxies) and requires minimum human intervention.

[ascl:2302.002] deconfuser: Fast orbit fitting to directly imaged multi-planetary systems

Deconfuser performs fast orbit fitting to directly imaged multi-planetary systems. It quickly fits orbits to planet detections in 2D images and ensures that all orbits within a certain tolerance are found. The code also tests all groupings of detections by planets (which detection belongs to which planet) and ranks these partitions by deciding which assignment of detections to planets best fits the data.

[ascl:1801.006] DecouplingModes: Passive modes amplitudes

DecouplingModes calculates the amplitude of the passive modes, which requires solving the Einstein equations on superhorizon scales sourced by the anisotropic stress from the magnetic fields (prior to neutrino decoupling), and the magnetic and neutrino stress (after decoupling). The code is available as a Mathematica notebook.

[ascl:1603.015] Dedalus: Flexible framework for spectrally solving differential equations

Dedalus solves differential equations using spectral methods. It implements flexible algorithms to solve initial-value, boundary-value, and eigenvalue problems with broad ranges of custom equations and spectral domains. Its primary features include symbolic equation entry, multidimensional parallelization, implicit-explicit timestepping, and flexible analysis with HDF5. The code is written primarily in Python and features an easy-to-use interface. The numerical algorithm produces highly sparse systems for many equations which are efficiently solved using compiled libraries and MPI.
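
A minimal initial-value problem in the Dedalus v2-style interface (a sketch only; the exact API differs between Dedalus versions, so consult the documentation for current, complete examples):

```python
import numpy as np
from dedalus import public as de

# 1D diffusion, dt(u) = nu * dxx(u), on a periodic Fourier basis.
xbasis = de.Fourier('x', 256, interval=(0, 2 * np.pi))
domain = de.Domain([xbasis], grid_dtype=np.float64)

problem = de.IVP(domain, variables=['u'])
problem.parameters['nu'] = 1e-2
problem.add_equation("dt(u) - nu*dx(dx(u)) = 0")   # symbolic equation entry

solver = problem.build_solver(de.timesteppers.RK222)
u = solver.state['u']
u['g'] = np.sin(domain.grid(0))                     # initial condition on the grid
for _ in range(1000):
    solver.step(1e-2)
```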

[submitted] Deep Embedded Clustering for Open Cluster Characterization with Gaia DR2 Data

Characterizing and understanding open clusters (OCs) allows us to better understand the properties and mechanisms of the Universe, such as stellar formation and the regions where these events occur. They also provide information about stellar processes and the evolution of the galactic disk.

In this paper, we present a novel method to characterize OCs. Our method employs a model built on artificial neural networks (ANNs); more specifically, we adapted a state-of-the-art model, the Deep Embedded Clustering (DEC) model, for our purpose. The developed method improves on classical state-of-the-art techniques, not only in terms of computational efficiency (with lower computational requirements) but also in usability (reducing the number of hyperparameters needed to get a good characterization of the analyzed clusters). For our experiments, we used the Gaia DR2 database as the data source and compared our model with the K-Means clustering technique. Our method achieves good results, in some cases outperforming current techniques.

[ascl:2309.005] DeepGlow: Neural network emulator for BOXFIT

The feed-forward neural network DeepGlow emulates BOXFIT (ascl:2306.059) simulation data of gamma-ray burst (GRB) afterglows. The package provides an easy interface to generate GRB afterglow spectra and light curves mimicking those generated through BOXFIT with high accuracy. The code used to generate the training data and to train the neural networks is also included.

[ascl:2112.017] deeplenstronomy: Pipeline for versatile strong lens sample simulations

deeplenstronomy simulates large datasets for applying deep learning to strong gravitational lensing. It wraps the functionalities of lenstronomy (ascl:1804.012) in a convenient yaml-style interface to generate training datasets. The code can use built-in astronomical surveys, realistic galaxy colors, real images of galaxies, and physically motivated distributions of all parameters to train the neural network to create a simulated dataset.

[ascl:2209.003] DeepMass: Cosmological map inference with deep learning

DeepMass infers dark matter maps from weak gravitational lensing measurements and uses deep learning to reconstruct cosmological maps. The code can also be incorporated into a Moment Network to enable high-dimensional likelihood-free inference.

[ascl:1805.029] DeepMoon: Convolutional neural network trainer to identify moon craters

DeepMoon trains a convolutional neural net using data derived from a global digital elevation map (DEM) and catalog of craters to recognize craters on the Moon. The TensorFlow-based pipeline code is divided into three parts. The first generates a set of images of the Moon randomly cropped from the DEM, with corresponding crater positions and radii. The second trains a convnet using these data, and the third validates the convnet's predictions.

[ascl:2011.026] DeepShadows: Finding low-surface-brightness galaxies in survey images

DeepShadows uses convolutional neural networks (CNNs) to separate low-surface-brightness galaxies (LSBGs) from artifacts (such as Galactic cirrus and star-forming regions) in survey images. The model is trained and tested on labeled LSBGs and artifacts from the Dark Energy Survey and demonstrates that CNNs offer a promising path in the quest to study the low-surface-brightness universe.

[ascl:2006.023] deepSIP: deep learning of Supernova Ia Parameters

deepSIP (deep learning of Supernova Ia Parameters) measures the phase and light-curve shape of a Type Ia Supernova (SN Ia) from an optical spectrum. The package contains a set of three trained Convolutional Neural Networks (CNNs) for the aforementioned purposes, but tools for preprocessing spectra, modifying the neural architecture, training models, and sweeping through hyperparameters are also included.

[ascl:2006.008] DeepSphere: Graph-based spherical convolutional neural network for cosmology

DeepSphere implements a generalization of Convolutional Neural Networks (CNNs) to the sphere. It models the discretized sphere as a graph of connected pixels. The resulting convolution is more efficient (especially when data doesn't span the whole sphere) and mostly equivariant to rotation (small distortions are due to the non-existence of a regular sampling of the sphere). The pooling strategy exploits a hierarchical pixelization of the sphere (HEALPix) to analyze the data at multiple scales. The graph neural network model is based on ChebNet and its TensorFlow implementation.

[ascl:2112.004] Defringe: Fringe artifact correction

Defringe corrects fringe artifacts in near-infrared astronomical images taken with old generation CCD cameras. It essentially solves a robust PCA problem, masking out astrophysical sources, and models the contaminants as a linear superposition of (unknown) modes, with (unknown) projection coefficients. The problem uses nuclear norm regularization, which acts as a convex proxy for rank minimization. The code is written in Python, using CuPy for GPU acceleration, but will also work on CPUs.
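
The proximal step at the heart of such nuclear-norm-regularized formulations is singular-value soft-thresholding. A generic numpy sketch of that step (illustrative only; not Defringe's actual API or full masked algorithm) is:

```python
import numpy as np

def svt(X, tau):
    # Singular-value thresholding: the proximal operator of the nuclear norm,
    # shrinking singular values toward zero to favor low-rank solutions.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

# Each row holds one flattened exposure; in the full algorithm, masked
# (source) pixels would be excluded from the data-fidelity term.
stack = np.random.default_rng(0).normal(size=(20, 4096))
low_rank_fringe_model = svt(stack, tau=5.0)
```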

[ascl:1405.004] Defringeflat: Fringe pattern removal

The IDL package Defringeflat identifies and removes fringe patterns from images such as spectrograph flat fields. It uses a wavelet transform to calculate the frequency spectrum in a region around each point of a one-dimensional array. The wavelet transform amplitude is reconstructed from (smoothed) parameters to obtain the fringe's wavelet transform, after which an inverse wavelet transform yields the computed fringe pattern, which is then removed from the flat.

[ascl:1011.012] DEFROST: Simulating preheating after inflation

At the end of inflation, dynamical instability can rapidly deposit the energy of homogeneous cold inflaton into excitations of other fields. This process, known as preheating, is rather violent, inhomogeneous and non-linear, and has to be studied numerically. DEFROST simulates preheating of the Universe after the end of the inflation. It is small, easy to modify, very fast, and fully instrumented for 3D visualizations. An MPI extension for this code, MPI-DEFROST (ascl:1106.022), is available.

[ascl:2208.012] DELIGHT: Identify host galaxies of transient candidates

DELIGHT (Deep Learning Identification of Galaxy Hosts of Transients) automatically identifies host galaxies of transient candidates using multi-resolution images and a convolutional neural network. This library has a class with several methods to get the most likely host coordinates starting from given transient coordinates. In order to do this, the DELIGHT object needs a list of object identifiers and coordinates (oid, ra, dec). With this information, it downloads PanSTARRS images centered around the position of the transients (2 arcmin x 2 arcmin), gets their WCS solutions, creates the multi-resolution images, does some extra preprocessing of the data, and finally predicts the position of the hosts using a multi-resolution image and a convolutional neural network. DELIGHT can also estimate the host's semi-major axis if requested, taking advantage of the multi-resolution images.

[ascl:2306.005] Delight: Photometric redshift via Gaussian processes with physical kernels

Delight infers photometric redshifts in deep galaxy and quasar surveys. It uses a data-driven model of latent spectral energy distributions (SEDs) and a physical model of photometric fluxes as a function of redshift, thus leveraging the advantages of both machine-learning and template-fitting methods by building template SEDs directly from the training data. Delight obtains accurate redshift point estimates and probability distributions and can also be used to predict missing photometric fluxes or to simulate populations of galaxies with realistic fluxes and redshifts.

[ascl:1602.012] DELightcurveSimulation: Light curve simulation code

DELightcurveSimulation (also called DELCgen) simulates light curves with any given power spectral density and any probability density function, following the algorithm described in Emmanoulopoulos et al. (2013). The simulated products have exactly the same variability and statistical properties as the observed light curves. The code is a Python implementation of the Mathematica code provided by Emmanoulopoulos et al.
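
The Gaussian step inside the Emmanoulopoulos algorithm is the Timmer & Koenig (1995) method, which draws a light curve with a prescribed power spectral density. A minimal numpy sketch of that step (DELCgen additionally iterates amplitude adjustments to impose the target probability density function; normalization details are glossed over here):

```python
import numpy as np

def timmer_koenig(psd, n, dt, rng):
    # Draw Fourier coefficients with variance set by the target PSD,
    # then invert to obtain a Gaussian time series.
    freqs = np.fft.rfftfreq(n, dt)
    amp = np.zeros_like(freqs)
    amp[1:] = np.sqrt(0.5 * psd(freqs[1:]))
    coeffs = amp * (rng.normal(size=freqs.size) + 1j * rng.normal(size=freqs.size))
    return np.fft.irfft(coeffs, n)

rng = np.random.default_rng(0)
lc = timmer_koenig(lambda f: f ** -2.0, n=4096, dt=1.0, rng=rng)  # red-noise PSD
```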

[ascl:2303.014] Delphes: Fast simulation of a generic collider experiment

Delphes simulates a fast multipurpose detector response. The simulation includes a tracking system, embedded into a magnetic field, calorimeters and a muon system. The Delphes framework is interfaced to standard file formats (e.g. Les Houches Event File or HepMC) and outputs observables such as isolated leptons, missing transverse energy and collection of jets that can be used for dedicated analyses. The simulation of the detector response takes into account the effect of magnetic field, the granularity of the calorimeters and sub-detector resolutions. Visualization of the final state particles is also built-in using the corresponding ROOT library.

[ascl:1705.003] demc2: Differential evolution Markov chain Monte Carlo parameter estimator

demc2, also abbreviated as DE-MCMC, is a differential evolution Markov Chain parameter estimation library written in R for adaptive MCMC on real parameter spaces.

[ascl:2104.015] dense_basis: Dense Basis SED fitting

dense_basis implements the Dense Basis method tailored to SED fitting, in particular the task of recovering accurate star formation history (SFH) information from galaxy spectral energy distributions (SEDs). The code's original use-case was simultaneously fitting specific large catalogs of galaxies; it has since been adapted into a general-purpose SED fitting code, and acts as a module to compress and decompress SFHs and other time-series.

[ascl:2312.004] DENSe: Bayesian density estimation for Poisson data

DENSe enables Bayesian non-parametric inferences of densities of Poisson data counts. Its framework of stateless methods is written in Python, although it relies on NIFTy (ascl:1302.013, ascl:1903.008) for the heavy lifting. DENSe utilizes all available information in the data by modeling the inherent correlation structure using a Matérn kernel. The inference of the density from count data can be written in a single line of Python code. The fitting method takes a multidimensional numpy array as input and returns multidimensional arrays of the same dimensions encoding the density field.
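
Taking that description at face value, the advertised one-line fit would look something like the following; the module and function names are hypothetical placeholders rather than the verified DENSe API:

```python
import numpy as np
import dense  # hypothetical module name; check the DENSe docs for the real import

counts = np.random.poisson(lam=3.0, size=(64, 64))  # gridded Poisson counts
density = dense.fit(counts)  # hypothetical one-line fit; the returned arrays
                             # share the dimensions of the input counts array
```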

[ascl:2403.016] DensityFieldTools: Manipulating density fields and measuring power spectra and bispectra

The DensityFieldTools toolset manipulates density fields and measures power spectra and bispectra using a very simple interface. After loading a density field, it computes the power spectrum and the bispectrum for a desired binning. The bispectrum estimator also automatically computes the power spectrum for the chosen binning, to facilitate, for example, shot-noise subtraction. DensityFieldTools also provides a quick way to measure (cross-)power spectra directly from density fields.
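
For orientation, measuring a binned auto power spectrum from a gridded density contrast field amounts to FFTing the field and shell-averaging |delta_k|^2; a self-contained numpy sketch of that operation, with illustrative conventions rather than the DensityFieldTools API:

```python
import numpy as np

def measure_pk(delta, boxsize, nbins=20):
    """Shell-averaged power spectrum of a cubic density contrast field.

    Conceptual illustration of the measurement, not DensityFieldTools code;
    conventions: delta_k = sum(delta * dV * exp(-ik.x)), P(k) = |delta_k|^2 / V.
    """
    n = delta.shape[0]                              # assumes a cubic grid
    delta_k = np.fft.rfftn(delta) * (boxsize / n) ** 3
    kf = 2.0 * np.pi / boxsize                      # fundamental mode
    kx = np.fft.fftfreq(n, d=1.0 / n) * kf          # full-axis wavenumbers
    kz = np.fft.rfftfreq(n, d=1.0 / n) * kf         # half-axis (rfft) wavenumbers
    kmag = np.sqrt(kx[:, None, None] ** 2 + kx[None, :, None] ** 2
                   + kz[None, None, :] ** 2)
    power = np.abs(delta_k) ** 2 / boxsize ** 3
    bins = np.linspace(kf, kmag.max(), nbins + 1)
    idx = np.digitize(kmag.ravel(), bins)
    pk = np.array([power.ravel()[idx == i].mean()   # empty bins give NaN
                   for i in range(1, nbins + 1)])
    return 0.5 * (bins[1:] + bins[:-1]), pk

k, pk = measure_pk(np.random.normal(size=(64, 64, 64)), boxsize=100.0)
```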

[ascl:1904.009] deproject: Deprojection of two-dimensional annular X-ray spectra

Deproject extends Sherpa (ascl:1107.005) to facilitate deprojection of two-dimensional annular X-ray spectra to recover the three-dimensional source properties. For typical thermal models, this includes the radial temperature and density profiles. This basic method is used for X-ray cluster analysis and is the basis for the XSPEC (ascl:9910.005) mixing model projct. The deproject module is written in Python and is straightforward to use and understand. The basic physical assumption of deproject is that the extended source emissivity is constant and optically thin within spherical shells whose radii correspond to the annuli used to extract the spectra. Given this assumption, one constructs a model for each annular spectrum that is a linear volume-weighted combination of shell models.
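
The volume weights follow from intersecting spherical shells with the line-of-sight cylinders subtended by the annuli; a short sketch of this standard "onion-peeling" geometry (an illustration of the method, not the package's own code):

```python
import numpy as np

def v_sphere_cyl(R, r):
    """Volume of a sphere of radius R lying inside a line-of-sight cylinder
    of radius r whose axis passes through the sphere's center."""
    if r >= R:
        return 4.0 / 3.0 * np.pi * R**3
    return 4.0 / 3.0 * np.pi * (R**3 - (R**2 - r**2) ** 1.5)

def shell_annulus_volume(R_in, R_out, r_in, r_out):
    """Volume of the spherical shell [R_in, R_out] projected into the
    annulus [r_in, r_out] -- the weight linking each shell model to each
    annular spectrum in onion-peeling deprojection."""
    return (v_sphere_cyl(R_out, r_out) - v_sphere_cyl(R_out, r_in)
            - v_sphere_cyl(R_in, r_out) + v_sphere_cyl(R_in, r_in))

# Example: contribution of the 100-200 kpc shell to the 0-100 kpc annulus.
print(shell_annulus_volume(100.0, 200.0, 0.0, 100.0))
```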

[ascl:1511.017] DES exposure checker: Dark Energy Survey image quality control crowdsourcer

DES exposure checker renders science-grade images directly to a web browser and allows users to mark problematic features from a set of predefined classes, thus allowing image quality control for the Dark Energy Survey to be crowdsourced through its web application. Users can also generate custom labels to help identify previously unknown problem classes; generated reports are fed back to hardware and software experts to help mitigate and eliminate recognized issues. These problem reports allow rapid correction of artifacts that otherwise may be too subtle or infrequent to be recognized.

[ascl:1804.011] DESCQA: Synthetic Sky Catalog Validation Framework

The DESCQA framework provides rigorous validation protocols for assessing the quality of simulated sky catalogs in a straightforward and comprehensive way. DESCQA enables the inspection, validation, and comparison of an inhomogeneous set of synthetic catalogs via the provision of a common interface within an automated framework. An interactive web interface is also available at https://portal.nersc.gov/projecta/lsst/descqa/v2/.

[ascl:2301.025] desitarget: Selecting DESI targets from photometric catalogs

desitarget selects targets for spectroscopic follow-up by the Dark Energy Spectroscopic Instrument (DESI). The pipeline uses bitmasks to record that a specific source has been selected by a particular targeting algorithm, setting bit-values in output data files in a number of different columns that indicate whether a particular target meets specific selection criteria. desitarget also outputs a unique TARGETID that allows each target to be tracked throughout the DESI survey. This TARGETID encodes information about each DESI target, such as the catalog the target was selected from, whether a target is a sky location or part of a random catalog, and whether a target is part of a secondary program.

[ascl:1304.007] DESPOTIC: Derive the Energetics and SPectra of Optically Thick Interstellar Clouds

DESPOTIC (Derive the Energetics and SPectra of Optically Thick Interstellar Clouds), written in Python, represents optically thick interstellar clouds using a one-zone model and calculates line luminosities, line cooling rates, and in restricted cases line profiles using an escape probability formalism. DESPOTIC calculates clouds' equilibrium gas and dust temperatures and their time-dependent thermal evolution. The code allows rapid and interactive calculation of clouds' characteristic temperatures, identification of their dominant heating and cooling mechanisms, and prediction of their observable spectra across a wide range of interstellar environments.

[submitted] Determination of Length of (Earth) Day [LOD] in the past geologic epochs

The protocol describes the algorithm for arriving at the LOD in a given past geologic epoch. First, the lunar orbital radius of the given geologic epoch has to be determined. For this, the velocity of recession of the Moon in the accelerated phase has to be determined. The spatial integral of the reciprocal of the velocity of recession gives the transit time of the Moon from the desired orbit to the present orbit. Through several iterations the transit time is made to converge on the geologic epoch. Once the desired orbital radius is determined, it is substituted in the LOD expression to obtain the LOD of the given geologic epoch.

[ascl:1907.008] Dewarp: Distortion removal and on-sky orientation solution for LBTI detectors

Dewarp constructs pipelines to remove distortion from a detector and find the orientation with true North. It was originally written for the LBTI LMIRcam detector, but is generalizable to any project with reference sources and/or an astrometric field paired with a machine-readable file of astrometric target locations.

[ascl:1402.022] DexM: Semi-numerical simulations for very large scales

DexM (Deus ex Machina) efficiently generates density, halo, and ionization fields on very large scales and with a large dynamic range through seminumeric simulation. These properties are essential for reionization studies, especially those involving rare, massive QSOs, since one must be able to statistically capture the ionization field. DexM can also generate ionization fields directly from the evolved density field to account for the ionizing contribution of small halos. Semi-numerical simulations use more approximate physics than numerical simulations, but independently generate 3D cosmological realizations. DexM is portable and fast, and allows for explorations of wide swaths of astrophysical parameter space and an unprecedented dynamic range.

[ascl:1112.015] Dexter: Data Extractor for scanned graphs

The NASA Astrophysics Data System (ADS) now holds 1.3 million scanned pages, containing numerous plots and figures for which the original data sets are lost or inaccessible. The availability of scans of the figures can significantly ease the regeneration of the data sets. For this purpose, the ADS has developed Dexter, a Java applet that supports the user in this process. Dexter's basic functionality is to let the user manually digitize a plot by marking points and defining the coordinate transformation from the logical to the physical coordinate system. Advanced features include automatic identification of axes, tracing lines and finding points matching a template.

[ascl:1904.017] dfitspy: A dfits/fitsort implementation in Python

dfitspy searches and displays metadata contained in FITS files. Written in Python, it displays the results of a metadata search and is able to grep certain values of keywords inside large samples of files in the terminal. dfitspy can be used directly from the command line and can also be imported as a Python module into other Python code or the Python interpreter.

[ascl:1805.002] dftools: Distribution function fitting

dftools, written in R, finds the most likely P parameters of a D-dimensional distribution function (DF) generating N objects, where each object is specified by D observables with measurement uncertainties. For instance, if the objects are galaxies, it can fit a mass function (D=1), a mass-size distribution (D=2) or the mass-spin-morphology distribution (D=3). Unlike most common fitting approaches, this method accurately accounts for measurement uncertainties and complex selection functions.

[ascl:2410.011] DGEM: 3D dust continuum radiative transfer code for method comparison

DGEM compares different computation methods for three-dimensional dust continuum radiative transfer. This simple code is based on mcpolar, translated to C++, and refactored to implement and compare radiative transfer techniques, namely Monte Carlo, Quasi-Monte-Carlo, and the Directions Grid Enumeration Method (DGEM). DGEM uses precalculated photon propagation directions instead of random ones to speed up the calculation. The code also offers a gnuplot script for plotting the resulting images.

[ascl:1410.001] DIAMONDS: high-DImensional And multi-MOdal NesteD Sampling

DIAMONDS (high-DImensional And multi-MOdal NesteD Sampling) provides Bayesian parameter estimation and model comparison by means of the nested sampling Monte Carlo (NSMC) algorithm, an efficient and powerful method very suitable for high-dimensional and multi-modal problems; it can be used for any application involving Bayesian parameter estimation and/or model selection in general. Developed in C++11, DIAMONDS is structured in classes for flexibility and configurability. Any new model, likelihood and prior PDFs can be defined and implemented upon a basic template.

[ascl:2103.030] DIAPHANE: Library for radiation and neutrino transport in hydrodynamical simulations

DIAPHANE provides a common platform for application-independent radiation and neutrino transport in astrophysical simulations. The library contains radiation and neutrino transport algorithms for modeling galaxy formation, black hole formation, and planet formation, as well as supernova stellar explosions. DIAPHANE is written in C and C++, but as many hydrodynamic codes use Fortran, the library includes examples of how to interface the library from the Fortran codes SPHYNX (ascl:1709.001) and RAMSES (ascl:1011.007).

[ascl:1607.002] DICE: Disk Initial Conditions Environment

DICE models initial conditions of idealized galaxies to study their secular evolution or their more complex interactions such as mergers or compact groups using N-Body/hydro codes. The code can set up a large number of components modeling distinct parts of the galaxy, and creates 3D distributions of particles using an N-try MCMC algorithm which does not require prior knowledge of the distribution function. The gravitational potential is then computed on a multi-level Cartesian mesh by solving the Poisson equation in Fourier space. Finally, the dynamical equilibrium of each component is computed by integrating the Jeans equations for each particle. Several galaxies can be generated in a single run and placed on Keplerian orbits to model interactions. DICE writes the initial conditions in the Gadget1 or Gadget2 (ascl:0003.001) format and is fully compatible with Ramses (ascl:1011.007).

[ascl:1801.010] DICE/ColDICE: 6D collisionless phase space hydrodynamics using a Lagrangian tessellation

DICE is a C++ template library designed to solve collisionless fluid dynamics in 6D phase space using massively parallel supercomputers via a hybrid OpenMP/MPI parallelization. ColDICE, based on DICE, implements a cosmological and physical Vlasov-Poisson solver for cold systems such as cold dark matter (CDM) dynamics.

[ascl:1704.013] Difference-smoothing: Measuring time delay from light curves

The Difference-smoothing MATLAB code measures the time delay from the light curves of images of a gravitationally lensed quasar. It uses a smoothing timescale free parameter, generates more realistic synthetic light curves to estimate the time delay uncertainty, and uses a χ2 plot to assess the reliability of a time delay measurement as well as to identify instances of catastrophic failure of the time delay estimator. A systematic bias in the measurement of time delays for some light curves can be eliminated by applying a correction to each measured time delay.

[ascl:2302.025] Diffmah: Differentiable models of halo and galaxy formation history

Diffmah approximates the growth of individual halos as a simple power-law function of time, where the power-law index smoothly decreases as the halo transitions from the fast-accretion regime at early times to the slow-accretion regime at late times. The code has a typical accuracy of 0.1 dex for times greater than one billion years in halos of mass greater than 10^11 M_sun. Diffmah self-consistently captures the mean and variance of halo mass accretion rates across long time scales, and it generates Monte Carlo simulations of cosmologically-representative and differentiable halo histories.
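
In schematic form, such a model is a power law in time whose index rolls between an early and a late value; a numpy paraphrase with illustrative parameter names (Diffmah's actual parameterization and API may differ):

```python
import numpy as np

def halo_mass_history(t, logm0, t0, alpha_early, alpha_late, tau, k=3.0):
    """Schematic rolling power-law mass history, returning log10 M(t).

    The index rolls smoothly (here via a sigmoid in log time centered on
    tau) from the fast-accretion value alpha_early to the slow-accretion
    value alpha_late. Parameter names are illustrative, not Diffmah's own.
    """
    frac = 1.0 / (1.0 + np.exp(-k * np.log(t / tau)))   # 0 early -> 1 late
    alpha = alpha_early + (alpha_late - alpha_early) * frac
    return logm0 + alpha * np.log10(t / t0)             # logm0 is log M at t0

t = np.linspace(0.5, 13.8, 100)                         # cosmic time in Gyr
logm = halo_mass_history(t, logm0=12.0, t0=13.8,
                         alpha_early=2.5, alpha_late=0.5, tau=2.0)
```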

[ascl:2302.012] Diffstar: Differentiable star formation histories

Diffstar fits the star formation history (SFH) of galaxies to a smooth parametric model. Diffstar differs from existing SFH models because the parameterization of the model is directly based on basic features of galaxy formation physics, including halo mass assembly history, accretion of gas into the dark matter halo, the fraction of gas that is converted into stars, the time scale over which star formation occurs, and the possibility of rejuvenated star formation. The SFHs of a large number of simulated galaxies can be fit in parallel using mpi4py.

[ascl:1512.012] DiffuseModel: Modeling the diffuse ultraviolet background

DiffuseModel calculates the scattered radiation from dust scattering in the Milky Way based on stars from the Hipparcos catalog. It uses Monte Carlo to implement multiple scattering and assumes a user-supplied grid for the dust distribution. The output is a FITS file with the diffuse light over the Galaxy. It is intended for use in the UV (900 - 3000 A) but may be modified for use in other wavelengths and galaxies.

[ascl:1304.008] Diffusion.f: Diffusion of elements in stars

Diffusion.f is an exportable subroutine to calculate the diffusion of elements in stars. The routine solves exactly the Burgers equations and can include any number of elements as variables. The code has been used successfully by a number of different groups; applications include diffusion in the sun and diffusion in globular cluster stars. There are many other possible applications to main sequence and to evolved stars. The associated README file explains how to use the subroutine.

[ascl:1103.001] Difmap: Synthesis Imaging of Visibility Data

Difmap is a program developed for synthesis imaging of visibility data from interferometer arrays of radio telescopes world-wide. Its prime advantages over traditional packages are its emphasis on interactive processing, speed, and the use of Difference mapping techniques.

[ascl:1102.024] DiFX2: A more flexible, efficient, robust and powerful software correlator

Software correlation, where a correlation algorithm written in a high-level language such as C++ is run on commodity computer hardware, has become increasingly attractive for small to medium sized and/or bandwidth constrained radio interferometers. In particular, many long baseline arrays (which typically have fewer than 20 elements and are restricted in observing bandwidth by costly recording hardware and media) have utilized software correlators for rapid, cost-effective correlator upgrades to allow compatibility with new, wider bandwidth recording systems and improve correlator flexibility. The DiFX correlator, made publicly available in 2007, has been a popular choice in such upgrades and is now used for production correlation by a number of observatories and research groups worldwide. Here we describe the evolution in the capabilities of the DiFX correlator over the past three years, including a number of new capabilities, substantial performance improvements, and a large amount of supporting infrastructure to ease use of the code. New capabilities include the ability to correlate a large number of phase centers in a single correlation pass, the extraction of phase calibration tones, correlation of disparate but overlapping sub-bands, the production of rapidly sampled filterbank and kurtosis data at minimal cost, and many more. The latest version of the code is at least 15% faster than the original, and in certain situations many times this value. Finally, we also present detailed test results validating the correctness of the new code.

[ascl:1904.023] digest2: NEO binary classifier

digest2 classifies Near-Earth Object (NEO) candidates by providing a score, D2, that represents a pseudo-probability that a tracklet belongs to a given solar system orbit type. The code accurately and precisely distinguishes NEOs from non-NEOs, thus helping to identify those to be prioritized for follow-up observation. This fast, short-arc orbit classifier for small solar system bodies is built upon the Pangloss code developed by Robert McNaught and further developed by Carl Hergenrother and Tim Spahr, and on Robert Jedicke's 223.f code.

[ascl:1010.031] DimReduce: Nonlinear Dimensionality Reduction of Very Large Datasets with Locally Linear Embedding (LLE) and its Variants

DimReduce is a C++ package for performing nonlinear dimensionality reduction of very large datasets with Locally Linear Embedding (LLE) and its variants. DimReduce is built for speed, using the optimized linear algebra packages BLAS, LAPACK (ascl:2104.020), and ARPACK (ascl:1311.010). Because of the need to store very large matrices (1000 by 10000, for our SDSS LLE work), DimReduce is designed to use binary FITS files as inputs and outputs, which makes using the code a bit more cumbersome. For smaller-scale LLE, where speed of computation is not as much of an issue, the Modular Data Processing toolkit may be a better choice; it is a Python toolkit with some LLE functionality, to which VanderPlas contributed.

This code has been rewritten and included in scikit-learn, and an improved version is available at http://mmp2.github.io/megaman/.

[submitted] DIPol-UF: Remote control software for DIPol-UF polarimeter

DIPol-UF provides tools for remote control and operation of DIPol-UF, an optical (BVR) imaging CCD polarimeter. The project contains libraries that handle low-level interoperation with ANDOR SDK (provided by the CCD manufacturer), communication with stepper motors (which perform plate rotations), FITS file serialization/deserialization, over-network communication between different system components (each CCD is connected to a standalone PC), as well as provide GUI (built with WPF).

[ascl:1908.005] dips: Detrending periodic signals in timeseries

dips detrends timeseries of strictly periodic signals. It does not assume any functional form for the signal or the background or the noise; it disentangles the strictly periodic component from everything else. It has been used for detrending Kepler, K2 and TESS timeseries of periodic variable stars, eclipsing binary stars, and exoplanets.

[ascl:1405.016] DIPSO: Spectrum analysis code

DIPSO plots spectroscopic data rapidly and combines analysis and high-quality graphical output in a simple command-line driven interactive environment. It can be used, for example, to fit emission lines, measure equivalent widths and fluxes, do Fourier analysis, and fit models to spectra. A macro facility allows convenient execution of regularly used sequences of commands, and a simple Fortran interface permits "personal" software to be integrated with the program. DIPSO is part of the Starlink software collection (ascl:1110.012).

[ascl:2112.012] DiracVsMajorana: Statistical discrimination of sub-GeV Majorana and Dirac dark matter

DiracVsMajorana determines the statistical significance with which a successful electron scattering experiment could reject the Majorana hypothesis -- that dark matter (DM) particles are their own anti-particles, a so-called Majorana fermion -- using the likelihood ratio test in favor of the hypothesis of Dirac DM. The code assumes that the DM interacts with the photon via higher-order electromagnetic moments. It requires tabulated atomic response functions, which can be computed with DarkARC (ascl:2112.011), to compute ionization spectra and predictions for signal event rates.

[ascl:1806.015] DirectDM-mma: Dark matter direct detection

The Mathematica code DirectDM takes the Wilson coefficients of relativistic operators that couple DM to the SM quarks, leptons, and gauge bosons and matches them onto a non-relativistic Galilean invariant EFT in order to calculate the direct detection scattering rates. A Python implementation of DirectDM is also available (ascl:1806.016).

[ascl:1806.016] DirectDM-py: Dark matter direct detection

DirectDM, written in Python, takes the Wilson coefficients of relativistic operators that couple DM to the SM quarks, leptons, and gauge bosons and matches them onto a non-relativistic Galilean invariant EFT in order to calculate the direct detection scattering rates. A Mathematica implementation of DirectDM is also available (ascl:1806.015).

[ascl:2405.011] DirectSHT: Direct spherical harmonic transform

DirectSHT performs direct spherical harmonic transforms for point sets on the sphere. Given a set of points, defined by arrays of theta and phi (in radians) and weights, it provides the spherical harmonic transform coefficients alm. JAX (ascl:2111.002) can be used to speed up the computation; the code will automatically fall back to numpy if JAX is not present. The code is much faster when run on GPUs. When they are available and JAX is installed, the code automatically distributes computation and memory across them.
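
The underlying operation is simply a weighted sum of conjugate spherical harmonics over the points. A brute-force scipy illustration of that quantity follows; the package itself evaluates it far more efficiently via recurrences and optional JAX/GPU acceleration:

```python
import numpy as np
from scipy.special import sph_harm

def direct_sht(theta, phi, w, lmax):
    """Brute-force a_lm = sum_i w_i * conj(Y_lm(theta_i, phi_i)).

    Illustrates the quantity a direct SHT computes; this O(N * lmax^2)
    loop is only a sketch, not DirectSHT's optimized implementation.
    """
    alm = {}
    for ell in range(lmax + 1):
        for m in range(-ell, ell + 1):
            # scipy's sph_harm argument order is (m, l, azimuth, polar)
            y = sph_harm(m, ell, phi, theta)
            alm[(ell, m)] = np.sum(w * np.conj(y))
    return alm

rng = np.random.default_rng(1)
theta = np.arccos(rng.uniform(-1, 1, 100))   # isotropic points on the sphere
phi = rng.uniform(0, 2 * np.pi, 100)
alm = direct_sht(theta, phi, np.ones(100), lmax=8)
```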

[ascl:1102.021] DIRT: Dust InfraRed Toolbox

DIRT is a Java applet for modelling astrophysical processes in circumstellar dust shells around young and evolved stars. With DIRT, you can select and display over 500,000 pre-run model spectral energy distributions (SEDs), find the best-fit model to your data set, and account for beam size in model fitting. DIRT also allows you to manipulate data and models with an interactive viewer, display gas and dust density and temperature profiles, and display model intensity profiles at various wavelengths.

[ascl:2410.009] DIRTY: 3D dust radiative transfer for dusty astrophysical sources

DIRTY (DustI Radiative Transfer, Yeah!) computes the radiative transfer and dust emission from arbitrary distributions of dust illuminated by arbitrary distributions of sources (usually stars). It uses Monte Carlo methods to solve the radiative transfer problem in full 3D including non-equilibrium and equilibrium thermal dust emission. Like other similar models, DIRTY is computationally intensive; as a result, it is written in C++.

[ascl:1403.020] disc2vel: Tangential and radial velocity components derivation

Disc2vel derives tangential and radial velocity components in the equatorial plane of a barred stellar disc from the observed line-of-sight velocity, assuming geometry of a thin disc. The code is written in IDL, and the method assumes that the bar is close to steady state (i.e. does not evolve fast) and that both morphology and kinematics are symmetrical with respect to the major axis of the bar.

[ascl:1605.011] DISCO: 3-D moving-mesh magnetohydrodynamics package

DISCO evolves orbital fluid motion in two and three dimensions, especially at high Mach number, for studying astrophysical disks. The software uses a moving-mesh approach with a dynamic cylindrical mesh that can shear azimuthally to follow the orbital motion of the gas, thus removing diffusive advection errors and permitting longer timesteps than a static grid. DISCO uses an HLLD Riemann solver and a constrained transport scheme compatible with the mesh motion to implement magnetohydrodynamics.

[ascl:2307.011] DiscVerSt: Vertical structure calculator for accretion discs around neutron stars and black holes

DiscVerSt calculates the vertical structure of accretion discs around neutron stars and black holes. Different classes represent the vertical structure for different types of EoS and opacity, temperature gradient and irradiation scheme; the code includes an interface for initializing the chosen structure type. DiscVerSt also contains functions to calculate S-curves and the vertical and radial profile of a stationary disc.

[ascl:1209.011] DiskFit: Modeling Asymmetries in Disk Galaxies

DiskFit implements procedures for fitting non-axisymmetries in either kinematic or photometric data. DiskFit can analyze H-alpha and CO velocity field data as well as HI kinematics to search for non-circular motions in disk galaxies. DiskFit can also be used to constrain photometric models of the disc, bar and bulge. It deprecates an earlier version, by a subset of these authors, called velfit.

[ascl:1603.011] DiskJockey: Protoplanetary disk modeling for dynamical mass derivation

DiskJockey derives dynamical masses for T Tauri stars using the Keplerian motion of their circumstellar disks, applied to radio interferometric data from the Atacama Large Millimeter Array (ALMA) and the Submillimeter Array (SMA). The package relies on RADMC-3D (ascl:1202.015) to perform the radiative transfer of the disk model. DiskJockey is designed to work in a parallel environment where the calculations for each frequency channel can be distributed to independent processors. Due to the computationally expensive nature of the radiative synthesis, fitting sizable datasets (e.g., SMA and ALMA) will require a substantial amount of CPU cores to explore a posterior distribution in a reasonable timeframe.

[ascl:2308.007] DiskMINT: Disk Model For INdividual Targets

DiskMINT (Disk Model for INdividual Targets) models individual disks and derives robust disk mass estimates. Built on RADMC-3D (ascl:1202.015) for continuum (and gas line) radiative transfer, the code includes a reduced chemical network to determine the C18O emission. DiskMINT has a Python3 module that generates a self-consistent 2D disk structure satisfying VHSE (Vertical Hydrostatic Equilibrium). It also contains a Fortran implementation of the reduced chemical network covering the main chemical processes necessary for C18O modeling: isotopologue-selective photodissociation, and grain-surface chemistry in which the conversion of CO to CO2 ice is the main reaction.

[ascl:2002.022] DISKMODs: Accretion Disk Radial Structure Models

DISKMODs provides radial structure models of accretion disk solutions. The following models are included: Novikov-Thorne thin disk model and Sadowski polytropic slim disk model. Each model implements a common interface that gives the radial dependence of selected geometrical, physical and thermodynamic quantities of the accretion flow. The model interpolates through a set of tabulated numerical solutions. These solutions are computed for a reference mass M=10 Msun. The model can rescale the disk structure to any mass, with masses in the range of 5-20 Msun giving reasonably good results.

[ascl:1811.013] DiskSim: Modeling Accretion Disk Dynamics with SPH

DiskSim is a source-code distribution of the SPH accretion disk modeling code previously released in a Windows executable form as FITDisk (ascl:1305.011). The code released now is the full research code in Fortran and can be modified as needed by the user.

[ascl:1108.015] DISKSTRUCT: A Simple 1+1-D Disk Structure Code

DISKSTRUCT is a simple 1+1-D code for modeling protoplanetary disks. It is not based on multidimensional radiative transfer! Instead, a flaring-angle recipe is used to compute the irradiation of the disk, while the disk vertical structure at each cylindrical radius is computed in a 1-D fashion; the models computed with this code are therefore approximate. Moreover, this model cannot deal with the dust inner rim.

In spite of these simplifications and drawbacks, the code can still be very useful for disk studies, for the following reasons:

  • It allows the disk structure to be studied in a 1-D vertical fashion (one radial cylinder at a time). For understanding the structure of disks, and also for using it as a basis of other models, this can be a great advantage.
  • For very optically thick disks this code is likely to be much faster than the RADMC full disk model.
  • Viscous internal heating of the disk is implemented and converges quickly, whereas the RADMC code still has difficulty dealing with high optical depth combined with viscously generated internal heat.

[ascl:2207.028] disksurf: Measure the molecular emission surface of protoplanetary disks

disksurf measures the height of the optically thick emission surface, or photosphere, of moderately inclined protoplanetary disks. The package depends on AstroPy (ascl:1304.002) and uses GoFish (ascl:2011.016) to retrieve data from FITS data cubes; given user-specified parameters, it returns a surface object containing the disk-centric coordinates of the surface and the gas temperature and rotation velocity at those locations. disksurf provides clipping, smoothing, and diagnostic functions as well.

[ascl:2201.013] disnht: Absorption spectrum solver

disnht computes the absorption spectrum for a user-defined distribution of column densities. The input is a file containing the array of column density values; a Python routine that generates a logarithmic distribution of column densities suitable as input is provided. An optional input is a cross-section file containing the 2D array [energy, cross-section]; a script is provided for computing cross sections for different abundance models of the interstellar medium (solar values). Additional Boolean flags control the input and output description, rebinning, plotting, and saving.

[ascl:1708.006] DISORT: DIScrete Ordinate Radiative Transfer

DISORT (DIScrete Ordinate Radiative Transfer) solves the problem of 1D scalar radiative transfer in a single optical medium, such as a planetary atmosphere. The code correctly accounts for multiple scattering by an isotropic or plane-parallel beam source, internal Planck sources, and reflection from a lower boundary. Provided that polarization effects can be neglected, DISORT efficiently calculates accurate fluxes and intensities at any user-specified angle and location within the user-specified medium.

[ascl:1302.015] DisPerSE: Discrete Persistent Structures Extractor

DisPerSE is open source software for the identification of persistent topological features such as peaks, voids, walls and in particular filamentary structures within noisy sampled distributions in 2D and 3D. Using DisPerSE, structure identification can be achieved through the computation of the discrete Morse-Smale complex. The software can deal directly with noisy datasets via the concept of persistence (a measure of the robustness of topological features). Although developed for the study of the properties of filamentary structures in the cosmic web of the galaxy distribution over large scales in the Universe, the present version is quite versatile and should be useful for any application where a robust structure identification is required, such as for segmentation or for studying the topology of sampled functions (for example, computing persistent Betti numbers). Currently, it can work indifferently on many kinds of cell complexes (such as structured and unstructured grids, 2D manifolds embedded within a 3D space, discrete point samples using Delaunay tessellation, and HEALPix tessellations of the sphere). The only constraint is that the distribution must be defined over a manifold, possibly with boundaries.

[ascl:2202.020] distance-omnibus: Distance estimation method for molecular cloud clumps in the Milky Way

distance-omnibus computes posterior DPDFs for catalog sources using the Bayesian application of kinematic distance likelihoods derived from a Galactic rotation curve with prior Distance Probability Density Functions (DPDFs) derived from ancillary data. The methodology and code base are generalized for use with any (sub-)millimeter survey of the Galactic plane.

[ascl:2403.002] DistClassiPy: Distance-based light curve classification

DistClassiPy uses different distance metrics to classify objects such as light curves. It provides state-of-the-art performance for time-domain astronomy, and offers lower computational requirements and improved interpretability over traditional methods such as Random Forests, making it suitable for large datasets. DistClassiPy allows fine-tuning based on scientific objectives by selecting appropriate distance metrics and features, which enhances its performance and improves classification interpretability.

[ascl:1812.012] distlink: Minimum orbital intersection distance (MOID) computation library

distlink computes the minimum orbital intersection distance (MOID), or global minimum of the distance between the points lying on two Keplerian ellipses by finding all stationary points of the distance function, based on solving an algebraic polynomial equation of 16th degree. The program tracks numerical errors and carefully treats nearly degenerate cases, including practical cases with almost circular and almost coplanar orbits. Benchmarks confirm its high numeric reliability and accuracy, and even with its error-controlling overheads, this algorithm is a fast MOID computation method that may be useful in processing large catalogs. Written in C++, the library also includes auxiliary functions.

[ascl:1910.004] DM_phase: Algorithm for correcting dispersion of radio signals

DM_phase maximizes the coherent power of a radio signal instead of its intensity to calculate the best dispersion measure (DM) for a burst such as those emitted by pulsars and fast radio bursts (FRBs). It is robust to complex burst structures and interference, thus mitigating the limitations of traditional methods that search for the best DM value of a source by maximizing the signal-to-noise ratio (S/N) of the detected signal.
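
A simplified sketch of the statistic: for each trial DM, remove each channel's dispersive delay as a Fourier-domain phase ramp, sum the complex spectra over channels, and total the resulting power. This is illustrative only; DM_phase's actual implementation adds windowing and uncertainty estimation:

```python
import numpy as np

def coherent_power(waterfall, freqs_mhz, dt, dm):
    """Coherent power of a burst dedispersed at a trial DM (schematic).

    waterfall: (nchan, ntime) dynamic spectrum; freqs_mhz: channel centers.
    Each channel's cold-plasma delay (4.15e3 * DM / nu_MHz^2 seconds) is
    removed as a phase ramp in the Fourier domain before the complex
    spectra are summed over channels -- a simplified version of the
    statistic DM_phase maximizes.
    """
    nchan, ntime = waterfall.shape
    spec = np.fft.rfft(waterfall, axis=1)             # FFT each channel in time
    omega = 2.0 * np.pi * np.fft.rfftfreq(ntime, dt)  # fluctuation frequencies
    delays = 4.15e3 * dm / freqs_mhz**2               # dispersive delays (s)
    dedispersed = spec * np.exp(1j * omega[None, :] * delays[:, None])
    coherent = dedispersed.sum(axis=0)                # coherent channel sum
    return float((np.abs(coherent[1:]) ** 2).sum())   # skip the DC term

# Grid search over trial DMs (waterfall, freqs_mhz, dt from the observation):
# best_dm = max(np.linspace(0, 1000, 2001),
#               key=lambda dm: coherent_power(waterfall, freqs_mhz, dt, dm))
```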

[ascl:2106.030] DM_statistics: Statistics of the cosmological dispersion measure (DM)

DM_statistics calculates the free-electron power spectrum and the cosmological dispersion measure (DM) statistics (such as its mean and variance, angular power spectrum and correlation function). The default cosmological parameters are consistent with the Planck 2015 LambdaCDM model; the cosmological model can be easily changed by editing a few lines of the C code.
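
As a worked example of the mean statistic, <DM>(z) = c * integral of n_e(z')(1+z')/H(z') from 0 to z can be evaluated in a few lines; the numbers below are illustrative flat-LCDM values, not necessarily the code's exact defaults:

```python
import numpy as np

c_kms = 2.998e5                   # speed of light [km/s]
H0, Om = 67.7, 0.31               # Hubble constant [km/s/Mpc], matter density
Ob, h = 0.049, 0.677              # baryon density, dimensionless Hubble
chi_e, f_igm = 0.88, 0.84         # electrons per baryon; diffuse baryon fraction
# Comoving mean electron density [cm^-3]; 1.123e-5*h^2 cm^-3 is the baryon
# number density (rho_crit / m_p) for Omega_b = 1.
ne0 = chi_e * f_igm * Ob * 1.123e-5 * h**2

def mean_dm(z, nstep=4096):
    """Mean cosmological dispersion measure to redshift z in pc cm^-3."""
    zp = np.linspace(0.0, z, nstep)
    Hz = H0 * np.sqrt(Om * (1.0 + zp) ** 3 + (1.0 - Om))  # flat LCDM [km/s/Mpc]
    integrand = ne0 * (1.0 + zp) / Hz
    dm_mpc = c_kms * np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(zp))
    return dm_mpc * 1.0e6                                 # Mpc cm^-3 -> pc cm^-3

print(mean_dm(1.0))   # roughly 900 pc cm^-3 at z = 1 (Macquart relation)
```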

[ascl:1705.002] DMATIS: Dark Matter ATtenuation Importance Sampling

DMATIS (Dark Matter ATtenuation Importance Sampling) calculates the trajectories of DM particles that propagate in the Earth's crust and the lead shield to reach the DAMIC detector using an importance sampling Monte-Carlo simulation. A detailed Monte-Carlo simulation avoids the deficiencies of the SGED/KS method, which uses a mean energy loss description to calculate the lower bound on the DM-proton cross section. The code implementing the importance sampling technique makes the brute-force Monte-Carlo simulation of moderately strongly interacting DM with nucleons computationally feasible. DMATIS is written in Python 3 and Mathematica.

[ascl:1506.002] dmdd: Dark matter direct detection

The dmdd package enables simple simulation and Bayesian posterior analysis of recoil-event data from dark-matter direct-detection experiments under a wide variety of scattering theories. It enables calculation of the nuclear-recoil rates for a wide range of non-relativistic and relativistic scattering operators, including non-standard momentum-, velocity-, and spin-dependent rates. It also accounts for the correct nuclear response functions for each scattering operator and takes into account the natural abundances of isotopes for a variety of experimental target elements.

[ascl:2002.012] DMRadon: Radon Transform calculation tools

DMRadon calculates the Radon Transform for use in the analysis of directional dark matter direct detection. The code can calculate speed distributions, velocity distributions, the velocity integral (eta), and Radon Transforms for a standard Maxwell-Boltzmann distribution. DMRadon also calculates the velocity distribution averaged over different angular bins.

[ascl:1010.029] DNEST: Diffusive Nested Sampling

This code is a general Monte Carlo method based on Nested Sampling (NS) for sampling complex probability distributions and estimating the normalising constant. The method uses one or more particles, which explore a mixture of nested probability distributions, each successive distribution occupying ~e^-1 times the enclosed prior mass of the previous distribution. While NS technically requires independent generation of particles, Markov Chain Monte Carlo (MCMC) exploration fits naturally into this technique. This method can achieve four times the accuracy of classic MCMC-based Nested Sampling, for the same computational effort; equivalent to a factor of 16 speedup. An additional benefit is that more samples and a more accurate evidence value can be obtained simply by continuing the run for longer, as in standard MCMC.

[ascl:1604.007] DNest3: Diffusive Nested Sampling

DNest3 is a C++ implementation of Diffusive Nested Sampling (ascl:1010.029), a Markov Chain Monte Carlo (MCMC) algorithm for Bayesian Inference and Statistical Mechanics. Relative to older DNest versions, DNest3 has improved performance (in terms of the sampling overhead, likelihood evaluations still dominate in general) and is cleaner code: implementing new models should be easier than it was before. In addition, DNest3 is multi-threaded, so one can run multiple MCMC walkers at the same time, and the results will be combined together.

[ascl:2012.014] dolphin: Automated pipeline for lens modeling

Dolphin uniformly models large lens samples. It is a wrapper for Lenstronomy (ascl:1804.012), and features semi-automated modeling of a large sample of quasar and galaxy-galaxy lenses. Dolphin, written in Python, provides easy portability between local and MPI environments.

[ascl:1608.013] DOLPHOT: Stellar photometry

DOLPHOT is a stellar photometry package that was adapted from HSTphot for general use. It supports two modes; the first is a generic PSF-fitting package, which uses analytic PSF models and can be used for any camera. The second mode uses ACS PSFs and calibrations, and is effectively an ACS adaptation of HSTphot. A number of utility programs are also included with the DOLPHOT distribution, including basic image reduction routines.

[ascl:1709.004] DOOp: DAOSPEC Output Optimizer pipeline

The DAOSPEC Output Optimizer pipeline (DOOp) runs efficient and convenient equivalent widths measurements in batches of hundreds of spectra. It uses a series of BASH scripts to work as a wrapper for the FORTRAN code DAOSPEC (ascl:1011.002) and uses IRAF (ascl:9911.002) to automatically fix some of the parameters that are usually set by hand when using DAOSPEC. This allows batch-processing of quantities of spectra that would be impossible to deal with by hand. DOOp was originally built for the large quantity of UVES and GIRAFFE spectra produced by the Gaia-ESO Survey, but just like DAOSPEC, it can be used on any high resolution and high signal-to-noise ratio spectrum binned on a linear wavelength scale.

[ascl:2106.002] dopmap: Fast Doppler mapping program

dopmap constructs Doppler maps from the orbital variation of line profiles of (mass transferring) binaries. It uses an algorithm related to Richardson-Lucy iteration and includes an IDL-based set of routines for manipulating and plotting the input and output data.
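
For context, a generic Richardson-Lucy iteration for a linear projection operator looks as follows; dopmap's reconstruction is related to, but not identical to, this sketch:

```python
import numpy as np

def richardson_lucy(data, A, niter=50):
    """Generic Richardson-Lucy iteration for non-negative data.

    data: observed line profiles, flattened to shape (m,), all positive.
    A:    projection matrix (m, n) mapping a Doppler map to line profiles.
    Illustrates the family of algorithms dopmap's method is related to;
    it is not dopmap's own code.
    """
    f = np.full(A.shape[1], data.mean() / A.shape[1])   # flat positive start
    norm = A.sum(axis=0)                                # column sums of A
    for _ in range(niter):
        model = A @ f                                   # predicted profiles
        ratio = data / np.maximum(model, 1e-30)
        f *= (A.T @ ratio) / np.maximum(norm, 1e-30)    # multiplicative update
    return f
```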

[ascl:1206.011] Double Eclipsing Binary Fitting

The parameters of the mutual orbit of eclipsing binaries that are physically connected can be obtained by precision timing of minima over time through the light travel time effect, apsidal motion or orbital precession. This, however, requires joint analysis of data from different sources obtained through various techniques and with insufficiently quantified uncertainties. In particular, photometric uncertainties are often underestimated, which yields uncertainties in minima timings that are too small if determined through analysis of a χ2 surface. The task is even more difficult for double eclipsing binaries, especially those with periods close to a resonance such as CzeV344, where minima often get blended with each other.

This code solves the double binary parameters simultaneously and then uses these parameters to determine minima timings (or more specifically O-C values) for individual datasets. In both cases, the uncertainties (or more precisely confidence intervals) are determined through bootstrap resampling of the original data. This procedure to a large extent alleviates the common problem with underestimated photometric uncertainties and provides a check on possible degeneracies in the parameters and the stability of the results. While there are shortcomings to this method as well when compared to Markov Chain Monte Carlo methods, the ease of the implementation of bootstrapping is a significant advantage.

[ascl:2305.014] DP3: Streaming processing pipeline for radio interferometric data

DP3 (the Default Preprocessing Pipeline) is the LOFAR data pipeline processing program and is the successor to DPPP (ascl:1804.003). It performs many kinds of operations on the data in a pipelined way so the data are read and written only once. DP3 preprocesses the data of a LOFAR observation by executing steps such as flagging or averaging. Such steps can be used for the raw data as well as the calibrated data by defining the data column to use. One or more such steps can be combined into a pipeline; DP3 has an implicit input and output step, and it is also possible to have intermediate output steps. DP3 comes with predefined steps, but also allows the user to plug in arbitrary steps implemented in either C++ or Python.

[ascl:1504.012] DPI: Symplectic mapping for binary star systems for the Mercury software package

DPI is a FORTRAN77 library that supplies the symplectic mapping method for binary star systems for the Mercury N-Body software package (ascl:1201.008). The binary symplectic mapping is implemented as a hybrid symplectic method that allows close encounters and collisions between massive bodies and is therefore suitable for planetary accretion simulations.

[ascl:1804.003] DPPP: Default Pre-Processing Pipeline

DPPP (Default Pre-Processing Pipeline, also referred to as NDPPP) reads and writes radio-interferometric data in the form of Measurement Sets, mainly those that are created by the LOFAR telescope. It goes through visibilities in time order and contains standard operations like averaging, phase-shifting and flagging bad stations. Between the steps in a pipeline, the data is not written to disk, making this tool suitable for operations where I/O dominates. More advanced procedures such as gain calibration are also included. Other computing steps can be provided by loading a shared library; currently supported external steps are the AOFlagger (ascl:1010.017) and a bridge that enables loading python steps.

[ascl:1303.025] DPUSER: Interactive language for image analysis

DPUSER is an interactive language capable of handling numbers (both real and complex), strings, and matrices. Its main aim is to do astronomical image analysis, for which it provides a comprehensive set of functions, but it can also be used for many other applications.

[ascl:1712.005] draco: Analysis and simulation of drift scan radio data

draco analyzes transit radio data with the m-mode formalism. It is telescope agnostic, and is used as part of the analysis and simulation pipeline for the CHIME (Canadian Hydrogen Intensity Mapping Experiment) telescope. It can simulate time stream data from maps of the sky (using the m-mode formalism) and add gain fluctuations and correctly correlated instrumental noise (i.e. Wishart distributed). Further, it can perform various cuts on the data and make maps of the sky from data using the m-mode formalism.

[ascl:1512.009] DRACULA: Dimensionality Reduction And Clustering for Unsupervised Learning in Astronomy

DRACULA classifies objects using dimensionality reduction and clustering. The code has an easy interface and can be applied to separate several types of objects. It is based on tools developed in scikit-learn, with some usage requiring also the H2O package.

[ascl:1011.009] DRAGON: DRoplet and hAdron GeneratOr for Nuclear collisions

A Monte Carlo generator of the final state of hadrons emitted from an ultrarelativistic nuclear collision is introduced. An important feature of the generator is a possible fragmentation of the fireball and emission of the hadrons from fragments. Phase space distribution of the fragments is based on the blast wave model extended to azimuthally non-symmetric fireballs. Parameters of the model can be tuned, and this allows the generation of final states from various kinds of fireballs. An optional output in the OSCAR1999A format allows for a comprehensive analysis of phase-space distributions and/or use as an input for an afterburner. DRAGON's purpose is to produce artificial data sets which resemble those coming from real nuclear collisions, provided fragmentation occurs at hadronisation and hadrons are emitted from fragments without any further scattering. Its name, DRAGON, stands for DRoplet and hAdron GeneratOr for Nuclear collisions. In a way, the model is similar to THERMINATOR, with the crucial difference that emission from fragments is included.

[ascl:1106.011] DRAGON: Galactic Cosmic Ray Diffusion Code

DRAGON adopts a second-order Crank-Nicolson scheme with operator splitting and time overrelaxation to solve the diffusion equation. This provides a fast solution that is accurate enough for the average user. Occasionally, users may want to have very accurate solutions to their problem. To enable this feature, users may get close to the accurate solution by using the fast method, and then switch to a more accurate solution scheme featuring the Alternating-Direction-Implicit (ADI) Crank-Nicolson scheme.

[ascl:1811.002] DRAGONS: Gemini Observatory data reduction platform

DRAGONS (Data Reduction for Astronomy from Gemini Observatory North and South) is Gemini's Python-based data reduction platform. DRAGONS offers an automation system that allows for hands-off pipeline reduction of Gemini data, or of any other astronomical data once configured. The platform also allows researchers to control input parameters and in some cases will offer to interactively optimize some data reduction steps, e.g. change the order of fit and visualize the new solution.

[ascl:2012.024] DRAGraces: Reduction pipeline for GRACES spectra

DRAGraces (Data Reduction and Analysis for GRACES) reduces GRACES spectra taken with the Gemini North high-resolution spectrograph. It finds GRACES frames in a given directory, determines the list of bias, flat, arc and science frames, and performs the reduction and extraction. Written in IDL, DRAGraces is straightforward and easy to use.

[ascl:2103.023] DRAKE: Relic density in concrete models prediction

DRAKE (Dark matter Relic Abundance beyond Kinetic Equilibrium) predicts the dark matter relic abundance in situations where the standard assumption of kinetic equilibrium during the freeze-out process may not be satisfied. The code comes with a set of three dedicated Boltzmann equation solvers that implement, respectively, the traditionally adopted equation for the dark matter number density, fluid-like equations that couple the evolution of number density and velocity dispersion, and a full numerical evolution of the phase-space distribution.
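
The first of the three treatments corresponds to the textbook number-density equation. A schematic, self-contained integration of dY/dx = -(lam/x^2)(Y^2 - Yeq^2), with x = m/T and the cosmological factors bundled into the constant lam, is shown below; this is purely illustrative, not DRAKE's implementation:

```python
import numpy as np
from scipy.integrate import solve_ivp

def relic_yield(lam, x_end=1000.0):
    """Solve the standard freeze-out equation for the comoving yield Y(x).

    lam absorbs the annihilation cross section together with entropy and
    Hubble factors; this is the 'traditional' number-density treatment
    (the first of DRAKE's three solvers) in schematic form.
    """
    def yeq(x):
        # Nonrelativistic Maxwell-Boltzmann equilibrium yield (g/g_*s ~ 1).
        return 0.145 * x**1.5 * np.exp(-x)

    def rhs(x, y):
        return [-lam / x**2 * (y[0] ** 2 - yeq(x) ** 2)]

    sol = solve_ivp(rhs, (1.0, x_end), [yeq(1.0)], method="LSODA",
                    rtol=1e-8, atol=1e-30)
    return sol.y[0, -1]       # asymptotic yield Y_infinity

print(relic_yield(lam=1.0e7))  # stronger annihilation (larger lam) -> smaller Y
```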

[ascl:1507.012] DRAMA: Instrumentation software environment

DRAMA is a fast, distributed environment for writing instrumentation control systems. It allows low level instrumentation software to be controlled from user interfaces running on UNIX, MS Windows or VMS machines in a consistent manner. Such instrumentation tasks can run either on these machines or on real time systems such as VxWorks. DRAMA uses techniques developed by the AAO while using the Starlink-ADAM environment, but is optimized for the requirements of instrumentation control, portability, embedded systems and speed. A special program is provided which allows seamless communication between ADAM and DRAMA tasks.

[ascl:2308.013] Driftscan: Drift scan telescope analysis

Driftscan simulates and analyzes transit radio interferometers, with a particular focus on 21cm cosmology. Given a design of a telescope, it generates a set of products used to analyze data from it and simulate timestreams. Driftscan also constructs a filter to extract cosmological 21 cm emission from astrophysical foregrounds, such as our galaxy and radio point sources, and estimates the 21cm power spectrum using an optimal quadratic estimator.

[ascl:1504.006] drive-casa: Python interface for CASA scripting

drive-casa provides a Python interface for scripting of CASA (ascl:1107.013) subroutines from a separate Python process, allowing for utilization alongside other Python packages which may not easily be installed into the CASA environment. This is particularly useful for embedding use of CASA subroutines within a larger pipeline. drive-casa runs plain-text casapy scripts directly; alternatively, the package includes a set of convenience routines which try to adhere to a consistent style and make it easy to chain together successive CASA reduction commands to generate a command-script programmatically.

[ascl:1212.011] DrizzlePac: HST image software

DrizzlePac allows users to easily and accurately align and combine HST images taken at multiple epochs, and even with different instruments. It is a suite of supporting tasks for AstroDrizzle which includes:

- astrodrizzle, to align and combine images
- tweakreg and tweakback, to align images from different visits
- pixtopix, to transform an X,Y pixel position to its pixel position after distortion corrections
- skytopix, to transform sky coordinates to X,Y pixel positions; the reverse transformation can be done using the task pixtosky

[ascl:1610.003] DSDEPROJ: Direct Spectral Deprojection

Deprojection of X-ray data by methods such as PROJCT, which are model dependent, can produce large and unphysical oscillating temperature profiles. Direct Spectral Deprojection (DSDEPROJ) solves some of the issues inherent to model-dependent deprojection routines. DSDEPROJ is a model-independent approach, assuming only spherical symmetry, which subtracts projected spectra from each successive annulus to produce a set of deprojected spectra.

[ascl:2204.006] dsigma: Galaxy-galaxy lensing Python package

dsigma analyzes galaxy-galaxy lensing. Written in Python, it has a broadly applicable API and is optimized for computational efficiency. While originally intended to be used with the shape catalog of the Hyper-Suprime Cam (HSC) survey, it should work for other surveys, most prominently the Dark Energy Survey (DES) and the Kilo-Degree Survey (KiDS).

[ascl:2302.024] DSPS: Differentiable Stellar Population Synthesis

DSPS synthesizes stellar populations, leading to fully-differentiable predictions for galaxy photometry and spectroscopy. The code implements an empirical model for stellar metallicity, and it also supports the Diffstar (ascl:2302.012) model of star formation and dark matter halo history. DSPS rapidly generates and simulates galaxy-halo histories on both CPU and GPU hardware.

[ascl:1010.006] DSPSR: Digital Signal Processing Software for Pulsar Astronomy

DSPSR, written primarily in C++, is an open-source, object-oriented, digital signal processing software library and application suite for use in radio pulsar astronomy. The library implements an extensive range of modular algorithms for use in coherent dedispersion, filterbank formation, pulse folding, and other tasks. The software is installed and compiled using the standard GNU configure and make system, and is able to read astronomical data in 18 different file formats, including FITS, S2, CPSR, CPSR2, PuMa, PuMa2, WAPP, ASP, and Mark5.

[ascl:1501.004] dst: Polarimeter data destriper

Dst is a fully parallel Python destriping code for polarimeter data; destriping is a well-established technique for removing low-frequency correlated noise from Cosmic Microwave Background (CMB) survey data. The software destripes correctly formatted HDF5 datasets and outputs hitmaps, binned maps, destriped maps and baseline arrays.

[ascl:1505.034] dStar: Neutron star thermal evolution code

dStar is a collection of modules for computing neutron star structure and evolution, and uses the numerical, utility, and equation of state libraries of MESA (ascl:1010.083).

[ascl:2008.023] DUCC: Distinctly Useful Code Collection

DUCC (Distinctly Useful Code Collection) provides basic programming tools for numerical computation, including Fast Fourier Transforms, Spherical Harmonic Transforms, non-equispaced Fourier transforms, as well as some concrete applications like 4pi convolution on the sphere and gridding/degridding of radio interferometry data. The code is written in C++17 and provides a simple and comprehensive Python interface.

[ascl:1201.011] Duchamp: A 3D source finder for spectral-line data

Duchamp is software designed to find and describe sources in 3-dimensional, spectral-line data cubes. Duchamp has been developed with HI (neutral hydrogen) observations in mind, but is widely applicable to many types of astronomical images. It features efficient source detection and handling methods, noise suppression via smoothing or multi-resolution wavelet reconstruction, and a range of graphical and text-based outputs to allow the user to understand the detections.

[ascl:1605.014] DUO: Spectra of diatomic molecules

Duo computes rotational, rovibrational and rovibronic spectra of diatomic molecules. The software, written in Fortran 2003, solves the Schrödinger equation for the motion of the nuclei for the simple case of uncoupled, isolated electronic states and also for the general case of an arbitrary number and type of couplings between electronic states. Possible couplings include spin–orbit, angular momenta, spin-rotational and spin–spin. Introducing the relevant couplings using so-called Born–Oppenheimer breakdown curves can correct non-adiabatic effects.

[ascl:1503.005] dust: Dust scattering and extinction in the X-ray

Written in Python, dust calculates X-ray dust scattering and extinction in the intergalactic and local interstellar media.

[ascl:1908.016] DustCharge: Charge distribution for a dust grain

DustCharge calculates the equilibrium charge distribution for a dust grain of a given size and composition, depending on the local interstellar medium conditions, such as density, temperature, ionization fraction, local radiation field strength, and cosmic ray ionization fraction.

[ascl:1307.001] DustEM: Dust extinction and emission modelling

DustEM computes the extinction and the emission of interstellar dust grains heated by photons. It is written in Fortran 95 and is jointly developed by IAS and CESR. The dust emission is calculated in the optically thin limit (no radiative transfer) and the default spectral range is 40 to 10^8 nm. The code is designed so dust properties can easily be changed and mixed and to allow for the inclusion of new grain physics.

[ascl:2206.027] DustFilaments: Paint filaments to produce a thermal dust full sky map at mm frequencies

DustFilaments paints filaments on the celestial sphere to generate a full-sky map of thermal dust emission at millimeter frequencies by integrating a population of 3D filaments. The code requires a magnetic field cube, which can be calculated separately or by DustFilaments. With the magnetic field cube as input, the package creates a random filament population with a given seed, then paints each filament into a HEALPix map provided as input; the map is updated in place.

[ascl:2207.016] DustPy: Simulation of dust evolution in protoplanetary disks

DustPy simulates the radial evolution of gas and dust in protoplanetary disks, involving viscous evolution of the gas disk and advection and diffusion of the dust disk, as well as dust growth by solving the Smoluchowski equation. The package provides a standard simulation and the ability to plot results, and also allows modification of the initial conditions for dust, gas, the grid, and the central star.

[ascl:2310.005] DustPyLib: A library of DustPy extensions

The DustPyLib library contains auxiliary modules for the dust evolution software DustPy (ascl:2207.016), which simulates the evolution of dust and gas in protoplanetary disks. DustPyLib includes interfaces to radiative transfer codes and modules with extensions to the DustPy defaults.

[ascl:9911.001] DUSTY: Radiation transport in a dusty environment

DUSTY solves the problem of radiation transport in a dusty environment. The code can handle both spherical and planar geometries. The user specifies the properties of the radiation source and dusty region, and the code calculates the dust temperature distribution and the radiation field in it. The solution method is based on a self-consistent equation for the radiative energy density, including dust scattering, absorption and emission, and does not introduce any approximations. The solution is exact to within the specified numerical accuracy. DUSTY has built-in optical properties for the most common types of astronomical dust and comes with a library for many other grains. It supports various analytical forms for the density distribution, and can perform a full dynamical calculation for radiatively driven winds around AGB stars. The spectral energy distribution of the source can be specified analytically as either Planckian or broken power-law. In addition, arbitrary dust optical properties, density distributions and external radiation can be entered in user supplied files. Furthermore, the wavelength grid can be modified to accommodate spectral features. A single DUSTY run can process an unlimited number of models, with each input set producing a run of optical depths, as specified. The user controls the detail level of the output, which can include both spectral and imaging properties as well as other quantities of interest.

[ascl:1602.004] DUSTYWAVE: Linear waves in gas and dust

Written in Fortran, DUSTYWAVE computes the exact solution for linear waves in a two-fluid mixture of gas and dust. The solutions are general with respect to both the dust-to-gas ratio and the amplitude of the drag coefficient.

[ascl:2109.004] DviSukta: Spherically Averaged Bispectrum calculator

DviSukta calculates the Spherically Averaged Bispectrum (SABS). The code is based on an optimized direct estimation method, is written in C, and is parallelized. DviSukta starts by reading the real space gridded data and performing a 3D Fourier transform of it. Alternatively, it starts by reading the data already in Fourier space. The grid spacing, number of k1 bins, number of n bins, and number of cos(theta) bins need to be specified in the input file.

[ascl:2011.007] DYNAMITE: DYnamics, Age and Metallicity Indicators Tracing Evolution

DYNAMITE (DYnamics, Age and Metallicity Indicators Tracing Evolution) is a triaxial dynamical modeling code for stellar systems and is based on existing codes for Schwarzschild modeling in triaxial systems. DYNAMITE provides an easy-to-use object oriented Python wrapper that extends the scope of pre-existing triaxial Schwarzschild codes with a number of new features, including discrete kinematics, more flexible descriptions of line-of-sight velocity distributions, and modeling of stellar population information. It also offers more efficient steps through parameter space, and can use GPU acceleration.

[ascl:1809.013] dynesty: Dynamic Nested Sampling package

dynesty is a Dynamic Nested Sampling package for estimating Bayesian posteriors and evidences. dynesty samples from a given distribution when provided with a loglikelihood function, a prior_transform function (that transforms samples from the unit cube to the target prior), and the dimensionality of the parameter space.
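
A minimal usage sketch of the inputs just described, assuming a recent dynesty release; the Gaussian likelihood and flat prior are illustrative, not from the source:

```python
# Hedged dynesty sketch: sample a 3-D Gaussian with a flat prior on [-10, 10].
import numpy as np
from dynesty import NestedSampler

ndim = 3

def loglike(theta):
    # illustrative log-likelihood: independent standard normals
    return -0.5 * np.sum(theta**2) - 0.5 * ndim * np.log(2.0 * np.pi)

def prior_transform(u):
    # map the unit cube onto the flat prior [-10, 10] in each dimension
    return 20.0 * u - 10.0

sampler = NestedSampler(loglike, prior_transform, ndim)
sampler.run_nested()
results = sampler.results  # samples, weights, and log-evidence estimates
```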

[ascl:1902.010] dyPolyChord: Super fast dynamic nested sampling with PolyChord

dyPolyChord implements dynamic nested sampling using the efficient PolyChord (ascl:1502.011) sampler to provide state-of-the-art nested sampling performance. Any likelihoods and priors which work with PolyChord can be used (Python, C++ or Fortran), and the output files produced are in the PolyChord format.

[ascl:1407.017] e-MERLIN data reduction pipeline

Written in Python and utilizing ParselTongue (ascl:1208.020) to interface with AIPS (ascl:9911.003), the e-MERLIN data reduction pipeline processes, calibrates and images data from the UK's radio interferometric array (Multi-Element Radio Linked Interferometer Network). Driven by a plain text input file, the pipeline is modular and can be run in stages. The software includes options to load raw data, average in time and/or frequency, flag known sources of interference, flag more comprehensively with SERPent (ascl:1312.001), carry out some or all of the calibration procedures (including self-calibration), and image in either normal or wide-field mode. It also optionally produces a number of useful diagnostic plots at various stages so data quality can be assessed.

[ascl:1910.013] E0102-VR: Virtual Reality application to visualize the optical ejecta in SNR 1E 0102.2-7219

E0102-VR facilitates the characterization of the 3D structure of the oxygen-rich optical ejecta in the young supernova remnant 1E 0102.2-7219 in the Small Magellanic Cloud. This room-scale Virtual Reality application written for the HTC Vive contributes to the exploration of the scientific potential of this technology for the field of observational astrophysics.

[ascl:1106.004] E3D: The Euro3D Visualization Tool

E3D is a package of tools for the analysis and visualization of IFS data. It is capable of reading, writing, and visualizing reduced data from 3D spectrographs of any kind.

[ascl:2307.043] EAGLES: Estimating AGes from Lithium Equivalent widthS

EAGLES (Estimating AGes from Lithium Equivalent widthS) implements an empirical model that predicts the lithium equivalent width (EW) of a star as a function of its age and effective temperature. The code computes the age probability distribution for a star with a given EW and Teff, subject to an age probability prior that may be flat in age or flat in log age. Data for more than one star can be entered; EAGLES then treats these as a cluster and determines the age probability distribution for the ensemble. The code produces estimates of the most probable age, uncertainties, and the median age, and writes output files consisting of probability plots, best-fit isochrone plots, and tables of the posterior age probability distribution(s).

[ascl:1805.004] EARL: Exoplanet Analytic Reflected Lightcurves package

EARL (Exoplanet Analytic Reflected Lightcurves) computes the analytic form of a reflected lightcurve, given a spherical harmonic decomposition of the planet albedo map and the viewing and orbital geometries. The EARL Mathematica notebook allows rapid computation of reflected lightcurves, thus making lightcurve numerical experiments accessible.

[ascl:2205.007] EarthScatterLikelihood: Event rates and likelihoods for Dark Matter direct detection in the presence of Earth-Scattering

EarthScatterLikelihood calculates event rates and likelihoods for Earth-scattering Dark Matter. It is written in Fortran with plotting routines in Python. For input, it uses results from Monte Carlo simulations generated by DaMaSCUS (ascl:1706.003). It includes routines for submitting many reconstructions in parallel on a cluster, and the properties of the detector, such as for a Germanium and a Sapphire detector, can be edited.

[ascl:1611.012] EarthShadow: Calculator for dark matter particle velocity distribution after Earth-scattering

EarthShadow calculates the impact of Earth-scattering on the distribution of Dark Matter (DM) particles. The code calculates the speed and velocity distributions of DM at various positions on the Earth and also helps with the calculation of the average scattering probabilities. Tabulated data for DM-nuclear scattering cross sections and various numerical results, plots and animations are also included in the code package.

[ascl:1612.010] Earthshine simulator: Idealized images of the Moon

Terrestrial albedo can be determined from observations of the relative intensity of earthshine. Images of the Moon at different lunar phases can be analyzed to derive the semi-hemispheric mean albedo of the Earth, and an important tool for doing this is simulating the appearance of the Moon at any given time. This software produces idealized images of the Moon for arbitrary times. It takes into account the libration of the Moon and the distances between Sun, Moon and the Earth, as well as the relevant geometry. The images of the Moon are produced as FITS files. User input includes setting the Julian Day of the simulation. Defaults for image size and field of view are set to produce approximately 1x1 degree images with the Moon in the middle from an observatory on Earth, currently set to Mauna Loa.

[ascl:1812.008] easyaccess: SQL command line interpreter for astronomical surveys

easyaccess facilitates access to astronomical catalogs stored in SQL databases. An enhanced command line interpreter, it provides a custom interface with custom commands and was specifically designed to access data from the Dark Energy Survey Oracle database. Its features include autocompletion of tables, columns, users, and commands; simple ways to upload and download tables using csv, FITS, and HDF5 formats; iterators; and search and description of tables, among others. It can easily be extended to other surveys or SQL databases. The package is written in Python and supports customized addition of commands and functionalities.

[ascl:2203.015] easyFermi: Fermi-LAT data analyzer

easyFermi provides a user-friendly graphical interface for basic to intermediate analysis of Fermi-LAT data in the framework of Fermipy (ascl:1812.006). The code can measure the gamma-ray flux and photon index, build spectral energy distributions, light curves, test statistic maps, test for extended emission, and relocalize the coordinates of gamma-ray sources. Tutorials for easyFermi are available on YouTube and GitHub.

[ascl:1011.013] EasyLTB: Code for Testing LTB Models against Cosmology

The possibility that we live in a special place in the universe, close to the centre of a large void, seems an appealing alternative to the prevailing interpretation of the acceleration of the universe in terms of a LCDM model with a dominant dark energy component. In this paper we confront the asymptotically flat Lemaitre-Tolman-Bondi (LTB) models with a series of observations, from Type Ia Supernovae to Cosmic Microwave Background and Baryon Acoustic Oscillations data. We propose two concrete LTB models describing a local void in which the only arbitrary functions are the radial dependence of the matter density Omega_M and the Hubble expansion rate H. We find that all observations can be accommodated within 1 sigma, for our models with 4 or 5 independent parameters. The best fit models have a chi^2 very close to that of the LCDM model. We perform a simple Bayesian analysis and show that one cannot exclude the hypothesis that we live within a large local void of an otherwise Einstein-de Sitter model.

[ascl:1010.052] EAZY: A Fast, Public Photometric Redshift Code

EAZY, Easy and Accurate Zphot from Yale, determines photometric redshifts. The program is optimized for cases where spectroscopic redshifts are not available, or only available for a biased subset of the galaxies. The code combines features from various existing codes: it can fit linear combinations of templates, it includes optional flux- and redshift-based priors, and its user interface is modeled on the popular HYPERZ (ascl:1108.010) code. The default template set, as well as the default functional forms of the priors, are not based on (usually highly biased) spectroscopic samples, but on semi-analytical models. Furthermore, template mismatch is addressed by a novel rest-frame template error function. This function gives different wavelength regions different weights, and ensures that the formal redshift uncertainties are realistic. A redshift quality parameter, Q_z, provides a robust estimate of the reliability of the photometric redshift estimate.

[ascl:1908.018] EBAI: Eclipsing Binaries with Artificial Intelligence

Eclipsing Binaries via Artificial Intelligence (EBAI) automates the process of solving light curves of eclipsing binary stars. EBAI is based on the back-propagating neural network paradigm and is highly flexible in construction of neural networks. EBAI comes in two flavors, serial (ebai) and multi-processor (ebai.mpi), and can be run in training, continued training, and recognition mode.

[ascl:1909.007] EBHLIGHT: General relativistic radiation magnetohydrodynamics with Monte Carlo transport

EBHLIGHT (also referred to as BHLIGHT) solves the equations of general relativistic radiation magnetohydrodynamics in stationary spacetimes. Fluid integration is performed with the second order shock-capturing scheme HARM (ascl:1209.005) and frequency-dependent radiation transport is performed with the second order Monte Carlo code grmonty (ascl:1306.002). Fluid and radiation exchange four-momentum in an explicit first-order operator-split fashion.

[ascl:1203.007] EBTEL: Enthalpy-Based Thermal Evolution of Loops

Observational and theoretical evidence suggests that coronal heating is impulsive and occurs on very small cross-field spatial scales. A single coronal loop could contain a hundred or more individual strands that are heated quasi-independently by nanoflares. It is therefore an enormous undertaking to model an entire active region or the global corona. Three-dimensional MHD codes have inadequate spatial resolution, and 1D hydro codes are too slow to simulate the many thousands of elemental strands that must be treated in a reasonable representation. Fortunately, thermal conduction and flows tend to smooth out plasma gradients along the magnetic field, so "0D models" are an acceptable alternative. We have developed a highly efficient model called Enthalpy-Based Thermal Evolution of Loops (EBTEL) that accurately describes the evolution of the average temperature, pressure, and density along a coronal strand. It improves significantly upon earlier models of this type--in accuracy, flexibility, and capability. It treats both slowly varying and highly impulsive coronal heating; it provides the differential emission measure distribution, DEM(T), at the transition region footpoints; and there are options for heat flux saturation and nonthermal electron beam heating. EBTEL gives excellent agreement with far more sophisticated 1D hydro simulations despite using four orders of magnitude less computing time. It promises to be a powerful new tool for solar and stellar studies.

[ascl:2404.015] EBWeyl: Compute the electric and magnetic parts of the Weyl tensor

EBWeyl computes the electric and magnetic parts of the Weyl tensor, Eαβ and Bαβ, using a 3+1 slicing formulation. The module provides a Finite Differencing class with 4th (default) and 6th order backward, centered, and forward schemes. Periodic boundary conditions are used by default; otherwise, a combination of the 3 schemes is available. It also includes a Weyl class that computes for a given metric the variables of the 3+1 formalism, the spatial Christoffel symbols, spatial Ricci tensor, electric and magnetic parts of the Weyl tensor projected along the normal to the hypersurface and fluid flow, the Weyl scalars and invariant scalars. EBWeyl can also compute the determinant and inverse of a 3x3 or 4x4 matrix at every position of a data box.

[ascl:1411.017] ECCSAMPLES: Bayesian Priors for Orbital Eccentricity

ECCSAMPLES solves the inverse cumulative density function (CDF) of a Beta distribution, sometimes called the IDF or inverse transform sampling. This allows one to sample from the relevant priors directly. ECCSAMPLES actually provides joint samples for both the eccentricity and the argument of periastron, since for transiting systems they display non-zero covariance.
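
As a generic illustration of inverse transform sampling from a Beta prior (this is not the ECCSAMPLES implementation, and the shape parameters are placeholders):

```python
# Generic inverse-transform sampling from a Beta eccentricity prior;
# illustration only, not the ECCSAMPLES code. Shape parameters are placeholders.
import numpy as np
from scipy.stats import beta

a, b = 0.867, 3.03                     # placeholder Beta shape parameters
u = np.random.uniform(size=100_000)    # uniform draws on [0, 1)
ecc = beta.ppf(u, a, b)                # the inverse CDF maps them to Beta samples
```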

[ascl:2207.005] echelle: Dynamic echelle diagrams for asteroseismology

Echelle diagrams are used mainly in asteroseismology, where they function as a diagnostic tool for estimating Δν, the separation between modes of the same degree ℓ; the amplitude spectrum of a star is stacked in equal slices of Δν, the large separation. The echelle Python code creates and manipulates echelle diagrams. The code provides the ability to dynamically change Δν for rapid identification of the correct value. echelle features performance optimized dynamic echelle diagrams and multiple backends for supporting Jupyter or terminal usage.
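
The stacking operation itself is simple to sketch in NumPy (a conceptual illustration, not the echelle package API):

```python
# Conceptual echelle diagram: resample the amplitude spectrum into rows of
# width dnu so modes of the same degree line up vertically.
import numpy as np

def echelle_diagram(freq, power, dnu, n_bins=500):
    fmin = freq.min()
    n_orders = int((freq.max() - fmin) // dnu)
    offsets = np.linspace(0.0, dnu, n_bins, endpoint=False)
    img = np.empty((n_orders, n_bins))
    for i in range(n_orders):
        # interpolate each dnu-wide slice onto a common "frequency mod dnu" axis
        img[i] = np.interp(fmin + i * dnu + offsets, freq, power)
    return offsets, img  # columns: frequency mod dnu; rows: consecutive orders
```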

[ascl:1810.006] Echelle++: Generic spectrum simulator

Echelle++ simulates realistic raw spectra based on the Zemax model of any spectrograph, with a particular emphasis on cross-dispersed Echelle spectrographs. The code generates realistic spectra of astronomical and calibration sources, with accurate representation of optical aberrations, the shape of the point spread function, detector characteristics, and photon noise. It produces high-fidelity spectra fast, an important feature when testing data reduction pipelines with a large set of different input spectra, when making critical choices about order spacing in the design phase of the instrument, or while aligning the spectrograph during construction. Echelle++ also works with low resolution, low signal to noise, multi-object, IFU, or long slit spectra, for simulating a wide array of spectrographs.

[ascl:1405.018] ECHOMOP: Echelle data reduction package

ECHOMOP extracts spectra from 2-D data frames. These data can be single-order spectra or multi-order echelle spectra. A substantial degree of automation is provided, particularly in the traditionally manual functions for cosmic-ray detection and wavelength calibration; manual overrides are available. Features include robust and flexible order tracing, optimal extraction, support for variance arrays, and 2-D distortion fitting and extraction. ECHOMOP is distributed as part of the Starlink software collection (ascl:1110.012).

[ascl:2008.020] Eclaire: CUDA-based Library for Astronomical Image REduction

Eclaire is a GPU-accelerated image-reduction pipeline; it uses CuPy, a Python package for general-purpose computing on graphics processing units (GPGPU), to perform image processing, including bias subtraction, dark subtraction, flat fielding, bad pixel masking, alignment, and co-adding. It has been used for real-time image reduction of MITSuME observational data, and can be used with data from other observatories.

[ascl:1810.011] Eclairs: Efficient Codes for the LArge scales of the unIveRSe

Eclairs calculates the matter power spectrum based on standard perturbation theory and regularized perturbation theory. The code is written in C++ with a Python wrapper designed to be easily combined with MCMC samplers.

[ascl:1910.008] ECLIPS3D: Linear wave and circulation calculations

ECLIPS3D (Eigenvectors, Circulation, and Linear Instabilities for Planetary Science in 3 Dimensions) calculates a posteriori energy equations for the study of linear processes in planetary atmospheres with an arbitrary steady state, and provides both increased robustness and physical meaning to the obtained eigenmodes. It was developed originally for planetary atmospheres and includes python scripts for data analysis. ECLIPS3D can be used to study the initial spin up of superrotation of GCM simulations of hot Jupiters in addition to being applied to other problems.

[ascl:2306.031] ECLIPSE: Efficient Cmb poLarization and Intensity Power Spectra Estimator

ECLIPSE (Efficient Cmb poLarization and Intensity Power Spectra Estimator) implements an optimized version of the Quadratic Maximum Likelihood (QML) method for the estimation of the power spectra of the Cosmic Microwave Background (CMB) from masked skies. Written in Fortran, ECLIPSE can be used on a personal computer but also benefits from the capabilities of a supercomputer to tackle large-scale problems; it is designed to run in parallel on many MPI tasks. ECLIPSE analyzes masked CMB maps in which the signal can be affected by the beam and pixel window functions. The masks of intensity and polarization can be different and the noise can be isotropic or anisotropic. The program can estimate auto- and cross-correlation power spectra, which can be binned or unbinned.

[ascl:1112.001] Eclipse: ESO C Library for an Image Processing Software Environment

Written in ANSI C, eclipse is a library offering numerous services related to astronomical image processing: FITS data access, various image and cube loading methods, binary image handling and filtering (including convolution and morphological filters), 2-D cross-correlation, connected components, cube and image arithmetic, dead pixel detection and correction, object detection, data extraction, flat-fielding with robust fit, image generation, statistics, photometry, image-space resampling, image combination, and cube stacking. It also contains support for mathematical tools like random number generation, FFT, curve fitting, matrices, fast median computation, and point-pattern matching. The main feature of this library is its ability to handle large amounts of input data (up to 2GB in the current version) regardless of the amount of memory and swap available on the local machine. Another feature is the very high speed allowed by optimized C, making it an ideal base tool for programming efficient number-crunching applications, e.g., on parallel (Beowulf) systems.

[ascl:2402.007] ECLIPSR: Automatically find individual eclipses in light curves, determine ephemerides, and more

ECLIPSR fully and automatically analyzes space-based light curves to find eclipsing binaries and provide some first-order measurements, such as the binary star period and eclipse depths. It provides a recipe to find individual eclipses using the time derivatives of the light curves, including eclipses in light curves of stars where the dominating variability is, for example, pulsations. Since the algorithm detects each eclipse individually, even light curves containing only one eclipse can (in principle) be successfully analyzed and classified. ECLIPSR can find eclipsing binaries among both pulsating and non-pulsating stars in a homogeneous and quick manner and process large amounts of light curves in reasonable amounts of time. The output includes, among other things, the individual eclipse markers, the period and time of first (primary) eclipse, and a score between 0 and 1 indicating the likelihood that the analyzed light curve is that of an eclipsing binary.

[ascl:1901.010] eddy: Extracting Disk DYnamics

The Python suite eddy recovers precise rotation profiles of protoplanetary disks from Doppler-shifted line emission, providing an easy way to fit first-moment maps and to infer a rotation velocity from an annulus of spectra.

[ascl:2202.009] EDIV: Exoplanet Detection Identifier Vetter

EDI (Exoplanet Detection Identifier) Vetter identifies false positive transit signals in the K2 data set. It combines the functionalities of Terra (ascl:2202.008) and RoboVetter (ascl:2012.006) and is optimized to test single transiting planet signals. EDI-Vetter Unplugged (ascl:2202.010), an easily implemented suite of vetting metrics built to run alongside TLS, is also available.

[ascl:2202.010] EDIVU: Exoplanet Detection Identifier Vetter Unplugged

The EDI (Exoplanet Detection Identifier) Vetter Unplugged software identifies false positive transit signals using Transit Least Squares (TLS) information and has been simplified from the full EDI-Vetter algorithm (ascl:2202.009) for easy implementation with the TLS output.

[ascl:1512.003] EDRS: Electronography Data Reduction System

The Electronography Data Reduction System (EDRS) reduces and analyzes large format astronomical images and was written to be used from within ASPIC (ascl:1510.006). In its original form it specialized in the reduction of electronographic data but was built around a set of utility programs which were widely applicable to astronomical images from other sources. The programs align and calibrate images, handle lists of (X,Y) positions, apply linear geometrical transformations and do some stellar photometry. This package is now obsolete.

[ascl:1512.004] EDRSX: Extensions to the EDRS package

EDRSX extends the Electronography Data Reduction System (EDRS, ascl:1512.003), making possible more versatile analysis of IRAS images than was otherwise available. EDRSX provides facilities for converting images into and out of EDRS format, accesses RA and Dec information stored with IRAS images, and performs several standard image processing operations, such as displaying image histograms and statistics and computing Fourier transforms. This enables operations such as estimation and subtraction of non-linear backgrounds, de-striping of IRAS images, modelling of image features, and easy alignment of separate images, among others.

[ascl:2405.014] EF-TIGRE: Effective Field Theory of Interacting dark energy with Gravitational REdshift

EF-TIGRE (Effective Field Theory of Interacting dark energy with Gravitational REdshift) constrains interacting Dark Energy/Dark Matter models in the Effective Field Theory framework through Large Scale Structures observables. In particular, the observables include the effect of gravitational redshift, a distortion of time that can be measured in galaxy clustering. This generates a dipole in the correlation function which is detectable with two distinct populations of galaxies, thus making it possible to break degeneracies among the parameters of the EFT description.

[ascl:2404.012] EffectiveHalos: Matter power spectrum and cluster counts covariance modeler

EffectiveHalos provides models of the real-space matter power spectrum, based on a combination of the Halo Model and Effective Field Theory, which are 1% accurate up to k = 1 h/Mpc, across a range of cosmologies, including those with massive neutrinos. It can additionally compute accurate halo count covariances (including a model of halo exclusion), both alone and in combination with the matter power spectrum.

[ascl:2307.041] EFTCAMB: Effective Field Theory with CAMB

EFTCAMB patches the public Einstein-Boltzmann solver CAMB (ascl:1102.026) to implement the Effective Field Theory approach to cosmic acceleration. It can be used to investigate the effect of different EFT operators on linear perturbations and to study perturbations in any specific DE/MG model that can be cast into EFT framework. To interface EFTCAMB with cosmological data sets, it is equipped with a modified version of CosmoMC (ascl:1106.025), EFTCosmoMC, to create a bridge between the EFT parametrization of the dynamics of perturbations and observations.

[ascl:1804.008] EGG: Empirical Galaxy Generator

The Empirical Galaxy Generator (EGG) generates fake galaxy catalogs and images with realistic positions, morphologies and fluxes from the far-ultraviolet to the far-infrared. The catalogs are generated by egg-gencat and stored in binary FITS tables (column oriented). Another program, egg-2skymaker, is used to convert the generated catalog into ASCII tables suitable for ingestion by SkyMaker (ascl:1010.066) to produce realistic high resolution images (e.g., Hubble-like), while egg-gennoise and egg-genmap can be used to generate the low resolution images (e.g., Herschel-like). These tools can be used to test source extraction codes, or to evaluate the reliability of any map-based science (stacking, dropout identification, etc.).

[ascl:1904.004] ehtim: Imaging, analysis, and simulation software for radio interferometry

ehtim (eht-imaging) simulates and manipulates VLBI data and produces images with regularized maximum likelihood methods. The package contains several primary classes for loading, simulating, and manipulating VLBI data. The main classes are the Image, Array, Obsdata, Imager, and Caltable classes, which provide tools for loading images and data, producing simulated data from realistic u-v tracks, calibrating, inspecting, and plotting data, and producing images from data sets in various polarizations using various data terms and regularizers.
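
Below is a sketch of that workflow, loosely following the ehtim example scripts; the file names and observing parameters are placeholders, and the positional arguments to observe() (integration time, scan cadence, start/stop hours, bandwidth) should be checked against the installed version:

```python
# Hedged ehtim sketch; 'model.fits' and 'EHT2017.txt' are placeholder files.
import ehtim as eh

im = eh.image.load_fits('model.fits')     # source image
arr = eh.array.load_txt('EHT2017.txt')    # telescope array definition
# simulate visibilities on realistic u-v tracks:
# 60 s integrations, 600 s cadence, 0-24 UT, 4 GHz bandwidth (assumed order)
obs = im.observe(arr, 60.0, 600.0, 0.0, 24.0, 4e9)
obs.plotall('u', 'v')                     # inspect the u-v coverage
```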

[ascl:2106.038] ehtplot: Plotting functions for the Event Horizon Telescope

ehtplot creates publication quality, elegant, and consistent plots. Written for the Event Horizon Telescope (EHT) Collaboration, it provides a set of easy-to-use plotting functions for EHT and Very-Long-Baseline Interferometry (VLBI) specific figures. This includes plotting visibility and images for both synthetic and real data, adding uv-tracks to the plots, and adding the expected event horizon size to the plots, among other functions.

[submitted] Eidein: Interactive Visualization Tool for Deep Active Learning

Eidein interactively visualizes a data sample for the selection of an informative (contains data with high predictive uncertainty, is diverse, but not redundant) data subsample for deep active learning. The data sample is projected to 2-D with a dimensionality reduction technique. It is visualized in an interactive scatter plot that allows a human expert to select and annotate the data subsample.

[ascl:2305.015] EIDOS: Modeling primary beams of radio astronomy antennas

EIDOS models the primary beam of radio astronomy antennas. The code can be used to create MeerKAT L-band beams from both holographic (AH) observations and EM simulations within a maximum diameter of 10 degrees. The beam model is less accurate at higher frequencies, and performs much better below 1400 MHz. The diagonal terms of the model beam Jones matrix are much better known than the off-diagonal terms. The performance of EIDOS is dependent on the quality of the given AH and EM datasets; as more accurate AH models and EM simulations become available, this pipeline can be used to create more accurate sparse representations of primary beams using Zernike polynomials.

[ascl:2101.017] Eigentools: Tools for studying linear eigenvalue problems

Eigentools is a set of tools for studying linear eigenvalue problems. The underlying eigenproblems are solved using Dedalus (ascl:1603.015), which provides a domain-specific language for partial differential equations. Eigentools extends Dedalus's EigenvalueProblem object and provides automatic rejection of unresolved eigenvalues, simple plotting of specified eigenmodes and of spectra, and computation of ε-pseudospectra for any differential-algebraic equations with user-specifiable norms. It includes tools to find critical parameters for linear stability analysis and is able to project eigenmodes onto 2- or 3-D domains for visualization. It can also output projected eigenmodes as Dedalus-formatted HDF5 files to be used as initial conditions for initial value problems, and provides simple plotting of drift ratios (both ordinal and nearest) to evaluate tolerance for eigenvalue rejection.

[ascl:1904.013] EightBitTransit: Calculate light curves from pixel grids

EightBitTransit calculates the light curve of any pixelated image transiting a star and inverts a light curve to recover the "shadow image" that produced it.

[ascl:1102.014] Einstein Toolkit for Relativistic Astrophysics

The Einstein Toolkit is a collection of software components and tools for simulating and analyzing general relativistic astrophysical systems. Such systems include gravitational wave space-times, collisions of compact objects such as black holes or neutron stars, accretion onto compact objects, core collapse supernovae and Gamma-Ray Bursts.

The Einstein Toolkit builds on numerous software efforts in the numerical relativity community including CactusEinstein, Whisky, and Carpet. The Einstein Toolkit currently uses the Cactus Framework as the underlying computational infrastructure that provides large-scale parallelization, general computational components, and a model for collaborative, portable code development.

[ascl:2012.026] EinsteinPy: General Relativity and gravitational physics problems solver

EinsteinPy performs General Relativity and gravitational physics tasks, including geodesics plotting for Schwarzschild, Kerr and Kerr Newman space-time models, calculation of Schwarzschild radius, and calculation of event horizon and ergosphere for Kerr space-time. It can perform symbolic manipulations of various tensors such as Metric, Riemann, Ricci and Christoffel symbols. EinsteinPy also features hypersurface embedding of Schwarzschild space-time, and includes other utilities and functions. It is a community-developed package and is written in Python.
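
A short sketch of the symbolic side, following the names used in the EinsteinPy documentation (treat as illustrative rather than definitive):

```python
# Hedged EinsteinPy sketch: symbolic tensors for the Schwarzschild metric.
from einsteinpy.symbolic import ChristoffelSymbols, RicciTensor
from einsteinpy.symbolic.predefined import Schwarzschild

metric = Schwarzschild()                     # predefined Schwarzschild metric
ch = ChristoffelSymbols.from_metric(metric)  # Christoffel symbols
ric = RicciTensor.from_metric(metric)        # Ricci tensor (vanishes in vacuum)
print(ch.tensor())
```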

[ascl:1904.022] eleanor: Extracted and systematics-corrected light curves for TESS-observed stars

eleanor extracts target pixel files from TESS Full Frame Images and produces systematics-corrected light curves for any star observed by the TESS mission. eleanor takes a TIC ID, a Gaia source ID, or (RA, Dec) coordinates of a star observed by TESS and returns, as a single object, a light curve and accompanying target pixel data. The process can be customized, allowing, for example, examination of intermediate data products and changing the aperture used for light curve extraction. eleanor also offers tools that make it easier to work with stars observed in multiple TESS sectors.
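
A minimal sketch following the eleanor documentation; the TIC ID and sector are placeholders:

```python
# Hedged eleanor sketch; TIC ID and sector are placeholders.
import eleanor

star = eleanor.Source(tic=38846515, sector=1)   # locate the target in a sector
data = eleanor.TargetData(star)                 # cut out pixels, build light curve
q = data.quality == 0                           # keep good-quality cadences
time, flux = data.time[q], data.corr_flux[q]    # systematics-corrected light curve
```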

[submitted] EleFits

EleFits is a modern C++ package to read and write FITS files which focuses on safety, user-friendliness, and performance.

[ascl:2108.015] ELISa: Eclipsing binaries Learning Interactive System

ELISa models light curves of close eclipsing binaries. It models surfaces of detached, semi-detached, and over-contact binaries, generates light curves, and generates stellar spots with given longitude, latitude, radius, and temperature. It can also fit radial velocity curves and light curves via an implementation of the non-linear least squares method and also via a Markov Chain Monte Carlo method.

[submitted] ELISA: Efficient Library for Spectral Analysis in High-Energy Astrophysics

ELISA is a Python library designed for efficient spectral modeling and robust statistical inference. With a user-friendly interface, ELISA streamlines the spectral analysis workflow.

The modeling framework of ELISA is flexible, allowing users to construct complex models by combining models of ELISA and XSPEC, as well as custom models. Parameters across different model components can also be linked. The models can be fitted to the spectral datasets using either Bayesian or maximum likelihood approaches. For Bayesian fitting, ELISA incorporates advanced Markov Chain Monte Carlo (MCMC) algorithms, including the No-U-Turn Sampler (NUTS), nested sampling, and affine-invariant ensemble sampling, to tackle the posterior sampling problem. For maximum likelihood estimation (MLE), ELISA includes two robust algorithms: the Levenberg-Marquardt algorithm and the Migrad algorithm from Minuit. The computation backend is based on Google's JAX, a high-performance numerical computing library, which can reduce the runtime for fitting procedures like MCMC, thereby enhancing the efficiency of analysis.

After fitting, goodness-of-fit assessment can be done with a single function call, which automatically conducts posterior predictive checks and leave-one-out cross-validation for Bayesian models, or parametric bootstrap for MLE. These methods offer greater accuracy and reliability than traditional fit-statistic/dof measures, and thus better model discovery capability. For comparing multiple candidate models, ELISA provides robust Bayesian tools such as the Widely Applicable Information Criterion (WAIC) and the Leave-One-Out Information Criterion (LOOIC), which are more reliable than AIC or BIC. Thanks to the object-oriented design, collecting the analysis results should be simple. ELISA also provides visualization tools to generate ready-for-publication figures.

ELISA is an open-source project and community contributions are welcome and greatly appreciated.

[ascl:1603.016] ellc: Light curve model for eclipsing binary stars and transiting exoplanets

ellc analyzes the light curves of detached eclipsing binary stars and transiting exoplanet systems. The model represents stars as triaxial ellipsoids, and the apparent flux from the binary is calculated using Gauss-Legendre integration over the ellipses that are the projection of these ellipsoids on the sky. The code can also calculate the flux-weighted radial velocity of the stars during an eclipse (Rossiter-McLaughlin effect). ellc can model a wide range of eclipsing binary stars and extrasolar planetary systems, and can enable the use of modern Monte Carlo methods for data analysis and model testing.
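
A minimal call sketch follows; the parameter names are taken from the documented ellc.lc interface, but the values are illustrative:

```python
# Hedged ellc sketch: light curve of a detached eclipsing binary.
import numpy as np
import ellc

t = np.linspace(0.0, 1.0, 1000)       # one orbital period (t_zero=0, period=1)
flux = ellc.lc(t,
               radius_1=0.1,          # primary radius / semi-major axis
               radius_2=0.05,         # secondary radius / semi-major axis
               sbratio=0.2,           # surface brightness ratio
               incl=89.5)             # orbital inclination in degrees
```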

[ascl:1106.024] ELMAG: Simulation of Electromagnetic Cascades

A Monte Carlo program for the simulation of electromagnetic cascades initiated by high-energy photons and electrons interacting with extragalactic background light (EBL) is presented. Pair production and inverse Compton scattering on EBL photons as well as synchrotron losses and deflections of the charged component in extragalactic magnetic fields (EGMF) are included in the simulation. Weighted sampling of the cascade development is applied to reduce the number of secondary particles and to speed up computations. As final result, the simulation procedure provides the energy, the observation angle, and the time delay of secondary cascade particles at the present epoch. Possible applications are the study of TeV blazars and the influence of the EGMF on their spectra or the calculation of the contribution from ultrahigh energy cosmic rays or dark matter to the diffuse extragalactic gamma-ray background. As an illustration, we present results for deflections and time-delays relevant for the derivation of limits on the EGMF.

[ascl:2212.022] Elysium: Observing black hole accretion disks

Elysium creates an observing screen at the desirable distance away from a black hole system. Observers set on every pixel of this screen then photograph the area toward the black hole–accretion disk system and report back what they record. This can be the accretion disk (incoming photons bring in radiation and thus energy), the black hole event horizon, or the empty space outside and beyond the system (there are no incoming photons or energy). The central black hole can be either Schwarzschild (nonrotating) or Kerr (rotating) by choice of the user.

[ascl:1203.006] EMACSS: Evolve Me A Cluster of StarS

The star cluster evolution code Evolve Me A Cluster of StarS (EMACSS) is a simple yet physically motivated computational model that describes the evolution of some fundamental properties of star clusters in static tidal fields. The prescription is based upon the flow of energy within the cluster, which is a constant fraction of the total energy per half-mass relaxation time. According to Henon's predictions, this flow is independent of the precise mechanisms for energy production within the core, and therefore does not require a complete description of the many-body interactions therein. Dynamical theory and analytic descriptions of escape mechanisms are used to construct a series of coupled differential equations expressing the time evolution of cluster mass and radius for a cluster of equal-mass stars. These equations are numerically solved using a fourth-order Runge-Kutta integration kernel; the results were benchmarked against a database of direct N-body simulations. EMACSS is publicly available and reproduces the N-body results to within ~10 per cent accuracy for the entire post-collapse evolution of star clusters.

[ascl:2106.029] EMBERS: Experimental Measurement of BEam Responses with Satellites

EMBERS provides a modular framework for radio telescopes and interferometric arrays such as the MWA, HERA, and the upcoming SKA-Low to accurately measure the all sky polarized beam responses of their antennas using weather and communication satellites. This tool enables astronomers and system engineers, all over the world, to characterize the in-situ antenna beam patterns of large arrays with ease.

[ascl:1303.002] emcee: The MCMC Hammer

emcee is an extensible, pure-Python implementation of Goodman & Weare's Affine Invariant Markov chain Monte Carlo (MCMC) Ensemble sampler. It is designed for Bayesian parameter estimation. The algorithm behind emcee has several advantages over traditional MCMC sampling methods and has excellent performance as measured by the autocorrelation time (or function calls per independent sample). One advantage of the algorithm is that it requires hand-tuning of only 1 or 2 parameters compared to ~N^2 for a traditional algorithm in an N-dimensional parameter space. Exploiting the parallelism of the ensemble method, emcee permits any user to take advantage of multiple CPU cores without extra effort.
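
A minimal sketch of the sampler in use, assuming emcee version 3; the Gaussian target is illustrative:

```python
# Hedged emcee sketch: 32 walkers sampling a 5-D Gaussian.
import numpy as np
import emcee

def log_prob(theta):
    return -0.5 * np.sum(theta**2)   # illustrative log-posterior

ndim, nwalkers = 5, 32
p0 = np.random.randn(nwalkers, ndim)              # initial walker positions
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob)
sampler.run_mcmc(p0, 1000)                        # advance the ensemble
samples = sampler.get_chain(flat=True)            # flattened posterior samples
```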

[ascl:2109.006] eMCP: e-MERLIN CASA pipeline

The e-MERLIN CASA Pipeline calibrates and processes data from the e-MERLIN radio interferometer. It works on top of CASA (ascl:1107.013) and can convert, concatenate, prepare, flag, and calibrate raw data to produce advanced calibrated products for both continuum and spectral line data. The main outputs of the data are calibration tables, calibrated data, assessment plots, preliminary images of target and calibrator sources and a summary weblog. The pipeline provides an easy, ready-to-use toolkit that delivers calibrated data in a consistent, clear, and repeatable way. A parameters file is used to control the pipeline execution, so optimization of the algorithms is straightforward and reproducible. Good quality images are usually obtained with minimum human intervention.

[ascl:1910.006] EMERGE: Empirical ModEl for the foRmation of GalaxiEs

Emerge (Empirical ModEl for the foRmation of GalaxiEs) populates dark matter halo merger trees with galaxies using simple empirical relations between galaxy and halo properties. For each model represented by a set of parameters, it computes a mock universe, which it then compares to observed statistical data to obtain a likelihood. Parameter space can be explored with several advanced stochastic algorithms such as MCMC to find the models that are in agreement with the observations.

[ascl:1201.004] emGain: Determination of EM gain of CCD

The determination of the EM gain of a CCD is best done by fitting the histogram of many low-light frames. Typically, the dark+CIC noise of a 30 ms frame is itself a sufficient amount of signal to determine the EM gain accurately with about 200 512x512 frames. The IDL code emGain takes as input a cube of frames and fits the histogram of all the pixels with the EM stage output probability function. The function returns the EM gain of the frames as well as the read-out noise and the mean signal level of the frames.

[ascl:1708.027] empiriciSN: Supernova parameter generator

empiriciSN generates realistic supernova parameters given photometric observations of a potential host galaxy, based entirely on empirical correlations measured from supernova datasets. It is intended to be used to improve supernova simulation for DES and LSST. It is extendable such that additional datasets may be added in the future to improve the fitting algorithm or so that additional light curve parameters or supernova types may be fit.

[ascl:1010.018] Emu CMB: Power spectrum emulator

Emu CMB is a fast emulator of the CMB temperature power spectrum based on CAMB (ascl:1102.026, Jan 2010 version). Emu CMB is based on a "space-filling" Orthogonal Array Latin Hypercube design in a de-correlated parameter space obtained by using a fiducial WMAP5 CMB Fisher matrix as a rotation matrix. This design strategy allows for accurate interpolation with small numbers of simulation design points. The emulator presented here is calibrated with 100 CAMB runs that are interpolated over the design space using a global quadratic polynomial fit.

[ascl:1109.012] EnBiD: Fast Multi-dimensional Density Estimation

We present a method to numerically estimate the densities of discretely sampled data based on a binary space partitioning tree. We start with a root node containing all the particles and then recursively divide each node into two nodes, each containing roughly equal numbers of particles, until each of the nodes contains only one particle. The volume of such a leaf node provides an estimate of the local density and its shape provides an estimate of the variance. We implement an entropy-based node splitting criterion that results in a significant improvement in the estimation of densities compared to earlier work. The method is completely metric-free and can be applied to an arbitrary number of dimensions. We use this method to determine the appropriate metric at each point in space and then use kernel-based methods for calculating the density. The kernel-smoothed estimates were found to be more accurate and have lower dispersion. We apply this method to determine the phase-space densities of dark matter haloes obtained from cosmological N-body simulations. We find that contrary to earlier studies, the volume distribution function v(f) of phase-space density f does not have a constant slope but rather a small hump at high phase-space densities. We demonstrate that a model in which a halo is made up by a superposition of Hernquist spheres is not capable of explaining the shape of the v(f) versus f relation, whereas a model which takes into account the contribution of the main halo separately roughly reproduces the behaviour as seen in simulations. The use of the presented method is not limited to the calculation of phase-space densities; it can be used as a general purpose data-mining tool and due to its speed and accuracy it is ideally suited for analysis of large multidimensional data sets.
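
The core idea is easy to sketch (a conceptual illustration, not the EnBiD code; it assumes distinct particle coordinates so that no leaf has zero volume):

```python
# Conceptual binary-space-partition density estimate: split the widest
# dimension at the median until each leaf holds one particle; the local
# density is ~ 1 / leaf volume. Not the EnBiD implementation.
import numpy as np

def bsp_density(points, bounds=None):
    if bounds is None:
        bounds = np.vstack([points.min(axis=0), points.max(axis=0)])
    if len(points) == 1:
        return [(points[0], 1.0 / np.prod(bounds[1] - bounds[0]))]
    d = int(np.argmax(bounds[1] - bounds[0]))     # widest dimension
    order = np.argsort(points[:, d])
    half = len(points) // 2
    cut = 0.5 * (points[order[half - 1], d] + points[order[half], d])
    lo_b, hi_b = bounds.copy(), bounds.copy()
    lo_b[1, d] = cut                              # left child keeps lower half
    hi_b[0, d] = cut                              # right child keeps upper half
    return (bsp_density(points[order[:half]], lo_b) +
            bsp_density(points[order[half:]], hi_b))
```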

[ascl:2105.014] encore: Efficient isotropic 2-, 3-, 4-, 5- and 6-point correlation functions

encore (Efficient N-point Correlator Estimation) estimates the isotropic NPCF multipoles for an arbitrary survey geometry in O(N^2) time, with optional GPU support. The code features support for the isotropic 2PCF, 3PCF, 4PCF, 5PCF and 6PCF, with the option to subtract the Gaussian 4PCF contributions at the estimator level. For the 4PCF, 5PCF and 6PCF algorithms, the runtime is dominated by sorting the spherical harmonics into bins, which has complexity O(N_galaxy x N_bins^3 x N_ell^5) [4PCF], O(N_galaxy x N_bins^4 x N_ell^8) [5PCF] or O(N_galaxy x N_bins^5 x N_ell^11) [6PCF]. The higher-point functions are slow to compute unless N_bins and N_ell are small.

[ascl:1706.007] encube: Large-scale comparative visualization and analysis of sets of multidimensional data

Encube is a qualitative, quantitative and comparative visualization and analysis framework, with application to high-resolution, immersive three-dimensional environments and desktop displays, providing a capable visual analytics experience across the display ecology. Encube includes mechanisms for the support of: 1) interactive visual analytics of sufficiently large subsets of data; 2) synchronous and asynchronous collaboration; and 3) documentation of the discovery workflow. The framework is modular, allowing additional functionalities to be included as required.

[ascl:1501.008] Enrico: Python package to simplify Fermi-LAT analysis

Enrico analyzes Fermi data. It produces spectra (model fit and flux points), maps and lightcurves for a target by editing a config file and running a python script which executes the Fermi science tool chain.

[ascl:1912.015] ENTERPRISE: Enhanced Numerical Toolbox Enabling a Robust PulsaR Inference SuitE

ENTERPRISE (Enhanced Numerical Toolbox Enabling a Robust PulsaR Inference SuitE) is a pulsar-timing analysis code which performs noise analysis, gravitational-wave searches, and timing model analysis. It uses Tempo2 (ascl:1210.015) to find the maximum-likelihood fit for the timing parameters and the basis of the fit for the red noise parameters if they are significant.

[ascl:1010.072] Enzo: AMR Cosmology Application

Enzo is an adaptive mesh refinement (AMR), grid-based hybrid code (hydro + N-Body) which is designed to do simulations of cosmological structure formation. It uses the algorithms of Berger & Colella to improve spatial and temporal resolution in regions of large gradients, such as gravitationally collapsing objects. The Enzo simulation software is incredibly flexible, and can be used to simulate a wide range of cosmological situations with the available physics packages.

Enzo has been parallelized using the MPI message-passing library and can run on any shared or distributed memory parallel supercomputer or PC cluster. Simulations using as many as 1024 processors have been successfully carried out on the San Diego Supercomputing Center's Blue Horizon, an IBM SP.

[ascl:2012.007] EOS: Equation of State for planetary impacts

EOS is an analytical equation of state which models high pressure theory and fits well to the experimental data of ∊-Fe, SiO2, Mg2SiO4, and the Earth. The cold part of the EOS is modeled after the Varpoly EOS. The thermal part is based on a new formalism of the Gruneisen parameter, which improves behavior from earlier models and bridges the gap between elasticity and thermoelasticity. The EOS includes an expanded state model, which allows for the accurate modeling of material vapor curves.

[ascl:2101.008] EphemMatch: Ephemeris matching of DR25 TCEs, KOIs, and EBs for false positive identification

EphemMatch reads in the period, epoch, positional, and other information of all the Kepler DR25 TCEs, as well as the cumulative KOI list, and lists of EBs from the Kepler Eclipsing Binary Working Group (http://keplerebs.villanova.edu) as well as several catalogs of EBs known from ground-based surveys. The code then performs matching to identify two different objects that have a statistically identical period and epoch (within some tolerance) and performs logic to identify which is the real source (the parent) and which is a false positive due to contamination from the parent (a child).

[ascl:1511.021] EPIC: E-field Parallel Imaging Correlator

E-field Parallel Imaging Correlator (EPIC), a highly parallelized Object Oriented Python package, implements the Modular Optimal Frequency Fourier (MOFF) imaging technique. It also includes visibility-based imaging using the software holography technique and a simulator for generating electric fields from a sky model. EPIC can accept dual-polarization inputs and produce images of all four instrumental cross-polarizations.

[ascl:2104.007] EPIC5: Lindblad orbits in ovally perturbed potentials

EPIC5 computes positions, velocities and densities along closed orbits of interstellar matter, including frictional forces, in a galaxy with an arbitrary perturbing potential. Radial velocities are given for chosen lines of sight. These are analytic gas orbits in an arbitrary rotating galactic potential, computed using the linear epicyclic approximation.

[ascl:1302.005] EPICS: Experimental Physics and Industrial Control System

EPICS is a set of software tools and applications developed collaboratively and used to create distributed soft real-time control systems for scientific instruments such as particle accelerators and telescopes. Such distributed control systems typically comprise tens or even hundreds of computers, networked together to allow communication between them and to provide control and feedback of the various parts of the device from a central control room, or even remotely over the internet. EPICS uses Client/Server and Publish/Subscribe techniques to communicate between the various computers. A Channel Access Gateway allows engineers and physicists elsewhere in the building to examine the current state of the IOCs, but prevents them from making unauthorized adjustments to the running system. In many cases the engineers can make a secure internet connection from home to diagnose and fix faults without having to travel to the site.
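
For illustration, process variables exposed over Channel Access can be read and written from Python with the third-party pyepics bindings (not part of EPICS itself; the PV names below are placeholders):

```python
# Reading/writing Channel Access process variables via third-party pyepics;
# 'xxx:m1.VAL' and 'xxx:m1.RBV' are placeholder PV names.
from epics import caget, caput

caput('xxx:m1.VAL', 3.0)       # request a new motor setpoint
value = caget('xxx:m1.RBV')    # read back the current position
print(value)
```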

EPICS is used by many facilities worldwide, including the Advanced Photon Source at Argonne National Laboratory, Fermilab, Keck Observatory, Laboratori Nazionali di Legnaro, Brazilian Synchrotron Light Source, Los Alamos National Laboratory, Australian Synchrotron, and Stanford Linear Accelerator Center.

[ascl:1909.013] EPOS: Exoplanet Population Observation Simulator

EPOS (Exoplanet Population Observation Simulator) simulates observations of exoplanet populations. It provides an interface between planet formation simulations and exoplanet surveys such as Kepler. EPOS can also be used to estimate planet occurrence rates and the orbital architectures of planetary systems.

[ascl:1204.017] epsnoise: Pixel noise in ellipticity and shear measurements

epsnoise simulates pixel noise in weak-lensing ellipticity and shear measurements. This open-source python code can efficiently create an intrinsic ellipticity distribution, shear it, and add noise, thereby mimicking a "perfect" measurement that is not affected by shape-measurement biases. For theoretical studies, we provide the Marsaglia distribution, which describes the ratio of normal variables in the general case of non-zero mean and correlation. We also added a convenience method that evaluates the Marsaglia distribution for the ratio of moments of a Gaussian-shaped brightness distribution, which gives a very good approximation of the measured ellipticity distribution also for galaxies with different radial profiles. We provide four shear estimators, two based on the ε ellipticity measure, two on χ. While three of them are essentially plain averages, we introduce a new estimator which requires a functional minimization.

[ascl:1802.016] eqpair: Electron energy distribution calculator

eqpair computes the electron energy distribution resulting from a balance between heating and direct acceleration of particles, and cooling processes. Electron-positron pair balance, bremsstrahlung, and Compton cooling, including external soft photon input, are among the processes considered, and the final electron distribution can be hybrid, thermal, or non-thermal.

[ascl:2102.009] EqTide: Equilibrium Tide calculations

EqTide calculates the evolution of 2 bodies experiencing tidal evolution according to the "equilibrium tide" framework's "constant-phase-lag" and "constant-time-lag" models. The input file contains a list of options that can be set, as well as output parameters that print to a file during an integration. The example input files provide a guide for the syntax and grammar of EqTide.

[ascl:1603.005] EQUIB: Atomic level populations and line emissivities calculator

The Fortran program EQUIB solves the statistical equilibrium equation for each ion and yields atomic level populations and line emissivities for given physical conditions, namely electron temperature and electron density, appropriate to the zones in an ionized nebula where the ions are expected to exist.

[ascl:2401.020] escatter: Electron scattering in Python

escatter.py performs Monte Carlo simulations of electron scattering events. The code was developed to better understand the emission lines from the interacting supernova SN 2021adxl, specifically the blue excess seen in the Hα 6563 Å emission line. escatter follows a photon that was formed in a thin interface between the supernova ejecta and surrounding material as it travels radially outwards through the dense material, scattering off electrons until it reaches an optically thin region, and plots a histogram of the emergent photons.

[ascl:1302.017] ESO-MIDAS: General tools for image processing and data reduction

The ESO-MIDAS system provides general tools for image processing and data reduction with emphasis on astronomical applications including imaging and special reduction packages for ESO instrumentation at La Silla and the VLT at Paranal. In addition it contains applications packages for stellar and surface photometry, image sharpening and decomposition, statistics, data fitting, data presentation in graphical form, and more.

[ascl:1504.003] EsoRex: ESO Recipe Execution Tool

EsoRex (ESO Recipe Execution Tool) lists, configures, and executes Common Pipeline Library (CPL) (ascl:1402.010) recipes from the command line. Its features include automatically generating configuration files, recursive recipe-path searching, command line and configuration file parameters, and recipe product naming control, among many others.

[ascl:1405.017] ESP: Extended Surface Photometry

ESP (Extended Surface Photometry) determines the photometric properties of galaxies and other extended objects. It has applications that detect flatfielding faults, remove cosmic rays, median filter images, determine image statistics and local background values, perform galaxy profiling, fit 2-D Gaussian profiles to galaxies, generate pie slice cross-sections of galaxies, and display profiling results. It is distributed as part of the Starlink software collection (ascl:1110.012).

[ascl:2306.055] ESSENCE: Evaluate spatially correlated noise in interferometric images

ESSENCE (Evaluating Statistical Significance undEr Noise CorrElation) evaluates the statistical significance of image analysis and signal detection under correlated noise in interferometric images (e.g., ALMA, NOEMA). It measures the noise autocorrelation function (ACF) to fully characterize the statistical properties of spatially correlated noise in the interferometric image, computes the noise in spatially integrated quantities (e.g., flux, spectrum) within a given aperture, and simulates noise maps with the same correlation property. ESSENCE can also construct a covariance matrix from the noise ACF, which can be used for 2D image or 3D cube model fitting.
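
Measuring a noise ACF is conceptually simple; a minimal numpy version via the Wiener-Khinchin theorem (not ESSENCE's implementation, which adds careful treatment of masking and normalization) could read:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def noise_acf(noise_map):
    """2D autocorrelation of a noise image via the Wiener-Khinchin theorem."""
    f = np.fft.fft2(noise_map - noise_map.mean())
    acf = np.fft.ifft2(np.abs(f) ** 2).real
    acf /= acf.flat[0]                    # normalize so that ACF(0) = 1
    return np.fft.fftshift(acf)           # put zero lag at the image center

# toy example: white noise smoothed by a "beam" becomes spatially correlated
rng = np.random.default_rng(0)
correlated = gaussian_filter(rng.normal(size=(256, 256)), sigma=3.0)
acf = noise_acf(correlated)               # central peak width ~ beam size
```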

[ascl:1305.001] ESTER: Evolution STEllaire en Rotation

The ESTER code computes the steady state of an isolated star of mass larger than two solar masses. The only convective region computed as such is the core, where isentropy is assumed. ESTER provides solutions of the partial differential equations for the pressure, density, temperature, angular velocity, and meridional velocity for the whole volume. The angular velocity (differential rotation) and meridional circulation are computed consistently with the structure and are driven by the baroclinic torque. The code uses spectral methods, both radially and horizontally, with spherical harmonics and Chebyshev polynomials. The iterations follow Newton's algorithm. The code is object-oriented and is written in C++; a python suite allows easy visualization of the results. While running, PGPLOT graphs are displayed to show the evolution of the iterations.

[submitted] Estimating photo-z of quasars based on a cross-modal contrastive learning method

MMLPhoto-z is a cross-modal contrastive learning approach for estimating photo-z of quasars. This method employs adversarial training and contrastive loss functions to promote the mutual conversion between multi-band photometric data features (magnitude, color) and photometric image features, while extracting modality-invariant features.

[ascl:2208.018] EstrellaNueva: Expected rates of supernova neutrinos calculator

EstrellaNueva calculates expected rates of supernova neutrinos in detectors. It provides a link between supernova simulations and the expected events in detectors by calculating fluences and event rates in order to ease any comparison between theory and observation. The software is a standalone tool for exploring many physics scenarios, and offers an option to add analytical cross sections and define any target material.

[ascl:1311.012] ETC: Exposure Time Calculator

Written for the Wide-Field Infrared Survey Telescope (WFIRST) high-latitude survey, the exposure time calculator (ETC) works in both imaging and spectroscopic modes. In addition to the standard ETC functions (e.g. background and S/N determination), the calculator integrates over the galaxy population and forecasts the density and redshift distribution of galaxy shapes usable for weak lensing (in imaging mode) and the detected emission lines (in spectroscopic mode). The program may be useful outside of WFIRST but no warranties are made regarding its suitability for general purposes. The software is available for download; IPAC maintains a web interface for those who wish to run a small number of cases without having to download the package.

[ascl:1307.018] ETC++: Advanced Exposure-Time Calculations

ETC++ is an exposure-time calculator that considers the effects of cosmic rays, undersampling, dithering, and imperfect pixel response functions. Errors on astrometry and galaxy shape measurements can be predicted, as well as photometric errors.

[ascl:2406.014] EVA: Excess Variability-based Age

EVA (Excess Variability-based Age) computes the VarX values and VarX90 ages for a given list of stars. The package retrieves information from Gaia, performs basic VarX90 calculations, then calculates the age of the group in a given band or overall (by combining all three bands). EVA then analyzes and plots the results.

[ascl:2011.015] EvapMass: Minimum mass of planets predictor

EvapMass predicts the minimum masses of planets in multi-planet systems using the photoevaporation-driven evolution model. The planetary system requires both a planet above and below the radius gap to be useful for this test. EvapMass includes an example Jupyter notebook for the Kepler-36 system. EvapMass can be used to identify TESS systems that warrant radial-velocity follow-up to further test the photoevaporation model.

[ascl:2212.002] Eventdisplay: Analysis and reconstruction package for ground-based Gamma-ray astronomy

Eventdisplay reconstructs and analyzes data from Imaging Atmospheric Cherenkov Telescopes (IACTs). It has been primarily developed for VERITAS and CTA analysis. The package performs image calibration and parametrization, event reconstruction, and stereo analysis, and trains boosted decision trees for direction and energy reconstruction. It fills and uses lookup tables for mean scaled width and length calculation, energy reconstruction, and stereo reconstruction; calculates radial camera acceptance from data files; and derives instrument response functions such as effective areas, angular point-spread function, and energy resolution. Eventdisplay offers additional tools as well, including tools for calculating sky maps and spectral energy distributions, and for plotting instrument response functions, spectral energy distributions, light curves, and sky maps, among others.

[ascl:1807.029] EVEREST: Tools for de-trending stellar photometry

EVEREST (EPIC Variability Extraction and Removal for Exoplanet Science Targets) removes instrumental noise from light curves with pixel level decorrelation and Gaussian processes. The code, written in Python, generates the EVEREST catalog and offers tools for accessing and interacting with the de-trended light curves. EVEREST exploits correlations across the pixels on the CCD to remove systematics introduced by the spacecraft’s pointing error. For K2, it yields light curves with precision comparable to that of the original Kepler mission. Interaction with the EVEREST catalog is available via the command line and through the Python interface. Though written for K2, EVEREST can be applied to additional surveys, such as the TESS mission, to correct for instrumental systematics and enable the detection of low signal-to-noise transiting exoplanets.
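
The core of pixel-level decorrelation is a regression of the aperture-summed flux onto the normalized pixel time series. A first-order toy version is sketched below, with all names illustrative; EVEREST itself adds higher PLD orders, principal-component compression, and a Gaussian process for astrophysical variability:

```python
import numpy as np

def pld_detrend(pixel_ts, flux):
    """First-order PLD sketch: pixel_ts has shape (n_cadences, n_pixels),
    flux has shape (n_cadences,)."""
    # normalized pixel fractions isolate pointing-driven systematics
    basis = pixel_ts / pixel_ts.sum(axis=1, keepdims=True)
    coeffs, *_ = np.linalg.lstsq(basis, flux, rcond=None)
    systematics = basis @ coeffs
    return flux - systematics + np.median(flux)   # detrended, level restored
```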

[ascl:2307.052] EVo: Thermodynamic magma degassing model

EVo calculates the speciation and volume of a volcanic gas phase erupting in equilibrium with its parent magma. Models can be run to calculate the gas phase in equilibrium with a melt at a single pressure, or the melt can be decompressed from depth, rising to the surface as a closed-system case. Single pressure and decompression runs are available for OH, COH, SOH, COHS, and COHSN systems. EVo can calculate the gas phase weight and volume fraction within the system, the gas phase speciation as mole fraction or weight fraction across numerous compounds, and the volatile content of the melt at each pressure. It also calculates melt density, the fO2 of the system, and more. EVo can be set up using either melt volatile contents or a set amount of atomic volatiles; the latter is preferable for conducting experiments over a wide range of fO2 values.

[ascl:2303.012] EvoEMD: Cosmic Evolution with an Early Matter-Dominated era

EvoEMD evaluates cosmic evolution with or without an early matter-dominated (EMD) era. The framework includes global parameter, particle, and process systems, and different methods for Hubble parameter calculation. EvoEMD automatically builds up the Boltzmann equation according to the user's definition of particle and process, solves the Boltzmann equation using a fourth-order Runge-Kutta method with adaptive steps tailored to cosmological applications, and caches the collision rate calculation results for fast evaluation.
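
As a reminder of the numerical core, one classical fourth-order Runge-Kutta step looks as follows. This generic sketch omits EvoEMD's adaptive step-size control and rate caching, and the freeze-out-style equation is purely illustrative:

```python
import numpy as np

def rk4_step(f, x, y, h):
    """One classical RK4 step for dy/dx = f(x, y)."""
    k1 = f(x, y)
    k2 = f(x + 0.5 * h, y + 0.5 * h * k1)
    k3 = f(x + 0.5 * h, y + 0.5 * h * k2)
    k4 = f(x + h, y + h * k3)
    return y + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0

# toy Boltzmann-like equation: abundance relaxing toward a fading "equilibrium"
f = lambda x, Y: -(Y**2 - np.exp(-2.0 * x)) / x**2
x, Y, h = 1.0, np.exp(-1.0), 0.05
while x < 50.0:
    Y = rk4_step(f, x, Y, h)
    x += h
print(Y)    # relic-style frozen-out value
```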

[ascl:1905.003] evolstate: Assign simple evolutionary states to stars

evolstate assigns crude evolutionary states (main-sequence, subgiant, red giant) to stars given an input temperature and radius/surface gravity, based on physically motivated boundaries from solar metallicity interior models.

[ascl:2307.053] EVolve: Growth and evolution of volcanically-derived atmospheres

EVolve calculates the chemical composition and surface pressure of a 1D atmosphere produced by volcanic activity on a rocky planet, as the atmosphere grows over time. Once the initial volatile content of the planet's mantle and the composition and resultant surface pressure of any pre-existing atmosphere are set, the volcanic degassing model EVo (ascl:2307.052) calculates the amount and speciation of any volcanic gases released into the atmosphere over each time step. Atmospheric processing is calculated using FastChem (ascl:1804.025); thermochemical equilibrium is assumed, so the final chemical composition of the atmosphere is calculated according to the pre-set surface temperature.

[ascl:2211.020] EXCEED-DM: EXtended Calculation of Electronic Excitations for Direct detection of Dark Matter

EXCEED-DM (EXtended Calculation of Electronic Excitations for Direct detection of Dark Matter) provides a complete framework for computing DM-electron interaction rates. Given an electronic configuration, EXCEED-DM computes the relevant electronic matrix elements, then computes particle physics specific rates from these matrix elements. This separates the approximations made about the electronic state configuration from the specific calculation being performed.

[ascl:1204.011] EXCOP: EXtraction of COsmological Parameters

The EXtraction of COsmological Parameters software (EXCOP) is a set of C and IDL programs, together with a very large database of cosmological models generated by CMBFAST (ascl:9909.004), that will compute likelihood functions for cosmological parameters given some CMB data. This is the software and database used in the Stompor et al. (2001) analysis of a high resolution Maxima1 CMB anisotropy map.

[ascl:2010.008] Exo-DMC: Exoplanet Detection Map Calculator

The Exoplanet Detection Map Calculator (Exo-DMC) performs statistical analysis of exoplanet survey results using Monte Carlo methods. Written in Python, it is the latest rendition of MESS (Multi-purpose Exoplanet Simulation System, ascl:1111.009). Exo-DMC combines the information on the target stars with instrument detection limits to estimate the probability of detection of companions within a user-defined range of masses and physical separations, ultimately generating detection probability maps. The software allows for a high level of flexibility in terms of possible assumptions on the synthetic planet population to be used for the determination of the detection probability.
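
The Monte Carlo logic behind a detection probability map can be sketched as follows; the detection-limit curve and grids here are invented for illustration and are not Exo-DMC's API:

```python
import numpy as np

rng = np.random.default_rng(2)
masses = np.logspace(0.0, 1.5, 30)          # companion masses [MJup], illustrative
seps = np.logspace(-0.5, 2.0, 30)           # semi-major axes [au], illustrative

def detection_fraction(mass, a, n_draws=1000):
    """Fraction of random orbital geometries detected given a toy limit curve."""
    cos_i = rng.random(n_draws)                      # random inclinations
    theta = 2.0 * np.pi * rng.random(n_draws)        # random orbital phases
    # projected separation of a circular orbit seen at inclination i
    proj = a * np.sqrt(np.cos(theta) ** 2 + (np.sin(theta) * cos_i) ** 2)
    # hypothetical instrument limit: minimum detectable mass vs. separation
    m_lim = 5.0 / np.sqrt(np.clip(proj, 0.1, None))
    return np.mean(mass > m_lim)

prob_map = np.array([[detection_fraction(m, a) for a in seps] for m in masses])
```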

[submitted] Exo-MerCat: a merged exoplanet catalog with Virtual Observatory connection

The heterogeneity of papers dealing with the discovery and characterization of exoplanets makes every attempt to maintain a uniform exoplanet catalog almost impossible. Four sources currently available online (NASA Exoplanet Archive, Exoplanet Orbit Database, Exoplanet Encyclopaedia, and Open Exoplanet Catalogue) are commonly used by the community, but they can hardly be compared, due to discrepancies in notations and selection criteria.
Exo-MerCat is a Python code that collects and selects the most precise measurement for all interesting planetary and orbital parameters contained in the four databases, accounting for the presence of multiple aliases for the same target. It can also download information about the host star using Virtual Observatory ConeSearch connections to major archives such as SIMBAD and those available in VizieR. A Graphical User Interface is provided to filter data based on the user's constraints and to generate the plots most commonly used in the exoplanetary community.
With Exo-MerCat, we retrieved a unique catalog that merges information from the four main databases, standardizing the output and handling notation differences. Given the available data, Exo-MerCat corrects as many of the issues that prevent a direct correspondence between items in the four databases as possible. The catalog is available as a VO resource for everyone to use and is periodically updated, according to the update rates of the source catalogs.

[ascl:1806.029] EXO-NAILER: EXOplanet traNsits and rAdIal veLocity fittER

EXO-NAILER (EXOplanet traNsits and rAdIal veLocity fittER) efficiently fits exoplanet transit lightcurves, radial velocities (RVs), or both. The code handles data taken with different instruments. For RVs, a different center-of-mass velocity can be fitted for each instrument to account for offsets between them; if jitter is included, a different jitter term can also be fitted for each instrument. For transits, a different photometric jitter can be fitted to each instrument, as can different limb-darkening coefficients and different transit depths. In addition to general options that need to be set, EXO-NAILER also requires that photometry and radial velocity options be defined for each instrument.

[ascl:2410.012] Exo-REM: 1D self-consistent radiative-equilibrium model for exoplanetary atmospheres

The 1D radiative-equilibrium model Exo-REM simulates young gas giants far from their star and brown dwarfs. Fluxes are calculated using the two-stream approximation assuming hemispheric closure. The radiative-convective equilibrium is solved assuming that the net flux (radiative + convective) is conservative. The conservation of flux over the pressure grid is solved iteratively using a constrained linear inversion method. Rayleigh scattering from H2, He, and H2O, as well as absorption and scattering by clouds (calculated from extinction coefficient, single scattering albedo, and asymmetry factor interpolated from precomputed tables for a set of wavelengths and particle radii), are also taken into account.

[ascl:1611.005] Exo-Transmit: Radiative transfer code for calculating exoplanet transmission spectra

Exo-Transmit calculates the transmission spectrum of an exoplanet atmosphere given specified input information about the planetary and stellar radii, the planet's surface gravity, the atmospheric temperature-pressure (T-P) profile, the location (in terms of pressure) of any cloud layers, the composition of the atmosphere, and opacity data for the atoms and molecules that make up the atmosphere. The code solves the equation of radiative transfer for absorption of starlight passing through the planet's atmosphere as it transits, accounting for the oblique path of light through the planetary atmosphere along an Earth-bound observer's line of sight. The fraction of light absorbed (or blocked) by the planet plus its atmosphere is calculated as a function of wavelength to produce the wavelength-dependent transmission spectrum. Functionality is provided to simulate the presence of atmospheric aerosols in two ways: an optically thick (gray) cloud deck can be generated at a user-specified height in the atmosphere, and the nominal Rayleigh scattering can be increased by a specified factor.

[ascl:2002.020] ExoCAM: Exoplanet Community Atmospheric Model

ExoCAM adapts the NCAR Community Earth System Model (CESM) for planetary and exoplanetary applications. The system files, source code, initial conditions files, and namelists provided do not run standalone. ExoCAM is a patch to be used with standard distributions of CESM version 1.2.1 (http://www.cesm.ucar.edu/models/current.html), and is also intended to be run with ExoRT (ascl:2002.019), a correlated-k radiative transfer package.

[ascl:1805.007] exocartographer: Constraining surface maps and orbital parameters of exoplanets

exocartographer solves the exo-cartography inverse problem. This flexible forward-modeling framework, written in Python, retrieves the albedo map and spin geometry of a planet based on time-resolved photometry; it uses a Markov chain Monte Carlo method to extract albedo maps and planet spin and their uncertainties. Gaussian Processes use the data to fit for the characteristic length scale of the map and enforce smooth maps.

[ascl:1803.014] ExoCross: Spectra from molecular line lists

ExoCross generates spectra and thermodynamic properties from molecular line lists in ExoMol, HITRAN, or several other formats. The code is parallelized and also shows a high degree of vectorization; it works with line profiles such as Doppler, Lorentzian and Voigt and supports several broadening schemes. ExoCross is also capable of working with the recently proposed method of super-lines. It supports calculations of lifetimes, cooling functions, specific heats and other properties. ExoCross converts between different formats, such as HITRAN, ExoMol and Phoenix, and simulates non-LTE spectra using a simple two-temperature approach. Different electronic, vibronic or vibrational bands can be simulated separately using an efficient filtering scheme based on the quantum numbers.

[ascl:2207.012] ExoCTK: Exoplanet Characterization Tool Kit

The Exoplanet Characterization ToolKit (ExoCTK) focuses primarily on the atmospheric characterization of exoplanets and provides tools for time-series observation planning, forward modeling, data reduction, limb darkening, light curve fitting, and retrievals. It contains calculators for contamination, visibility, integrations and groups, and includes several Jupyter Notebooks to aid in learning how to use the various tools included in the ExoCTK package.

[ascl:1512.011] ExoData: Open Exoplanet Catalogue exploration and analysis tool

ExoData is a python interface for accessing and exploring the Open Exoplanet Catalogue. It allows searching of planets (including alternate names) and easy navigation of hierarchy, parses spectral types and fills in missing parameters based on programmable specifications, and provides easy reference of planet parameters such as GJ1214b.ra, GJ1214b.T, and GJ1214b.R. It calculates values such as transit duration, can easily rescale units, and can be used as an input catalog for large scale simulation and analysis of planets.
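
Typical usage, as recalled from the package documentation (class and method names may differ between versions, and the catalog path is illustrative):

```python
import exodata

# point the interface at a local checkout of the Open Exoplanet Catalogue
exocat = exodata.OECDatabase('open_exoplanet_catalogue/systems/')

planet = exocat.searchPlanet('GJ1214b')     # alternate names are matched too
print(planet.ra, planet.T, planet.R)        # the parameters cited above
print(planet.calcTransitDuration())         # a derived quantity
```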

[ascl:2110.002] exodetbox: Finding planet-star projected separation extrema and difference in magnitude extrema

Exodetbox provides mathematical methods for calculating the extrema of planet-star projected separation and difference in magnitude, as well as the times when planets reach particular planet-star separations or differences in magnitude. The code also projects the 3D Keplerian orbit into a reparameterized 2D ellipse in the plane of the sky. Exodetbox is implemented in the EXOSIMS modeling software (ascl:1706.010).

[ascl:1207.001] EXOFAST: Fast transit and/or RV fitter for single exoplanet

EXOFAST is a fast, robust suite of routines written in IDL which is designed to fit exoplanetary transits and radial velocity variations simultaneously or separately, and characterize the parameter uncertainties and covariances with a Differential Evolution Markov Chain Monte Carlo method. Our code self-consistently incorporates both data sets to simultaneously derive stellar parameters along with the transit and RV parameters, resulting in consistent, but tighter constraints on an example fit of the discovery data of HAT-P-3b that is well-mixed in under two minutes on a standard desktop computer. EXOFAST has an easy-to-use online interface for several basic features of our transit and radial velocity fitting. A more robust version of EXOFAST, EXOFASTv2 (ascl:1710.003), is also available.

[ascl:1710.003] EXOFASTv2: Generalized publication-quality exoplanet modeling code

EXOFASTv2 improves upon EXOFAST (ascl:1207.001) for exoplanet modeling. It uses a differential evolution Markov Chain Monte Carlo code to fit an arbitrary number of transits (each with their own error scaling, normalization, TTV, and/or detrending parameters), an arbitrary number of RV sources (each with their own zero point and jitter), and an arbitrary number of planets, changing nothing but command line arguments and configuration files. The global model includes integrated isochrone and SED models to constrain the stellar properties and can accept priors on any fitted or derived quantities (e.g., parallax from Gaia). It is easily extensible to add additional effects or parameters.

[ascl:1201.009] ExoFit: Orbital parameters of extra-solar planets from radial velocity

ExoFit is a freely available software package for estimating orbital parameters of extra-solar planets. ExoFit can search for either one or two planets and employs a Bayesian Markov Chain Monte Carlo (MCMC) method to fit a Keplerian radial velocity curve onto the radial velocity data.

[ascl:1812.007] ExoGAN: Exoplanets Generative Adversarial Network

ExoGAN (Exoplanets Generative Adversarial Network) analyzes exoplanetary atmospheres using an unsupervised deep-learning algorithm that recognizes molecular features, atmospheric trace-gas abundances, and planetary parameters. After training, ExoGAN can be applied to a large number of instruments and planetary types and can be used either as a final atmospheric analysis or to provide prior constraints to subsequent retrieval.

[ascl:1806.020] exoinformatics: Compute the entropy of a planetary system's size-ordering

exoinformatics computes the entropy of a planetary system's size ordering using three different entropy methods: tally-scores, integral path, and change points.

[ascl:2206.003] ExoJAX: Spectrum modeling of exoplanets and brown dwarfs

ExoJAX provides auto-differentiable, line-by-line spectral modeling of exoplanets, brown dwarfs, and M dwarfs using JAX (ascl:2111.002). In a nutshell, ExoJAX allows the user to perform HMC-NUTS fitting using the latest molecular/atomic data in ExoMol, HITRAN/HITEMP, and VALD3. The code enables fully Bayesian inference on high-dispersion data, fitting the line-by-line spectral computation to the observed spectrum end-to-end (i.e., from molecular/atomic databases to real spectra) by combining it with Hamiltonian Monte Carlo in recent probabilistic programming languages such as NumPyro.

[submitted] ExoPix: Exoplanet Imaging with JWST

ExoPix is a collection of tutorials aimed at illustrating the imaging of exoplanets with the James Webb Space Telescope (JWST). ExoPix tutorials are meant to demonstrate the application of the PSF-subtraction algorithm pyKLIP (ascl:1506.001) to simulated JWST NIRCAM data. We provide simple walkthroughs of pyKLIP’s ability to reveal exoplanets, compute contrast curves, and measure exoplanet astrometry and photometry in imaged extrasolar systems.

[submitted] ExoPlanet

ExoPlanet provides a graphical interface for the construction, evaluation and application of a machine learning model in predictive analysis. With the back-end built using the numpy and scikit-learn libraries, ExoPlanet couples fast and well tested algorithms, a UI designed over the PyQt framework, and graphs rendered using Matplotlib. This serves to provide the user with a rich interface, rapid analytics and interactive visuals.

ExoPlanet is designed to have a minimal learning curve to allow researchers to focus more on the applicative aspect of machine learning algorithms rather than their implementation details. It supports both modes of learning, providing algorithms for unsupervised and supervised training, which may be done with continuous or discrete labels. The parameters of each algorithm can be adjusted to ensure the best fit for the data. Training data is read from a CSV file, and after training is complete, ExoPlanet automates the building of visual representations for the trained model. Once training and evaluation yield satisfactory results, the model may be used to make data-based predictions on a new data set.

[ascl:1910.005] exoplanet: Probabilistic modeling of transit or radial velocity observations of exoplanets

exoplanet is a toolkit for probabilistic modeling of transit and/or radial velocity observations of exoplanets and other astronomical time series using PyMC3 (ascl:1610.016), a flexible and high-performance model building language and inference engine. exoplanet extends PyMC3's language to support many of the custom functions and distributions required when fitting exoplanet datasets. These features include a fast and robust solver for Kepler's equation; scalable Gaussian processes using celerite (ascl:1709.008); and fast and accurate limb darkened light curves using the code starry (ascl:1810.005). It also offers common reparameterizations for limb darkening parameters, and planet radius and impact parameters.
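
A minimal light-curve evaluation in the style of the package tutorials; signatures vary between releases, so treat this as a sketch rather than the definitive API:

```python
import numpy as np
import exoplanet as xo

t = np.linspace(0, 10, 500)                        # days
orbit = xo.orbits.KeplerianOrbit(period=3.5, t0=1.0, b=0.3)
u = [0.3, 0.2]                                     # quadratic limb-darkening coefficients
lc = (
    xo.LimbDarkLightCurve(u)
    .get_light_curve(orbit=orbit, r=0.04, t=t)     # r: planet radius in stellar radii
    .eval()                                        # evaluate outside a PyMC3 model
)
```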

[ascl:1501.015] Exoplanet: Trans-dimensional MCMC method for exoplanet discovery

Exoplanet determines the posterior distribution of exoplanets by use of a trans-dimensional Markov Chain Monte Carlo method within Nested Sampling. This method finds the posterior distribution in a single run rather than requiring multiple runs with trial values.

[ascl:2108.021] ExoPlaSim: Exoplanet climate simulator

ExoPlaSim extends the PlaSim (ascl:2107.019) 3D general climate model to terrestrial exoplanets. It includes the PlaSim general circulation model and modifications that allow this code to run tidally-locked planets, planets with substantially different surface pressures than Earth, planets orbiting stars with different effective temperatures, super-Earths, and more. ExoPlaSim includes the ability to compute carbon-silicate weathering, dynamic orography through the glacier module (though only accumulation and ablation/evaporation/melting are included; glacial flow and spreading are not), and storm climatology.

[ascl:2404.029] ExoPlex: Thermodynamically self-consistent mass-radius-composition calculator

ExoPlex is a thermodynamically self-consistent mass-radius-composition calculator. Users input a bulk molar composition and a mass or radius, and ExoPlex will calculate the resulting radius or mass. Additionally, it will produce the planet's core mass fraction, interior mineralogy and the pressure, adiabatic temperature, gravity and density profiles as a function of depth.

[ascl:1407.008] Exopop: Exoplanet population inference

Exopop is a general hierarchical probabilistic framework for making justified inferences about the population of exoplanets. Written in python, it requires that the occurrence rate density be a smooth function of period and radius (employing a Gaussian process) and takes survey completeness and observational uncertainties into account. Exopop produces more accurate estimates of the whole population than standard procedures based on weighting by inverse detection efficiency.

[ascl:1603.010] ExoPriors: Accounting for observational bias of transiting exoplanets

ExoPriors calculates a log-likelihood penalty for an input set of transit parameters to account for observational bias (geometric and signal-to-noise ratio detection bias) of transiting exoplanets. Written in Python, the code calculates this log-likelihood penalty in one of seven user-specified cases specified with Boolean input parameters for geometric and/or SNR bias, grazing or non-grazing events, and occultation events.

[ascl:2210.006] ExoRad2: Generic point source radiometric model

ExoRad 2.0, a generic point source radiometric model, interfaces with any instrument to provide estimates of several payload performance metrics. For each target and for each photometric and spectroscopic channel, the code provides estimates of signals in pixels, saturation times, and read, photon, and dark current noise. ExoRad also provides estimates for the zodiacal background, inner sanctum, and sky foreground.

[ascl:1501.012] Exorings: Exoring modelling software

Exorings, written in Python, contains tools for displaying and fitting giant extrasolar planet ring systems; it uses FITS formatted data for input.

[ascl:1703.008] exorings: Exoring Transit Properties

Exorings is suitable for surveying entire catalogs of transiting planet candidates for exoring candidates, providing a subset of objects worthy of more detailed light curve analysis. Moreover, it is highly suited for uncovering evidence of a population of ringed planets by comparing the radius anomaly and PR-effects in ensemble studies.

[ascl:2002.019] ExoRT: Two-stream radiative transfer code

ExoRT is a flexible, two-stream radiative transfer code that interfaces with CAM/CESM (http://www.cesm.ucar.edu/models/current.html) or runs 1D offline; it is also used with ExoCAM (ascl:2002.020). Quadrature is used for the shortwave and hemispheric mean for the longwave. The gas phase optical depths are calculated using a correlated k-distribution method, with overlapping bands treated using an amount-weighted scheme. Cloud optics are treated using Mie scattering for both liquid and ice clouds, and cloud overlap is treated using the Monte Carlo Independent Column Approximation.

[ascl:2002.008] ExoSim: Simulator for predicting signal and noise in transit spectroscopy observations

ExoSim models host star and planet transit events, simulating the temporal change in stellar flux due to the transit. It is wavelength-dependent, using an input planet spectrum to determine the light curve depth for any given wavelength, and can capture temporal effects such as correlated noise. ExoSim's star spot simulator produces simulated observations that include spot and facula contamination. The code is flexible and can be generically applied to different instruments, simulating specific time-dependent processes.

[ascl:1706.010] EXOSIMS: Exoplanet Open-Source Imaging Mission Simulator

EXOSIMS generates and analyzes end-to-end simulations of space-based exoplanet imaging missions. The software is built up of interconnecting modules describing different aspects of the mission, including the observatory, optical system, and scheduler (encoding mission rules), as well as the physical universe, including the assumed distribution of exoplanets and their physical and orbital properties. Each module has a prototype implementation that is inherited by specific implementations for different mission concepts, allowing for the simulation of widely variable missions.

[ascl:1708.023] ExoSOFT: Exoplanet Simple Orbit Fitting Toolbox

ExoSOFT provides orbital analysis of exoplanets and binary star systems. It fits any combination of astrometric and radial velocity data, and offers four parameter space exploration techniques, including MCMC. It is packaged with an automated set of post-processing and plotting routines to summarize results, and is suitable for performing orbital analysis during surveys with new radial velocity and direct imaging instruments.

[ascl:2001.011] ExoTETHyS: Exoplanetary transits and eclipsing binaries modeler

ExoTETHyS models exoplanetary transits, eclipsing binaries, and related phenomena. The package calculates stellar limb-darkening coefficients down to <10 parts per million (ppm) and generates an exact transit light-curve based on the entire stellar intensity profile rather than limb-darkening coefficients.

[ascl:2302.009] EXOTIC: EXOplanet Transit Interpretation Code

EXOTIC (EXOplanet Transit Interpretation Code) reduces photometric data of transiting exoplanets into lightcurves and retrieves transit epochs and planetary radii. The software reduces images of a transiting exoplanet into a lightcurve and fits a model to the data to extract planetary information crucial to increasing the efficiency of larger observational platforms. EXOTIC is written in Python and supports the citizen science project Exoplanet Watch. The software runs on Windows, Macintosh, and Linux/Unix computers, and can also be used via Google Colab.

[ascl:1706.001] Exotrending: Fast and easy-to-use light curve detrending software for exoplanets

The simple, straightforward Exotrending code detrends exoplanet transit light curves given a light curve (flux versus time) and good ephemeris (epoch of first transit and orbital period). The code has been tested with Kepler and K2 light curves and should work with any other light curve.

[submitted] Exovetter

Exovetter is an open-source, pip-installable python package which calculates metrics on high-cadence time series photometry to distinguish between exoplanet transit signals and false positives. The package standardizes the implementation of metrics developed for the TESS, Kepler, and K2 missions, such as Odd-Even, Multiple Event Statistic, and Centroid Offset (see “Planetary Candidates Observed by Kepler. VIII.”, Thompson et al. 2018). Metrics can be run individually or together as part of a pipeline. Exovetter also includes several visualizations to further evaluate the transits and metrics.
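
As an illustration of the kind of metric involved, a self-contained odd-even check (not exovetter's API) compares the mean depths of odd- and even-numbered transits; a significant difference suggests an eclipsing binary detected at twice its true period:

```python
import numpy as np

def odd_even_significance(time, flux, period, epoch, duration):
    """Sigma-level difference between odd and even transit depths."""
    phase = (time - epoch + 0.5 * period) % period - 0.5 * period
    n_transit = np.round((time - epoch) / period).astype(int)
    in_transit = np.abs(phase) < 0.5 * duration
    odd = flux[in_transit & (n_transit % 2 == 1)]
    even = flux[in_transit & (n_transit % 2 == 0)]
    diff = abs(odd.mean() - even.mean())
    err = np.hypot(odd.std(ddof=1) / np.sqrt(odd.size),
                   even.std(ddof=1) / np.sqrt(even.size))
    return diff / err
```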

[ascl:2203.002] exoVista: Planetary systems generator

exoVista generates a "universe" of planetary systems, creating thousands of models of quasi-self-consistent planetary systems around known nearby stars at scattered light wavelengths. It efficiently records the position, velocity, spectrum, and physical parameters of all bodies as functions of time. exoVista models can be used for simulating surveys using the direct imaging, transit, astrometric, and radial velocity techniques.

[ascl:1902.009] ExPRES: Exoplanetary and Planetary Radio Emissions Simulator

ExPRES (Exoplanetary and Planetary Radio Emission Simulator) reproduces the occurrence of CMI-generated radio emissions from planetary magnetospheres, exoplanets, or star-planet interacting systems in the time-frequency plane, with special attention given to the computation of the radio emission beaming at and near its source. Physical information drawn from such radio observations may include the location and dynamics of the radio sources, the type of current system leading to electron acceleration and the electron energies, and, for exoplanetary systems, the magnetic field strength, the orbital period of the emitting body, and the rotation period, tilt, and offset of the planetary magnetic field. Most of these parameters can be remotely measured only via radio observations. The ExPRES code provides the proper framework of analysis and interpretation for past (Cassini, Voyager, Galileo), current (Juno, ground-based radio telescopes), and future (BepiColombo, Juice) observations of planetary radio emissions, as well as for future detection of radio emissions from exoplanetary systems.

[ascl:1212.013] EXSdetect: Extended X-ray Source Detection

EXSdetect is a python implementation of an X-ray source detection algorithm which is optimally designed to detect faint extended sources; it makes use of Voronoi tessellation and the friends-of-friends technique. It is a flexible tool capable of detecting extended sources down to the lowest flux levels attainable within instrumental limitations while maintaining robust photometry, high completeness, and low contamination, regardless of source morphology. EXSdetect was developed mainly to exploit the ever-increasing wealth of archival X-ray data, but is also ideally suited to explore the scientific capabilities of future X-ray facilities, with a strong focus on investigations of distant groups and clusters of galaxies.

[ascl:9906.002] EXTINCT: A computerized model of large-scale visual interstellar extinction

The program EXTINCT.FOR is a FORTRAN subroutine summarizing a three-dimensional visual Galactic extinction model, based on a number of published studies. INPUTS: Galactic latitude (degrees), Galactic longitude (degrees), and source distance (kpc). OUTPUTS (magnitudes): Extinction, extinction error, a statistical correction term, and an array containing extinction and extinction error from each subroutine. The model is useful for correcting visual magnitudes of Galactic sources (particularly in statistical models), and has been used to find Galactic extinction of extragalactic sources. The model's limited angular resolution (subroutine-dependent, but with a minimum resolution of roughly 2 degrees) is necessitated by its ability to describe three-dimensional structure.

[ascl:1708.025] extinction-distances: Estimating distances to dark clouds

Extinction-distances uses the number of foreground stars and a Galactic model of the stellar distribution to estimate the distance to dark clouds. It exploits the relatively narrow range of intrinsic near-infrared colors of stars to separate foreground from background stars. An advantage of this method is that the distribution of stellar colors in the Galactic model need not be precisely correct, only the number density as a function of distance from the Sun.

[ascl:2102.026] extinction: Dust extinction laws

extinction is an implementation of fast interstellar dust extinction laws in Python. It contains Cython-optimized implementations of empirical dust extinction laws found in the literature. Flux values can be reddened or dereddened using included functions, and all extinction laws accept a unit keyword to change the interpretation of the wavelength array from Angstroms to inverse microns. Part of this code originated in the specutils package (ascl:1902.012).
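
Typical usage, based on the package documentation (function names as remembered; consult the docs for the current API):

```python
import numpy as np
import extinction

wave = np.array([3000.0, 4000.0, 5500.0, 8000.0])   # wavelengths in Angstroms
a_lambda = extinction.ccm89(wave, 1.0, 3.1)          # A_V = 1.0 mag, R_V = 3.1
flux = np.ones_like(wave)
reddened = extinction.apply(a_lambda, flux)           # redden the flux array
```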

[ascl:1803.011] ExtLaw_H18: Extinction law code

ExtLaw_H18 generates the extinction law between 0.8 and 2.2 microns. The law is derived using the Westerlund 1 (Wd1) main sequence (A_Ks ~ 0.6 mag) and the Arches cluster field Red Clump at the Galactic Center (A_Ks ~ 2.7 mag). To derive the law, a Wd1 cluster age of 5 Myr is assumed, though changing the cluster age between 4 and 7 Myr has no effect on the law. This extinction law can be applied to highly reddened stellar populations that have similar foreground material as Wd1 and the Arches RC, namely dust from the spiral arms of the Milky Way in the Galactic Plane.

[ascl:2305.003] extrapops: Fast simulation and analysis of extra-galactic binary GW sources

extrapops simulates extra-galactic populations of gravitational wave sources and models their emission during the inspiral phase. The code approximately assesses the detectability of individual sources by LISA and computes the background due to unresolved sources in the LISA band using different methods. The simulated populations can be saved in a format compatible with the LISA LDC. Simulations are well calibrated to produce accurate background calculations and fair random generation at the tails of the distributions, which is important for accurate probabilities of detectable events. extrapops uses a number of ad hoc techniques for rapid simulation and leaves room for further optimization of up to almost an order of magnitude.

[ascl:1010.032] Extreme Deconvolution: Density Estimation using Gaussian Mixtures in the Presence of Noisy, Heterogeneous and Incomplete Data

Extreme-deconvolution is a general algorithm to infer a d-dimensional distribution function from a set of heterogeneous, noisy observations or samples. It is fast, flexible, and treats the data's individual uncertainties properly, to get the best description possible for the underlying distribution. It performs well over the full range of density estimation, from small data sets with only tens of samples per dimension, to large data sets with hundreds of thousands of data points.
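
The essence of the problem can be seen in one dimension with a single Gaussian component, where simple moment matching already deconvolves the noise; the library itself runs a full EM algorithm for multi-component, multi-dimensional mixtures:

```python
import numpy as np

rng = np.random.default_rng(6)
x_true = rng.normal(2.0, 1.0, 50_000)             # underlying N(2, 1) distribution
sigma = rng.uniform(0.5, 2.0, x_true.size)         # heteroscedastic measurement errors
x_obs = x_true + sigma * rng.normal(size=x_true.size)

mu_hat = x_obs.mean()                              # mean is unaffected by the noise
var_hat = x_obs.var() - np.mean(sigma**2)          # subtract the average noise variance
print(mu_hat, var_hat)                             # ~2.0 and ~1.0
```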

[ascl:1010.061] EyE: Enhance Your Extraction

In EyE (Enhance Your Extraction) an artificial neural network connected to pixels of a moving window (retina) is trained to associate these input stimuli to the corresponding response in one or several output image(s). The resulting filter can be loaded in SExtractor (ascl:1010.064) to operate complex, wildly non-linear filters on astronomical images. Typical applications of EyE include adaptive filtering, feature detection and cosmetic corrections.

[ascl:1407.019] EZ_Ages: Stellar population age calculator

EZ_Ages is an IDL code package that computes the mean, light-weighted stellar population age, [Fe/H], and abundance enhancements [Mg/Fe], [C/Fe], [N/Fe], and [Ca/Fe] for unresolved stellar populations. This is accomplished by comparing Lick index line strengths between the data and the stellar population models of Schiavon (2007), using a method described in Graves & Schiavon (2008). The algorithm uses the inversion of index-index model grids to determine ages and abundances, and exploits the sensitivities of the various Lick indices to measure Mg, C, N, and Ca enhancements over their solar abundances with respect to Fe.

[ascl:1210.004] EZ: A Tool For Automatic Redshift Measurement

EZ (Easy-Z) estimates redshifts for extragalactic objects. It compares the observed spectrum with a set of (user given) spectral templates to find out the best value for the redshift. To accomplish this task, it uses a highly configurable set of algorithms. EZ is easily extendible with new algorithms. It is implemented as a set of C programs and a number of python classes. It can be used as a standalone program, or the python classes can be directly imported by other applications.

[ascl:1208.021] EzGal: A Flexible Interface for Stellar Population Synthesis Models

EzGal is a flexible Python program which generates observable parameters (magnitudes, colors, and mass-to-light ratios) for arbitrary input stellar population synthesis (SPS) models; it enables simple, direct comparison of different model sets so that the uncertainty introduced by choice of model set can be quantified. EzGal is also capable of generating composite stellar population models (CSPs) for arbitrary input star-formation histories and reddening laws, and can be used to interpolate between metallicities for a given model set.

[ascl:2201.001] EzTao: Easier CARMA Modeling

EzTao models time series as a continuous-time autoregressive moving-average (CARMA) process. EzTao utilizes celerite (ascl:1709.008), a fast and scalable Gaussian Process Regression library, to evaluate the likelihood function. On average, EzTao is ten times faster than other tools relying on a Kalman filter for likelihood computation.
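
Since a damped random walk is the CARMA(1,0) process, the celerite machinery EzTao builds on can be sketched directly; EzTao's own interface wraps steps like these (the names below are celerite's, and the numbers are illustrative):

```python
import numpy as np
import celerite
from celerite import terms

# DRW / CARMA(1,0) kernel: k(tau) = a * exp(-c * tau)
kernel = terms.RealTerm(log_a=np.log(0.04), log_c=np.log(1.0 / 100.0))
gp = celerite.GP(kernel, mean=0.0)

rng = np.random.default_rng(3)
t = np.sort(rng.uniform(0.0, 1000.0, 200))       # irregular sampling, days
yerr = np.full_like(t, 0.01)
y = 0.2 * np.sin(t / 50.0) + yerr * rng.normal(size=t.size)  # toy light curve

gp.compute(t, yerr)                    # fast O(N) factorization
print(gp.log_likelihood(y))            # the likelihood a CARMA fit would maximize
```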

[ascl:1705.006] f3: Full Frame Fotometry for Kepler Full Frame Images

Light curves from the Kepler telescope rely on "postage stamp" cutouts of a few pixels near each of 200,000 target stars. These light curves are optimized for the detection of short-term signals like planet transits but induce systematics that overwhelm long-term variations in stellar flux. Longer-term effects can be recovered through analysis of the Full Frame Images, a set of calibration data obtained monthly during the Kepler mission. The Python package f3 analyzes the Full Frame Images to infer long-term astrophysical variations in the brightness of Kepler targets, such as magnetic activity or sunspots on slowly rotating stars.

[ascl:2307.062] FABADA: Non-parametric noise reduction using Bayesian inference

FABADA (Fully Adaptive Bayesian Algorithm for Data Analysis) performs non-parametric noise reduction using Bayesian inference. It iteratively evaluates possible smoothed models of the data to estimate the underlying signal that is statistically compatible with the noisy measurements. Iterations stop based on the evidence E and the χ2 statistic of the last smooth model, and the expected value of the signal is computed as a weighted average of the smooth models. Though FABADA was written for astronomical data, such as spectra (1D) or images (2D), it can be used as a general noise reduction algorithm for any one- or two-dimensional data; the only requisite of the input data is an estimation of its associated variance.
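
Usage is intentionally minimal; per the project README the estimator takes just the data and an estimate of its variance (the import path below is assumed from that README):

```python
import numpy as np
from fabada import fabada    # assumed import, per the project README

rng = np.random.default_rng(4)
x = np.arange(500)
signal = np.exp(-0.5 * ((x - 250.0) / 30.0) ** 2)    # toy emission line
sigma = 0.05
noisy = signal + rng.normal(0.0, sigma, x.size)
recovered = fabada(noisy, sigma**2)                   # Bayesian smooth estimate
```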

[ascl:1802.001] FAC: Flexible Atomic Code

FAC calculates various atomic radiative and collisional processes, including radiative transition rates, collisional excitation and ionization by electron impact, energy levels, photoionization, and autoionization, and their inverse processes radiative recombination and dielectronic capture. The package also includes a collisional radiative model to construct synthetic spectra for plasmas under different physical conditions.

[ascl:2306.038] FacetClumps: Molecular clump detection algorithm based on Facet model

FacetClumps extracts and analyses clumpy structure in molecular clouds. Written in Python and based on the Gaussian Facet model, FacetClumps extracts signal regions using morphology, and segments the signal regions into local regions with a gradient-based method. It then applies a connectivity-based minimum distance clustering method to cluster the local regions to the clump centers. FacetClumps automatically adjusts its parameters to local situations to improve adaptability, and is optimized to detect faint and overlapping clumps.

[ascl:2406.026] Faceted-HyperSARA: Parallel faceted imaging in radio interferometry

Faceted-HyperSARA images radio-interferometric wideband intensity data. Written in MATLAB, the library offers a collection of utility functions and scripts covering everything from data extraction from a radio-interferometric (RI) measurement set (MS Table) to the reconstruction of a wideband intensity image over the field of view and frequency range of interest. The code achieves high-precision imaging from large data volumes and supports data dimensionality reduction via visibility gridding and estimation of the effective noise level when reliable noise estimates are not available. Faceted-HyperSARA also corrects the w-term via w-projection and incorporates available compact Fourier models of the direction-dependent effects (DDEs) in the measurement operator.

[ascl:2210.024] Faiss: Similarity search and clustering of dense vectors library

The Faiss library performs efficient similarity search and clustering of dense vectors. It contains algorithms that search in sets of vectors of any size, up to ones that possibly do not fit in RAM. It also contains supporting code for evaluation and parameter tuning. Faiss is written in C++ with complete wrappers for Python/numpy. Some of the most useful algorithms are implemented on the GPU.
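
The canonical usage pattern from the Faiss documentation, shown here for exact L2 search:

```python
import numpy as np
import faiss

d = 64                                               # vector dimensionality
xb = np.random.default_rng(5).random((10_000, d)).astype("float32")
xq = xb[:5]                                          # query some known vectors

index = faiss.IndexFlatL2(d)                         # exact (brute-force) L2 index
index.add(xb)                                        # add the database vectors
D, I = index.search(xq, 4)                           # 4 nearest neighbors per query
print(I[0])                                          # first hit is the query itself
```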

[ascl:2001.005] FAKEOBS: Model visibilities generator

The CASA (ascl:1107.013) task FAKEOBS generates model visibilities from already-existing measurement sets. This task can be used to substitute all the visibilities of the target with simulations computed from any model image. The measurement set can contain either real or simulated data, the target can have been observed in mosaic mode, and there can be several sources (e.g., bandpass calibrator, flux/phase calibrator, and target).

[ascl:2304.004] FALCO: Fast Linearized Coronagraph Optimizer in MATLAB

FALCO (Fast Linearized Coronagraph Optimizer) performs coronagraphic focal plane wavefront correction. It includes routines for pair-wise probing estimation of the complex electric field and Electric Field Conjugation (EFC) control. FALCO utilizes and builds upon PROPER (ascl:1405.006) and rapidly computes the linearized response matrix for each DM, which facilitates re-linearization after each control step for faster DM-integrated coronagraph design and wavefront correction experiments. A Python 3 implementation of FALCO (ascl:2304.005) is also available.

[ascl:2304.005] FALCO: Fast Linearized Coronagraph Optimizer in Python

FALCO (Fast Linearized Coronagraph Optimizer) performs coronagraphic focal plane wavefront correction. It includes routines for pair-wise probing estimation of the complex electric field and Electric Field Conjugation (EFC) control. FALCO utilizes and builds upon PROPER (ascl:1405.006) and rapidly computes the linearized response matrix for each DM, which facilitates re-linearization after each control step for faster DM-integrated coronagraph design and wavefront correction experiments. A MATLAB implementation of FALCO (ascl:2304.004) is also available.

[ascl:2410.020] Falcon-DM: N-body code for inspirals in DM spikes

Falcon-DM simulates intermediate mass ratio inspirals (IMRIs) in dark matter (DM) spikes. This lightweight N-body code is written in C++ and is specifically tuned for simulating IMRIs embedded in DM spikes. It features a second-order Drift-Kick-Drift integrator using the symplectic HOLD scheme and symmetrized, individual time-steps for accurate time integration. Falcon-DM also offers post-Newtonian (PN) effects up to PN2.5 using the auxiliary velocity algorithm.
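
A generic drift-kick-drift step, the second-order building block named above, is compact; this Python sketch omits Falcon-DM's symmetrized individual time-steps and PN terms:

```python
import numpy as np

def dkd_step(pos, vel, accel, dt):
    """One second-order Drift-Kick-Drift leapfrog step."""
    pos = pos + 0.5 * dt * vel      # drift half step
    vel = vel + dt * accel(pos)     # kick full step
    pos = pos + 0.5 * dt * vel      # drift half step
    return pos, vel

# example: circular orbit in a point-mass potential with G*M = 1
accel = lambda r: -r / np.linalg.norm(r) ** 3
pos, vel = np.array([1.0, 0.0]), np.array([0.0, 1.0])
for _ in range(10_000):
    pos, vel = dkd_step(pos, vel, accel, 1e-3)
print(np.linalg.norm(pos))          # stays ~1: the orbit is well conserved
```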

[ascl:2205.004] FAlCon-DNS: Framework of time schemes for direct numerical simulation of annular convection

FAlCon-DNS (Framework of time schemes for direct numerical simulation of annular convection) solves for 2-D convection in an annulus and analyzes different time integration schemes. The framework contains a suite of IMEX, IMEXRK and RK time integration schemes. The code uses a pseudospectral method for spatial discretization. The governing equations contain both numerically stiff (diffusive) and non-stiff (advective) components for time discretization. The software offers OpenMP for parallelization.

[ascl:1509.004] FalconIC: Initial conditions generator for cosmological N-body simulations in Newtonian, Relativistic and Modified theories

FalconIC generates discrete particle positions, velocities, masses and pressures based on linear Boltzmann solutions that are computed by libraries such as CLASS and CAMB. FalconIC generates these initial conditions for any species included in the selection, including Baryons, Cold Dark Matter and Dark Energy fluids. Any species can be set in Eulerian (on a fixed grid) or Lagrangian (particle motion) representation, depending on the gauge and reality chosen. That is, for relativistic initial conditions in the synchronous comoving gauge, Dark Matter can only be described in an Eulerian representation. For all other choices (Relativistic in Longitudinal gauge, Newtonian with relativistic expansion rates, Newtonian without any notion of radiation), all species can be treated in all representations. The code also computes spectra. FalconIC is useful for comparative studies on initial conditions.

[ascl:1402.016] FAMA: Fast Automatic MOOG Analysis

FAMA (Fast Automatic MOOG Analysis), written in Perl, computes the atmospheric parameters and abundances of a large number of stars using measurements of equivalent widths (EWs) automatically and independently of any subjective approach. Based on the widely-used MOOG code, it simultaneously searches for three equilibria: excitation equilibrium, ionization balance, and the relationship between logn(FeI) and the reduced EWs. FAMA also evaluates the statistical errors on individual element abundances and errors due to the uncertainties in the stellar parameters. Convergence criteria are not fixed "a priori" but instead are based on the quality of the spectra.

[ascl:2006.021] FAMED: Extraction and mode identification of oscillation frequencies for solar-like pulsators

The FAMED (Fast and AutoMated pEak bagging with Diamonds) pipeline is a multi-platform parallelized software that performs and automates extraction and mode identification of oscillation frequencies for solar-like pulsators. The pipeline can be applied to a large variety of stars, ranging from hot F-type main sequence, up to stars evolving along the red giant branch, settled into the core-Helium-burning main sequence, and even evolved beyond towards the early asymptotic giant branch. FAMED is based on DIAMONDS (ascl:1410.001), a Bayesian parameter estimation and model comparison by means of the nested sampling Monte Carlo (NSMC) algorithm.

[ascl:1209.014] FAMIAS: Frequency Analysis and Mode Identification for AsteroSeismology

FAMIAS (Frequency Analysis and Mode Identification for Asteroseismology) is a package of software tools programmed in C++ for the analysis of photometric and spectroscopic time-series data. FAMIAS provides the analysis tools required for the steps between data reduction and seismic modeling. Two main sets of tools are incorporated in FAMIAS. The first permits searching for periodicities in the data using Fourier and non-linear least-squares fitting techniques. The second permits carrying out mode identification for the detected pulsation frequencies to determine their harmonic degree l and azimuthal order m. FAMIAS is applicable to main-sequence pulsators hotter than the Sun. This includes Gamma Dor and Delta Sct stars, slowly pulsating B (SPB) stars, and Beta Cep stars - basically all stars for which empirical mode identification is required to successfully carry out asteroseismology.

[ascl:1102.017] FARGO: Fast Advection in Rotating Gaseous Objects

FARGO is an efficient and simple modification of the standard transport algorithm used in explicit eulerian fixed polar grid codes, aimed at getting rid of the average azimuthal velocity when applying the Courant condition. This results in a much larger timestep than the usual procedure, and it is particularly well-suited to the description of a Keplerian disk where one is traditionally limited by the very demanding Courant condition on the fast orbital motion at the inner boundary. In this modified algorithm, the timestep is limited by the perturbed velocity and by the shear arising from the differential rotation. The speed-up resulting from the use of the FARGO algorithm is problem dependent. In the example presented in the code paper below, which shows the evolution of a Jupiter sized protoplanet embedded in a minimum mass protoplanetary nebula, the FARGO algorithm is about an order of magnitude faster than a traditional transport scheme, with a much smaller numerical diffusivity.

[ascl:1509.006] FARGO3D: Hydrodynamics/magnetohydrodynamics code

A successor of FARGO (ascl:1102.017), FARGO3D is a versatile HD/MHD code that runs on clusters of CPUs or GPUs, with special emphasis on protoplanetary disks. FARGO3D offers Cartesian, cylindrical or spherical geometry; 1-, 2- or 3-dimensional calculations; and orbital advection (aka FARGO) for HD and MHD calculations. As in FARGO, a simple Runge-Kutta N-body solver may be used to describe the orbital evolution of embedded point-like objects. There is no need to know CUDA; users can develop new functions in C and have them translated to CUDA automatically to run on GPUs.

[ascl:2311.014] FASMA: Stellar spectral analysis package

FASMA delivers the atmospheric stellar parameters (effective temperature, surface gravity, metallicity, microturbulence, macroturbulence, and rotational velocity) based on the spectral synthesis technique. This technique relies on the comparison of synthetic spectra with observations to yield the best-fit parameters under a χ2 minimization process. FASMA also delivers chemical abundances of 13 elements. Written in Python, the code is wrapped around MOOG (ascl:1202.009), which calculates the synthetic spectra. FASMA includes two grids of models in MOOG-readable format, Kurucz and MARCS, that cover the parameter space for both dwarf and giant stars with a metallicity limit of -5.0 dex.

[ascl:1010.010] Fast WMAP Likelihood Code and GSR PC Functions

We place functional constraints on the shape of the inflaton potential from the cosmic microwave background through a variant of the generalized slow roll approximation that allows large amplitude, rapidly changing deviations from scale-free conditions. Employing a principal component decomposition of the source function G'~3(V'/V)^2 - 2V''/V and keeping only those measured to better than 10% results in 5 nearly independent Gaussian constraints that may be used to test any single-field inflationary model where such deviations are expected. The first component implies < 3% variations at the 100 Mpc scale. One component shows a 95% CL preference for deviations around the 300 Mpc scale at the ~10% level, but the global significance is reduced considering the 5 components examined. This deviation also requires a change in the cold dark matter density which, in a flat LCDM model, is disfavored by current supernova and Hubble constant data, and can be tested with future polarization or high multipole temperature data. Its impact resembles a local running of the tilt from multipoles 30-800 but is only marginally consistent with a constant running beyond this range. For this analysis, we have implemented a ~40x faster WMAP7 likelihood method which we have made publicly available.

[ascl:1603.006] FAST-PT: Convolution integrals in cosmological perturbation theory calculator

FAST-PT calculates 1-loop corrections to the matter power spectrum in cosmology. The code utilizes Fourier methods combined with analytic expressions to reduce the computation time down to scale as N log N, where N is the number of grid points in the input linear power spectrum. FAST-PT is extremely fast, enabling mode-coupling integral computations fast enough to embed in Monte Carlo Markov Chain parameter estimation.

[ascl:1803.008] FAST: Fitting and Assessment of Synthetic Templates

FAST (Fitting and Assessment of Synthetic Templates) fits stellar population synthesis templates to broadband photometry and/or spectra. FAST is compatible with the photometric redshift code EAzY (ascl:1010.052) when fitting broadband photometry; it uses the photometric redshifts derived by EAzY, and the input files (for example, photometric catalog and master filter file) are the same. FAST fits spectra in combination with broadband photometric data points or simultaneously fits two components, allowing for an AGN contribution in addition to the host galaxy light. Depending on the input parameters, FAST outputs the best-fit redshift, age, dust content, star formation timescale, metallicity, stellar mass, star formation rate (SFR), and their confidence intervals. Though some of FAST's functions overlap with those of HYPERZ (ascl:1108.010), it differs by fitting fluxes instead of magnitudes, allows the user to completely define the grid of input stellar population parameters and easily input photometric redshifts and their confidence intervals, and calculates calibrated confidence intervals for all parameters. Note that FAST is not a photometric redshift code, though it can be used as one.

[ascl:2301.010] Fastcc: Broadband radio telescope receiver fast color corrections

Fastcc returns color corrections for different source spectra for various Cosmic Microwave Background experiments. Available in both Python and IDL, the script makes it easy to analyze the radio spectra of sources observed by multiple wide-survey CMB experiments in a consistent way.

[ascl:1804.025] FastChem: An ultra-fast equilibrium chemistry

FastChem is an equilibrium chemistry code that calculates the chemical composition of the gas phase for given temperatures and pressures. Written in C++, it is based on a semi-analytic approach and is optimized for extremely fast and accurate calculations.

[ascl:1010.037] FastChi: A Fast Chi-squared Technique For Period Search of Irregularly Sampled Data

The Fast Chi-Squared Algorithm is a fast, powerful technique for detecting periodicity. It was developed for analyzing variable stars, but is applicable to many other applications where Fast Fourier Transforms (FFTs) or other periodograms (such as Lomb-Scargle) are currently used. The Fast Chi-squared technique takes a data set (e.g., the brightness of a star measured at many different times during a series of observations) and finds the periodic function that has the best frequency and shape (to an arbitrary number of harmonics) to fit the data; a minimal sketch of the underlying statistic follows the list below. Among its advantages are:

  • Statistical efficiency: all of the data are used, weighted by their individual error bars, giving a result with a significance calibrated in well-understood Chi-squared statistics.
  • Sensitivity to harmonic content: many conventional techniques look only at the significance (or the amplitude) of the fundamental sinusoid and discard the power of the higher harmonics.
  • Insensitivity to the sample timing: you won't find a period of 24 hours just because you take your observations at night. You do not need to window your data.
  • The frequency search is gridded more tightly than the traditional "integer number of cycles over the span of observations", eliminating power loss from peaks that fall between the grid points.
  • Computational speed: The complexity of the algorithm is O(N log N), where N is the number of frequencies searched, due to its use of the FFT.
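
As a concrete illustration of the statistic described above, this direct (non-FFT) sketch fits a truncated Fourier series at each trial frequency by weighted linear least squares and records the χ2 improvement over a constant model; the real Fast Chi-squared algorithm obtains the same quantity in O(N log N) via FFTs, so treat this as a toy reference implementation only:

```python
import numpy as np

def chi2_periodogram(t, y, err, freqs, nharm=3):
    """Delta chi-squared of a harmonic fit vs. a constant, per trial frequency."""
    w = 1.0 / err**2
    ybar = np.sum(w * y) / np.sum(w)          # best-fit constant model
    chi2_const = np.sum(w * (y - ybar) ** 2)
    dchi2 = np.empty(len(freqs))
    for i, f in enumerate(freqs):
        # design matrix: constant term plus nharm harmonics of frequency f
        cols = [np.ones_like(t)]
        for k in range(1, nharm + 1):
            cols += [np.cos(2 * np.pi * k * f * t),
                     np.sin(2 * np.pi * k * f * t)]
        A = np.column_stack(cols)
        Aw = A * np.sqrt(w)[:, None]          # weighted least squares
        coef, *_ = np.linalg.lstsq(Aw, y * np.sqrt(w), rcond=None)
        resid = y - A @ coef
        dchi2[i] = chi2_const - np.sum(w * resid**2)
    return dchi2                              # peaks mark candidate periods
```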

[ascl:1908.025] FastCSWT: Fast directional Continuous Spherical Wavelet Transform

FastCSWT performs a directional continuous wavelet transform on the sphere. The transform is based on the construction of the continuous spherical wavelet transform (CSWT) developed by Antoine and Vandergheynst (1999). A fast implementation of the CSWT (based on the fast spherical convolution developed by Wandelt and Gorski 2001) is also provided.

[ascl:2212.004] FastDF: Integrating neutrino geodesics in linear theory

FastDF (Fast Distribution Function) integrates relativistic particles along geodesics in a comoving periodic volume with forces determined by cosmological linear perturbation theory. Its main application is to set up accurate particle realizations of the linear phase-space distribution of massive relic neutrinos by starting with an analytical solution deep in radiation domination. Such particle realizations are useful for Monte Carlo experiments and provide consistent initial conditions for cosmological N-body simulations. Gravitational forces are calculated from three-dimensional potential grids, which are obtained by convolving random phases with linear transfer functions using Fast Fourier Transforms. The equations of motion are solved using a symplectic leapfrog integration scheme to conserve phase-space density and prevent the build-up of errors. Particles can be exported in different gauges and snapshots are provided in the HDF5 format, compatible with N-body codes like SWIFT (ascl:1805.020) and Gadget-4 (ascl:2204.014). The code has an interface with CLASS (ascl:1106.020) for calculating transfer functions and with monofonIC (ascl:2008.024) for setting up initial conditions with dark matter, baryons, and neutrinos.
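
The "symplectic leapfrog" scheme mentioned above is the standard kick-drift-kick update; a minimal Newtonian sketch (FastDF's actual geodesic integration adds relativistic and gauge terms, and `accel` here is a hypothetical callable) is:

```python
import numpy as np

def leapfrog_step(x, v, accel, dt):
    """One kick-drift-kick step; accel(x) returns the acceleration at x."""
    v_half = v + 0.5 * dt * accel(x)           # kick: half-step velocity
    x_new = x + dt * v_half                    # drift: full-step position
    v_new = v_half + 0.5 * dt * accel(x_new)   # kick: second half-step
    return x_new, v_new
```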

[ascl:9910.003] FASTELL: Fast calculation of a family of elliptical mass gravitational lens models

Because of their simplicity, axisymmetric mass distributions are often used to model gravitational lenses. Since galaxies are usually observed to have elliptical light distributions, mass distributions with elliptical density contours offer more general and realistic lens models. They are difficult to use, however, since previous studies have shown that the deflection angle (and magnification) in this case can only be obtained by rather expensive numerical integrations. We present a family of lens models for which the deflection can be calculated to high relative accuracy (10^-5) with a greatly reduced numerical effort, for small and large ellipticity alike. This makes it easier to use these distributions for modeling individual lenses as well as for applications requiring larger computing times, such as statistical lensing studies. FASTELL is a code to calculate quickly and accurately the lensing deflection and magnification matrix for the softened power-law elliptical mass distribution (SPEMD) lens galaxy model. The SPEMD consists of a softened power-law radial distribution with elliptical isodensity contours.

[ascl:2303.013] FastJet: Jet finding in pp and e+e− collisions

The FastJet package provides fast native implementations of many sequential recombination algorithms, including the longitudinally invariant kt, inclusive Cambridge/Aachen, and anti-kt jet finders. It also provides a uniform interface to external jet finders via a plugin mechanism. FastJet also includes tools for calculating jet areas, performing background (pileup/UE) subtraction, and carrying out jet substructure analyses.

[ascl:1010.041] FASTLens (FAst STatistics for weak Lensing): Fast Method for Weak Lensing Statistics and Map Making

The analysis of weak lensing data requires accounting for missing data, such as the masking out of bright stars. To date, the majority of lensing analyses use two-point statistics of the cosmic shear field. These can be studied directly using the two-point correlation function, or in Fourier space using the power spectrum. The two-point correlation function is unbiased by missing data, but its direct calculation will soon become a burden with the exponential growth of astronomical data sets. The power spectrum is fast to estimate, but a mask correction must be applied. Other statistics can be used, but these are strongly sensitive to missing data. The solution proposed by FASTLens is to properly fill in the gaps with only N log N operations, leading to a complete weak lensing mass map from which any statistic, such as the power spectrum or bispectrum, can be computed straightforwardly and with very good accuracy.

[ascl:1302.008] FASTPHOT: A simple and quick IDL PSF-fitting routine

PSF-fitting photometry allows a simultaneous fit of a PSF profile on the sources. Many routines use PSF-fitting photometry, including IRAF/allstar, StarFinder, and Convphot. These routines are in general complex to use and slow. FASTPHOT is optimized for prior extraction (where the positions of the sources are known) and is very fast and simple.

[ascl:1905.010] FastPM: Scaling N-body Particle Mesh solver

FastPM solves the gravitational Poisson equation with a boosted particle mesh. Arbitrary time steps can be used. The code is intended for studying the formation of large-scale structure and supports plain PM and Comoving-Lagrangian (COLA) solvers. A broadband correction enforces the linear-theory growth factor at large scales. FastPM scales extremely well to hundreds of thousands of MPI ranks, which is possible through the use of the PFFT Fourier Transform library. The mesh size in FastPM can vary with time, allowing a coarse force mesh at high redshift with increased temporal resolution for accurate large-scale modes. The code supports a variety of Green's function and differentiation kernels, though for most practical simulations the choice of kernels does not make a difference. A parameter file interpreter is provided to validate and execute configuration files without running the simulation, allowing creative uses of the configuration files.

[ascl:2410.018] fastPTA: Constraining power of PTA configurations forecaster

fastPTA forecasts the sensitivity of future Pulsar Timing Array (PTA) configurations and assesses constraints on Stochastic Gravitational Wave Background (SGWB) parameters. The code can generate mock PTA catalogs with noise levels compatible with current and future PTA experiments. These catalogs can then be used to perform Fisher forecasts or MCMC simulations.

[ascl:2209.020] FastQSL: Quasi-separatrix Layers computation method

FastQSL calculates the squashing factor Q at the photosphere, on a cross section, or in a box volume, given a 3D magnetic field on Cartesian uniform or stretched grids. It is available in IDL and in an optimized version that uses Fortran for the calculations and field line tracing. A GPU accelerates the step-size-adaptive scheme for the most computationally intensive part, the field line tracing, making the code fast and efficient.

[submitted] fastrometry: Fast world coordinate solution solver

Fastrometry is a Python implementation of the fast world coordinate solution solver for FITS-standard astronomical images. When supplied with the approximate field center (±25%) and the approximate field scale (±10%) of the telescope and detector system the image comes from, fastrometry provides WCS solutions almost instantaneously. The algorithm was also originally implemented, with parallelism enabled, in the Windows FITS image processor and viewer CCDLAB (ascl:2206.021).

[ascl:2211.011] fastSHT: Fast Spherical Harmonic Transforms

fastSHT performs spherical harmonic transforms on a large number of spherical maps. It converts massive SHT operations to a BLAS level 3 problem and uses the highly optimized matrix multiplication toolkit to accelerate the computation. GPU acceleration is supported and can be very effective. The core code is written in Fortran, but a Python wrapper is provided and recommended.

[ascl:2308.005] FastSpecFit: Fast spectral synthesis and emission-line fitting of DESI spectra

FastSpecFit models the observed-frame optical spectroscopy and broadband photometry of extragalactic targets using physically grounded stellar continuum and emission-line templates. The code handles data from the Dark Energy Spectroscopic Instrument (DESI) Survey, which is amassing spectrophotometry for an unprecedented 40 million extragalactic targets, although the algorithms are general enough to accommodate other upcoming, massively multiplexed spectroscopic surveys. FastSpecFit extracts nearly 800 observed- and rest-frame quantities from each target, including light-weighted ages and stellar velocity dispersions based on the underlying stellar continuum; line-widths, velocity shifts, integrated fluxes, and equivalent widths for nearly 40 rest-frame ultraviolet, optical, and near-infrared emission lines arising from both star formation and active galactic nuclei; and K-corrections and rest-frame absolute magnitudes and colors. Moreover, FastSpecFit is designed with speed and parallelism in mind, enabling it to deliver robust model fits to tens of millions of targets.

[ascl:1507.011] FAT: Fully Automated TiRiFiC

FAT (Fully Automated TiRiFiC) is an automated procedure that fits tilted-ring models to HI data cubes of individual, well-resolved galaxies. The method builds on the 3D Tilted Ring Fitting Code (TiRiFiC, ascl:1208.008). FAT accurately models the kinematics and morphologies of galaxies with an extent of eight beams across the major axis in the inclination range 20°-90° without the need for priors such as disc inclination. FAT's performance makes it possible to model the gas kinematics of many thousands of well-resolved galaxies, which is essential for future HI surveys with the Square Kilometre Array and its pathfinders.

[ascl:1711.017] FATS: Feature Analysis for Time Series

FATS facilitates and standardizes feature extraction for time series data; it quickly and efficiently calculates a compilation of many existing light curve features. Users can characterize or analyze an astronomical photometric database, though this library is not necessarily restricted to the astronomical domain and can also be applied to any kind of time series data.

[ascl:2204.010] FBCTrack: Fragmentation and bulk composition tracking

The fragmentation and bulk composition tracking package contains two codes. The fragmentation code models fragmentation in collisions for the C version of REBOUND (ascl:1110.016); it requires setting two global parameters. It automatically produces a collision report that details the time of every collision, the bodies involved, how the collision was resolved, and how many fragments were produced; collision outcomes are assigned a numerical value. The bulk composition tracking code tracks the composition change as a function of mass exchange for bodies with a homogeneous composition. It is a post-processing code that works in conjunction with the fragmentation code and requires the collision report generated by it.

[ascl:1712.011] FBEYE: Analyzing Kepler light curves and validating flares

FBEYE, the "Flares By-Eye" detection suite, is written in IDL and analyzes Kepler light curves and validates flares. It works on any 3-column light curve that contains time, flux, and error. The success of flare identification is highly dependent on the smoothing routine, which may not be suitable for all sources.

[ascl:2302.015] FCFC: C toolkit for computing correlation functions from pair counts

FCFC (Fast Correlation Function Calculator) computes correlation functions from pair counts. It supports the isotropic 2-point correlation function (2PCF), anisotropic 2PCF, 2-D 2PCF, and 2PCF Legendre multipoles, among others. Written in C, FCFC takes advantage of three levels of parallelism that can be used simultaneously: distributed-memory processes via the Message Passing Interface (MPI), shared-memory threads via Open Multi-Processing (OpenMP), and single instruction, multiple data (SIMD).
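
Once normalized pair counts are in hand, turning them into a correlation function is a one-liner; a hedged sketch using the standard Landy-Szalay estimator (the DD, DR, RR arrays are illustrative inputs, not FCFC's interface) is:

```python
import numpy as np

def landy_szalay(DD, DR, RR):
    """xi(s) from normalized data-data, data-random, random-random counts.

    Each array holds pair counts per separation bin, normalized by the
    total number of pairs of that type.
    """
    return (DD - 2.0 * DR + RR) / RR
```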

[ascl:1505.014] FCLC: Featureless Classification of Light Curves

FCLC (Featureless Classification of Light Curves) describes the static behavior of a light curve in a probabilistic way. Individual data points are converted to densities, and probability densities are then compared instead of features. This gives rise to an independent classification that can corroborate the usefulness of the selected features.

[ascl:1806.027] fcmaker: Creating ESO-compliant finding charts for Observing Blocks on p2

fcmaker creates astronomical finding charts for Observing Blocks (OBs) on the p2 web server from the European Southern Observatory (ESO). It automates the creation of ESO-compliant finding charts for Service Mode and/or Visitor Mode OBs at the Very Large Telescope (VLT). The design of the fcmaker finding charts, based on an intimate knowledge of VLT observing procedures, is fine-tuned to best support night time operations. As an automated tool, fcmaker also allows observers to independently check visually, for the first time, the observing sequence coded inside an OB. This includes, for example, the signs of telescope and position angle offsets.

[ascl:1705.012] fd3: Spectral disentangling of double-lined spectroscopic binary stars

The spectral disentangling technique can be applied to a time series of observed spectra of a spectroscopic double-lined binary star (SB2) to determine the orbital parameters and reconstruct the spectra of the component stars, without the use of template spectra. fd3 disentangles the spectra of SB2 stars and is also capable of resolving a possible third companion. It performs the separation of spectra in Fourier space, which is faster but in several respects less versatile than wavelength-space separation. (Wavelength-space separation is implemented in the twin code CRES.) fd3 is written in C and is designed as a command-line utility for a Unix-like operating system. fd3 is a new version of FDBinary (ascl:1705.011), which is now deprecated.

[ascl:1705.011] FDBinary: A tool for spectral disentangling of double-lined spectroscopic binary stars

FDBinary disentangles spectra of SB2 stars. The spectral disentangling technique can be applied on a time series of observed spectra of an SB2 to determine the parameters of orbit and reconstruct the spectra of component stars, without the use of template spectra. The code is written in C and is designed as a command-line utility for a Unix-like operating system. FDBinary uses the Fourier-space approach in separation of composite spectra. This code has been replaced with the newer fd3 (ascl:1705.012).

[ascl:1606.011] FDIPS: Finite Difference Iterative Potential-field Solver

FDIPS is a finite difference iterative potential-field solver that can generate the 3D potential magnetic field solution based on a magnetogram. It is offered as an alternative to the spherical harmonics approach, as when the number of spherical harmonics is increased, using the raw magnetogram data given on a grid that is uniform in the sine of the latitude coordinate can result in inaccurate and unreliable results, especially in the polar regions close to the Sun. FDIPS is written in Fortran 90 and uses the MPI library for parallel execution.

[ascl:1604.011] FDPS: Framework for Developing Particle Simulators

FDPS provides the necessary functions for efficient parallel execution of particle-based simulations as templates independent of the data structure of particles and the functional form of the interaction. It is used to develop particle-based simulation programs for large-scale distributed-memory parallel supercomputers. FDPS includes templates for domain decomposition, redistribution of particles, and gathering of particle information for interaction calculation. It uses algorithms such as the Barnes-Hut tree method for long-range interactions; methods that limit the calculation to neighbor particles are used for short-range interactions. FDPS reduces the time and effort needed to develop such codes to that of writing a simple, sequential, unoptimized program of O(N^2) calculation cost, while producing compiled programs that run efficiently on large-scale parallel supercomputers.

[ascl:1806.001] feets: feATURE eXTRACTOR FOR tIME sERIES

feets characterizes and analyzes light-curves from astronomical photometric databases for modelling, classification, data cleaning, outlier detection and data analysis. It uses machine learning algorithms to determine the numerical descriptors that characterize and distinguish the different variability classes of light-curves; these range from basic statistical measures such as the mean or standard deviation to complex time-series characteristics such as the autocorrelation function. The library is not restricted to the astronomical field and could also be applied to any kind of time series. This project is a derivative work of FATS (ascl:1711.017).

[ascl:2110.018] FEniCS: Computing platform for solving partial differential equations

FEniCS solves partial differential equations (PDEs) and enables users to quickly translate scientific models into efficient finite element code. With the high-level Python and C++ interfaces to FEniCS, it is easy to get started, and FEniCS also offers powerful capabilities for more experienced programmers. FEniCS runs on a multitude of platforms ranging from laptops to high-performance clusters, and each component of the FEniCS platform has been fundamentally designed for parallel processing. This framework allows for rapid prototyping of finite element formulations and solvers on laptops and workstations, and the same code may then be deployed on large high-performance computers.

[ascl:1203.004] FERENGI: Full and Efficient Redshifting of Ensembles of Nearby Galaxy Images

Bandpass shifting and the (1+z)^5 surface brightness dimming (for a fixed-width filter) make standard tools for the extraction of structural parameters of galaxies wavelength dependent. If only a few (or one) observed high-resolution bands exist, this dependence has to be corrected to make unbiased statements on the evolution of structural parameters or on galaxy subsamples defined by morphology. FERENGI artificially redshifts low-redshift galaxy images to different redshifts by applying the correct cosmological corrections for size, surface brightness, and bandpass shifting. A set of artificially redshifted galaxies in the range 0.1<z<1.1, using a set of ~100 SDSS low-redshift (v < 7000 km s^-1) images as input, has been created to use as a training set of realistic images of galaxies of diverse morphologies and a large range of redshifts for the GEMS and COSMOS galaxy evolution projects. This training set allows other studies to investigate and quantify the effects of cosmological redshift on the determination of galaxy morphologies, distortions, and other galaxy properties that are potentially sensitive to resolution, surface brightness, and bandpass issues. The data sets are also available for download from the FERENGI website.

[ascl:2201.008] fermi-gce-flows: Infer the Galactic Center gamma-ray excess

fermi-gce-flows uses a machine learning-based technique to characterize the contribution of modeled components, including unresolved point sources, to the GCE. It can perform posterior parameter estimation while accounting for pixel-to-pixel spatial correlations in the gamma-ray map. On application to Fermi data, the method generically attributes a smaller fraction of the GCE flux to unresolved point source-like emission when compared to traditional approaches.

[ascl:1812.006] Fermipy: Fermi-LAT data analysis package

Fermipy facilitates analysis of data from the Large Area Telescope (LAT) with the Fermi Science Tools. It is built on the pyLikelihood interface of the Fermi Science Tools and provides a set of high-level tools for performing common analysis tasks, including data and model preparation with the gt-tools, extracting a spectral energy distribution (SED) of a source, and generating TS and residual maps for a region of interest. Fermipy also finds new source candidates and can localize a source or fit its spatial extension. The package uses a configuration-file driven workflow in which the analysis parameters (data selection, IRFs, and ROI model) are defined in a YAML configuration file. Analysis is executed through a Python script that calls the methods of GTAnalysis to perform different analysis operations.

[ascl:1905.011] Fermitools: Fermi Science Tools

Fermi Science Tools is a suite of tools for the analysis of data from both the Large Area Telescope (LAT) and the Gamma-ray Burst Monitor (GBM), including point source analysis for generating maps, spectra, and light curves, pulsar timing analysis, and source identification.

[ascl:2301.016] FERRE: Match physical models to measurements

FERRE matches physical models to observed data, taking a set of observations and identifying the model parameters that best reproduce the data, in a chi-squared sense. It solves the common problem of having numerical parametric models that are costly to evaluate and need to be used to interpret large data sets. FERRE provides flexibility to search for all model parameters, or hold constant some of them while searching for others. The code is written to be truly N-dimensional and fast. Model predictions are to be given as an array whose values are a function of the model parameters, i.e., numerically. FERRE holds this array in memory, or in a direct-access binary file, and interpolates in it. The code returns, in addition to the optimal set of parameters, their error covariance, and the corresponding model prediction. The code is written in FORTRAN90.
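
The pattern FERRE implements, interpolating in a precomputed model grid and searching for the chi-squared minimum, can be sketched with generic SciPy tools (FERRE itself is FORTRAN90 with its own interpolation and search algorithms; every name below is illustrative):

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator
from scipy.optimize import minimize

# hypothetical 2D model grid: fluxes on a (teff, logg) grid, n_pix pixels each
teff = np.linspace(4000.0, 7000.0, 31)
logg = np.linspace(1.0, 5.0, 17)
n_pix = 200
grid_flux = np.random.rand(len(teff), len(logg), n_pix)  # placeholder models

interp = RegularGridInterpolator((teff, logg), grid_flux,
                                 bounds_error=False, fill_value=None)

def chi2(p, obs, err):
    model = interp(np.atleast_2d(p))[0]       # interpolated model spectrum
    return np.sum((obs - model) ** 2 / err**2)

obs = grid_flux[15, 8]                        # pretend observation
err = np.full(n_pix, 0.01)
res = minimize(chi2, x0=[5500.0, 3.0], args=(obs, err), method="Nelder-Mead")
print(res.x)                                  # recovered (teff, logg)
```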

[ascl:2005.014] FETCH: Fast Extragalactic Transient Candidate Hunter

FETCH (Fast Extragalactic Transient Candidate Hunter) provides real-time classification of candidates from single pulse search pipelines. The package takes in a candidate file of frequency-time and DM-time data and, for each candidate and choice of model, provides the probability that the candidate is an FRB. FETCH also provides a framework for fine-tuning the models to further improve its performance for particular backends.

[ascl:1208.011] Fewbody: Numerical toolkit for simulating small-N gravitational dynamics

Fewbody is a numerical toolkit for simulating small-N gravitational dynamics. It is a general N-body dynamics code, although it was written for the purpose of performing scattering experiments, and therefore has several features that make it well-suited for this purpose. Fewbody uses the 8th-order Runge-Kutta Prince-Dormand integration method with 9th-order error estimate and adaptive timestep to advance the N-body system forward in time. It integrates the usual formulation of the N-body equations in configuration space, but allows for the option of global pairwise Kustaanheimo-Stiefel (K-S) regularization (Heggie 1974; Mikkola 1985). The code uses a binary tree algorithm to classify the N-body system into a set of independently bound hierarchies, and performs collisions between stars in the “sticky star” approximation. Fewbody contains a collection of command line utilities that can be used to perform individual scattering and N-body interactions, but is more generally a library of functions that can be used from within other codes.

[ascl:2005.006] FFANCY: Fast Folding Algorithm for pulsar searching

FFANCY uses the Fast Folding Algorithm (FFA) on a distributed-computing framework to search for pulsars in time-domain series data. This enables the algorithm to be applied to all-sky blind pulsar surveys. The package runs an implementation of the FFA on real or simulated pulsar time series data in either SIGPROC (ascl:1107.016) or PRESTO (ascl:1107.017) format, with a choice of additional algorithms to be used in the evaluation of each folded profile, and outputs a periodogram along with other output threads used for testing. It also contains routines that convert the periodogram output into a list of pulsar candidates with options for candidate grouping and harmonic matching, generate simulated pulsar profiles for use in testing profile evaluation algorithms independently of the FFA, provide basic statistics for the folded profiles produced by progeny, and test individual profiles using profiles produced by progeny, along with other complementary functions.

[ascl:2208.010] FFD: Flare Frequency Distribution

FFD (Flare Frequency Distribution) fits power-laws to FFDs. FFDs relate the frequency (i.e., occurrence rate) of flares to their energy, peak flux, photometric equivalent width, or other parameters. This module was created to handle disparate datasets between which the flare detection limit varies; in essence, the number of flares detected is treated as following a Poisson distribution while the flare energies are treated as following a power law.
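
One standard ingredient of such fits is the maximum-likelihood power-law index for flare energies above a detection limit (the continuous-case estimator of Clauset et al. 2009). The sketch below illustrates that statistical model, not the FFD module's own interface:

```python
import numpy as np

def powerlaw_index(energies, e_min):
    """MLE index alpha for p(E) ~ E**-alpha with E >= e_min."""
    e = np.asarray(energies, dtype=float)
    e = e[e >= e_min]                       # keep flares above the limit
    n = len(e)
    alpha = 1.0 + n / np.sum(np.log(e / e_min))
    sigma = (alpha - 1.0) / np.sqrt(n)      # asymptotic 1-sigma uncertainty
    return alpha, sigma
```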

[ascl:1911.022] FFTLog-and-beyond: Generalized FFTLog algorithm

FFTLog-and-beyond takes the FFTLog algorithm for single-Bessel integrals and generalizes it for integrals containing a derivative of the Bessel function to solve the non-Limber integrals. The full non-Limber angular power spectrum integral is simplified by noting the small contribution from unequal-time nonlinear terms; this significantly reduces the computation and avoids the double-Bessel integral. The original FFTLog algorithm is also extended to compute integrals containing derivatives of Bessel functions, which can be used to efficiently compute angular power spectra including redshift-space distortions (RSD) and Doppler effects. C and Python versions of the code are available.

[ascl:1512.017] FFTLog: Fast Fourier or Hankel transform

FFTLog is a set of Fortran subroutines that compute the fast Fourier or Hankel (= Fourier-Bessel) transform of a periodic sequence of logarithmically spaced points. FFTLog can be regarded as a natural analogue to the standard Fast Fourier Transform (FFT), in the sense that, just as the normal FFT gives the exact (to machine precision) Fourier transform of a linearly spaced periodic sequence, so also FFTLog gives the exact Fourier or Hankel transform, of arbitrary order m, of a logarithmically spaced periodic sequence.

[ascl:1201.015] FFTW: Fastest Fourier Transform in the West

FFTW is a C subroutine library for computing the discrete Fourier transform (DFT) in one or more dimensions, of arbitrary input size, and of both real and complex data (as well as of even/odd data, i.e. the discrete cosine/sine transforms or DCT/DST).

Benchmarks performed on a variety of platforms show that FFTW's performance is typically superior to that of other publicly available FFT software, and is even competitive with vendor-tuned codes. In contrast to vendor-tuned codes, however, FFTW's performance is portable: the same program will perform well on most architectures without modification.

The FFTW library is required by other codes such as StarCrash (ascl:1010.074) and Hammurabi (ascl:1201.014).
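
From Python, one common route to FFTW is the third-party pyfftw bindings (an assumption about the reader's toolchain, not part of FFTW itself); the FFTW idiom of planning a transform once and executing it many times looks like this:

```python
import numpy as np
import pyfftw

n = 1024
a = pyfftw.empty_aligned(n, dtype="complex128")  # planned input buffer
b = pyfftw.empty_aligned(n, dtype="complex128")  # planned output buffer
plan = pyfftw.FFTW(a, b)                         # planning happens here, once

a[:] = np.random.standard_normal(n) + 1j * np.random.standard_normal(n)
spectrum = plan()                                # execute the plan; returns b
```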

[ascl:2307.021] FGBuster: Parametric component separation for Cosmic Microwave Background observations

FGBuster (ForeGroundBuster) separates frequency maps into component maps and forecasts component separation both when the model is correct and when it is incorrect. FGBuster can be used for SED evaluation, intermediate component separation, multi-resolution separation, and forecasting, among other tasks.

[ascl:2409.004] FGCluster: ForeGround Clustering

FGCluster runs spectral clustering on Healpix maps for parametric foreground removal, using a map encoding the feature to cluster as input. Pixel similarity is given by the geometrical affinity of each pixel on the sphere. FGCluster can also take an uncertainty map as input, in which case the adjacency is modified in such a way that the pixel similarity also accounts for the statistical significance given by the pixel values in a map and the uncertainties.

[ascl:1909.014] fgivenx: Functional posterior plotter

fgivenx plots the predictive posterior of a function dependent on sampled parameters, for a Bayesian posterior Post(theta|D,M) described by a set of posterior samples {theta_i}~Post. Given a function y = f(x;theta) parameterized by theta, this script produces a contour plot of the conditional posterior P(y|x,D,M) in the (x,y) plane.
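
A short usage sketch following the package's documented pattern (treat the exact signature as an assumption and check the fgivenx documentation): given posterior samples of theta and a function f(x; theta), draw the contours of P(y|x):

```python
import numpy as np
import matplotlib.pyplot as plt
from fgivenx import plot_contours

f = lambda x, theta: theta[0] * x + theta[1]  # toy linear model y = m*x + c
samples = np.random.randn(1000, 2)            # stand-in posterior samples of (m, c)
x = np.linspace(-1, 1, 100)

plot_contours(f, x, samples)                  # contours of P(y|x)
plt.xlabel("x"); plt.ylabel("y")
plt.show()
```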

[ascl:2205.014] FHD: Fast Holographic Deconvolution

FHD is an open-source imaging algorithm for radio interferometers and is written in IDL. The three main use-cases for FHD are efficient image deconvolution for general radio astronomy, fast-mode Epoch of Reionization analysis, and simulation. FHD inputs beam models, calibration files, and sky model catalogs and requires input data to be in uvfits format.

[ascl:1603.014] fibmeasure: Python/Cython module to find the center of back-illuminated optical fibers in metrology images

fibmeasure finds the precise locations of the centers of back-illuminated optical fibers in images. It was developed for astronomical fiber positioning feedback via machine vision cameras and is optimized for high-magnification images where fibers appear as resolvable circles. It was originally written during the design of the WEAVE pick-and-place fiber positioner for the William Herschel Telescope.

[ascl:1111.013] FIBRE-pac: FMOS Image-based Reduction Package

The FIBRE-pac (FMOS image-based reduction package) is an IRAF-based reduction tool for the fiber multiple-object spectrograph (FMOS) of the Subaru telescope. A number of special techniques are necessary to reduce FMOS images because each image contains about 200 separate spectra, with airglow emission lines that vary in the spatial and time domains and with complicated throughput patterns for the airglow masks. Despite these complications, almost all of the reduction processes, except for a few steps, are carried out automatically by scripts in text format, making it easy to check the commands step by step. Wavelength- and flux-calibrated images together with their noise maps are obtained using this reduction package.

[ascl:2202.012] fiducial_flare: Spectra and lightcurves of a standardized far ultraviolet flare

fiducial_flare generates a reasonable approximation of the UV emission of M dwarf stars over a single flare or a series of them. The simulated radiation is resolved in both wavelength and time. The intent is to provide consistent input for applications requiring time-dependent stellar UV radiation fields that balances simplicity with realism, namely for simulations of exoplanet atmospheres.

[ascl:1307.004] FieldInf: Field Inflation exact integration routines

FieldInf is a collection of fast modern Fortran routines for computing exactly the background evolution and primordial power spectra of any single-field inflationary model. It implements reheating without any assumptions through the "reheating parameter" R, allowing robust inflationary parameter estimation and inference on the reheating energy scale. The underlying perturbation code deals with N fields minimally and/or non-minimally coupled to gravity and works for flat FLRW only.

[ascl:1708.009] FIEStool: Automated data reduction for FIber-fed Echelle Spectrograph (FIES)

FIEStool automatically reduces data obtained with the FIber-fed Echelle Spectrograph (FIES) at the Nordic Optical Telescope, a high-resolution spectrograph available on a stand-by basis, while also allowing the basic properties of the reduction to be controlled in real time by the user. It provides a Graphical User Interface and offers bias subtraction, flat-fielding, scattered-light subtraction, and specialized reduction tasks from the external packages IRAF (ascl:9911.002) and NumArray. The core of FIEStool is instrument-independent; the software, written in Python, could with minor modifications also be used for automatic reduction of data from other instruments.

[ascl:1203.013] Figaro: Data Reduction Software

Figaro (sometimes referred to as "standalone Figaro") is a data reduction system that originated at Caltech and whose development continued at the Anglo-Australian Observatory. Although it is intended to be able to deal with any sort of data, almost all its applications to date are geared towards processing optical and infrared data. Figaro uses hierarchical data structures to provide flexibility in its data file formats. Figaro was originally written to run under DEC's VMS operating system, but is now available both for VAX/VMS (by special request) and for various flavors of UNIX including Linux and MacOS.

A variant of Figaro (ascl:1411.022) is incorporated into the Starlink package (ascl:1110.012).

[ascl:1608.009] FilFinder: Filamentary structure in molecular clouds

FilFinder extracts and analyzes filamentary structure in molecular clouds. In particular, it is capable of uniformly extracting structure over a large dynamical range in intensity. It returns the main filament properties: local amplitude and background, width, length, orientation and curvature. FilFinder offers additional tools to, for example, create a filament-only image based on the properties of the radial fits. The resulting mask and skeletons may be saved in FITS format, and property tables may be saved as a CSV, FITS or LaTeX table.

[ascl:1602.007] FilTER: Filament Trait-Evaluated Reconstruction

FilTER (Filament Trait-Evaluated Reconstruction) post-processes output from DisPerSE (ascl:1302.015) to produce a set of filaments that are well-defined and have measured properties (e.g., width), then cuts the profiles, fits and assesses them to reconstruct new filaments according to defined criteria.

[ascl:2202.016] Find_Orb: Orbit determination from observations

Find_Orb takes a set of observations of an asteroid, comet, or natural or artificial satellite given in the MPC (Minor Planet Center) format, the ADES astrometric format, and/or the NEODyS or AstDyS formats, and finds the corresponding orbit.

[ascl:2210.004] Finder_charts: Create finder charts from image data of various sky surveys

Finder_charts creates multi-band finder charts from image data of various partial- and all-sky surveys such as DSS, 2MASS, WISE, UKIDSS, VHS, Pan-STARRS, and DES. It also creates a WISE time series of image data acquired between 2010 and 2021. All images are reprojected so that north is up and east is to the left. The resulting finder charts can be overplotted with corresponding catalog positions. All catalog entries within the specified field of view can be saved in a variety of formats, including ipac, csv, and tex, as can the finder charts in png, pdf, eps, and other common graphics formats. Finder_charts consists of a single Python module, which depends only on well-known packages, making it easy to install.

[ascl:2004.013] Finesse: Frequency domain INterfErometer Simulation SoftwarE

Finesse is a numeric simulation for laser interferometers and models parametric instabilities, easily providing the required mechanical-to-optical transfer functions in imperfect and arbitrary interferometer configurations using Hermite-Gaussian beams. The code has been used to apply limits to the number and type of higher order modes used in simulation and investigate the potential use of higher order Laguerre-Gauss modes to reduce thermal noise in future gravitational wave detector designs. The PyKat wrapper (ascl:2004.014) helps automate complex Finesse tasks.

[ascl:1808.006] Fips: An OpenGL based FITS viewer

FIPS is a cross-platform FITS viewer with a responsive user interface. Unlike other FITS viewers, FIPS uses GPU hardware via OpenGL to provide functionality such as zooming, panning, and level adjustments. OpenGL 2.1 and later is supported. FIPS supports all 2D image formats except floating-point formats on OpenGL 2.1. FITS image extensions have basic, limited support.

[ascl:2202.006] FIRE Studio: Movie making utilities for the FIRE simulations

FIRE Studio is a Python interface for C libraries that project Smoothed Particle Hydrodynamic (SPH) datasets. These C libraries can, in principle, be applied to any SPH dataset; the Python interface is specialized to conveniently load and format Gadget-derivative datasets such as GIZMO (ascl:1410.003). FIRE Studio is fast, memory efficient, and parallelizable. In addition to producing "1-color" projection maps for SPH datasets, the interface can produce "2-color" maps, where the pixel saturation is set by one projected quantity and the hue is set by another, and "3-color" maps, where three quantities are projected simultaneously and remapped into an RGB colorspace. FIRE Studio can model stellar emission and dust extinction to produce mock Hubble images (by default) or to model surface brightness maps for thirteen of the most common bands (plus the bolometric luminosity). It produces publication quality static images of simulation datasets and provides interpolation scripts to create movies that smoothly evolve in time (provided multiple snapshots in time of the data exist), view the dataset from different perspectives (taking advantage of shared memory buffers to allow massive parallelization), or both.

[ascl:2108.010] FIREFLY: Chi-squared minimization full spectral fitting code

FIREFLY (Fitting IteRativEly For Likelihood analYsis) derives stellar population properties of stellar systems, whether observed galaxy or star cluster spectra or model spectra from simulations. The code fits combinations of single-burst stellar population models to spectroscopic data following an iterative best-fitting process controlled by the Bayesian Information Criterion without applying priors. Solutions within a statistical cut are retained with their weight, which is arbitrary. No additive or multiplicative polynomials are used to adjust the spectral shape and no regularization is imposed. This fitting freedom allows mapping of the effect of intrinsic spectral energy distribution (SED) degeneracies, such as age, metallicity, and dust reddening, on stellar population properties, and quantifying the effect of varying input model components on such properties.

[ascl:1810.021] Firefly: Interactive exploration of particle-based data

Firefly provides interactive exploration of particle-based data in the browser. The user can filter, display vector fields, and toggle the visibility of their customizable datasets all on-the-fly. Different Firefly visualizations, complete with preconfigured data and camera view-settings, can be shared by URL. As Firefly is written in WebGL, it can be hosted online, though Firefly can also be used locally, without an internet connection. Firefly was developed with simulations of galaxy formation in mind but is flexible enough to display any particle-based data. Other features include a stereoscopic 3D picture mode and mobile compatibility.

[ascl:1908.023] FIRST Classifier: Automated compact and extended radio sources classifier

FIRST Classifier is an online system for automated classification of compact and extended radio sources. It is based on a trained deep convolutional neural network model that automates the morphological classification of compact and extended radio sources observed in the FIRST radio survey. FIRST Classifier can predict the morphological class of a single source or of a list of sources as compact or extended (FRI, FRII, and BENT).

[ascl:1202.014] FISA: Fast Integrated Spectra Analyzer

FISA (Fast Integrated Spectra Analyzer) permits fast and reasonably accurate age and reddening determinations for small angular diameter open clusters by using their integrated spectra in the 3600-7400 Å range and currently available template spectrum libraries. This algorithm and its implementation help to achieve astrophysical results in shorter times than other methods. FISA has successfully been applied to integrated spectroscopy of open clusters, both in the Galaxy and in the Magellanic Clouds, to determine ages and reddenings.

[ascl:1010.070] Fisher.py: Fisher Matrix Manipulation and Confidence Contour Plotting

Fisher.py allows you to combine constraints from multiple experiments (e.g., weak lensing + supernovae) and add priors (e.g., a flat universe) simply and easily, calculate parameter uncertainties, and plot confidence ellipses. Fisher matrix expectations for several experiments are included, as calculated by the author (time delays) and the Dark Energy Task Force (WL/SN/BAO/CL/CMB), or you can provide your own.
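
The operations this wraps, adding Fisher matrices from independent experiments, adding a Gaussian prior, and extracting a confidence ellipse, are a few lines of generic numpy (this is not Fisher.py's API, just the underlying linear algebra with toy numbers):

```python
import numpy as np

F_wl = np.array([[40.0, 10.0], [10.0, 20.0]])  # toy weak-lensing Fisher matrix
F_sn = np.array([[15.0, -5.0], [-5.0, 30.0]])  # toy supernova Fisher matrix

F = F_wl + F_sn                   # independent experiments combine additively
F[0, 0] += 1.0 / 0.05**2          # Gaussian prior (sigma = 0.05) on parameter 0

cov = np.linalg.inv(F)            # parameter covariance matrix
sigmas = np.sqrt(np.diag(cov))    # marginalized 1-sigma uncertainties

# 1-sigma ellipse axes and orientation from the covariance eigendecomposition
vals, vecs = np.linalg.eigh(cov)
semi_axes = np.sqrt(vals)
angle_deg = np.degrees(np.arctan2(vecs[1, -1], vecs[0, -1]))
```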

[ascl:1201.007] Fisher4Cast: Fisher Matrix Toolbox

The Fisher4Cast suite, which requires MATLAB, provides a standard, tested tool set for general Fisher Information matrix prediction and forecasting for use in both research and education. The toolbox design is robust and modular, allowing for easy additions and adaptation while keeping the user interface intuitive and easy to use. Fisher4Cast is completely general, but the default is coded for cosmology. It provides parameter error forecasts for cosmological surveys providing distance, Hubble expansion, and growth measurements in a general, curved FLRW background.

[ascl:2308.015] FishLSS: Fisher forecasting for Large Scale Structure surveys

FishLSS computes the Fisher information matrix for a set of observables and model parameters. It can model the redshift-space power spectrum of any biased tracer of the CDM+baryon field and the post-reconstruction galaxy power spectrum. The code also models the projected cross-correlation of galaxies with the CMB lensing convergence, the projected galaxy power spectrum, and the CMB lensing convergence power spectrum. FishLSS requires pyFFTW (ascl:2109.009), velocileptors (ascl:2308.014), and CLASS (ascl:1106.020).

[ascl:1609.004] FISHPACK: Efficient FORTRAN Subprograms for the Solution of Separable Elliptic Partial Differential Equations

The FISHPACK collection of Fortran77 subroutines solves second- and fourth-order finite difference approximations to separable elliptic Partial Differential Equations (PDEs). These include Helmholtz equations in cartesian, polar, cylindrical, and spherical coordinates, as well as more general separable elliptic equations. The solvers use the cyclic reduction algorithm. When the problem is singular, a least-squares solution is computed. Singularities induced by the coordinate system are handled, including at the origin r=0 in cylindrical coordinates, and at the poles in spherical coordinates. A modernization of FISHPACK is available as FISHPACK90 (ascl:1609.005).
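
To make the class of problem concrete: the 5-point second-order discretization of the 2D Cartesian Helmholtz equation that FISHPACK's solvers handle can be assembled and solved with a general sparse solver in a few lines (this sketch illustrates the equation being solved, not FISHPACK's cyclic-reduction internals):

```python
import numpy as np
from scipy.sparse import diags, identity, kron
from scipy.sparse.linalg import spsolve

n = 64                     # interior grid points per dimension
h = 1.0 / (n + 1)          # grid spacing on the unit square
lam = -5.0                 # Helmholtz constant (lambda <= 0 is well posed)

# 1D second-derivative operator with homogeneous Dirichlet boundaries
T = diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n)) / h**2
I = identity(n)
A = kron(I, T) + kron(T, I) + lam * identity(n * n)  # 2D Helmholtz operator

x = np.linspace(h, 1.0 - h, n)
X, Y = np.meshgrid(x, x, indexing="ij")
f = np.sin(np.pi * X) * np.sin(np.pi * Y)            # sample right-hand side

u = spsolve(A.tocsr(), f.ravel()).reshape(n, n)
# analytic check: u should approach f / (lam - 2 * pi**2)
```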

[ascl:1609.005] FISHPACK90: Efficient FORTRAN Subprograms for the Solution of Separable Elliptic Partial Differential Equations

FISHPACK90 is a modernization of the original FISHPACK (ascl:1609.004), employing Fortran90 to slightly simplify and standardize the interface to some of the routines. This collection of Fortran programs and subroutines solves second- and fourth-order finite difference approximations to separable elliptic Partial Differential Equations (PDEs). These include Helmholtz equations in cartesian, polar, cylindrical, and spherical coordinates, as well as more general separable elliptic equations. The solvers use the cyclic reduction algorithm. When the problem is singular, a least-squares solution is computed. Singularities induced by the coordinate system are handled, including at the origin r=0 in cylindrical coordinates, and at the poles in spherical coordinates. Test programs are provided for the 19 solvers. Each serves two purposes: as a template to guide you in writing your own codes utilizing the FISHPACK90 solvers, and as a demonstration on your computer that you can correctly produce FISHPACK90 executables.

[ascl:1601.016] Fit Kinematic PA: Fit the global kinematic position-angle of galaxies

Fit kinematic PA measures the global kinematic position-angle (PA) from integral field observations of a galaxy stellar or gas kinematics; the code is available in IDL and Python.

[ascl:1609.015] FIT3D: Fitting optical spectra

FIT3D fits optical spectra to deblend the underlying stellar population and the ionized gas, and extract physical information from each component. FIT3D is focused on the analysis of Integral Field Spectroscopy data, but is not restricted to it, and is the basis of Pipe3D, a pipeline used in the analysis of datasets like CALIFA, MaNGA, and SAMI. It can run iteratively or in an automatic way to derive the parameters of a large set of spectra.

[ascl:2403.010] FitCov: Fitted Covariance generation

FitCov estimates the covariance of two-point correlation functions in a way that requires fewer mocks than the standard mock-based covariance. Rather than using an analytically fixed correction to some terms that enter the jackknife covariance matrix, the code fits the correction to a mock-based covariance obtained from a small number of mocks. The fitted jackknife covariance remains unbiased, an improvement over other methods, performs well in terms of both accuracy (unbiased constraints) and precision (similar uncertainties), and requires significantly less computational power. In addition, FitCov can be easily implemented on top of the standard jackknife covariance computation.

[ascl:1305.011] FITDisk: Cataclysmic Variable Accretion Disk Demonstration Tool

FITDisk models accretion disk phenomena using a fully three-dimensional hydrodynamics calculation, and data can either be visualized as they are computed or stored to hard drive for later playback at a fast frame rate. Simulations are visualized using OpenGL graphics and the viewing angle can be changed interactively. Pseudo light curves of simulated systems can be plotted along with the associated Fourier amplitude spectrum. It provides an easy to use graphical user interface as well as 3-D interactive graphics. The code computes the evolution of a CV accretion disk, visualizes results in real time, records and plays back simulations, and generates and plots pseudo light curves and associated power spectra. FITDisk is the Windows executable form of this software; its Fortran source code is also available as DiskSim (ascl:1811.013).

[ascl:2301.005] fitOmatic: Interferometric data modeling

The fitOmatic model-fitting prototyping tool tests multi-wavelength model-fitting and exploits VLTI data. It provides tools to define simple geometrical models and conveniently adjust the model's parameters. Written in Yorick, it takes optical interferometry FITS (oifits) files as input and allows the user to define a model of the source from a set of pre-defined models, which can be combined to make more complicated models. fitOmatic then computes the Fourier Transform of the modeled brightness distribution, and synthetic observables are computed at the wavelengths and projected baselines of the observations. fitOmatic's strength is its ability to define vector-parameters, i.e., parameters that may depend on wavelength and/or time. The self-cal (ascl:2301.006) component of fitOmatic is also available as a separate code.

[ascl:2405.012] fitramp: Likelihood-based jump detection

fitramp fits a ramp to a series of nondestructive reads and detects and rejects jumps. The software performs likelihood-based jump detection for detectors read out up-the-ramp; it uses the entire set of reads to compute likelihoods. The code compares the χ2 value of a fit with and without a jump for every possible jump location. fitramp can fit ramps with and without fitting the reset value (the pedestal), and fit and mask jumps within or between groups of reads. It can also compute the bias of ramp fitting.
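
A toy version of the χ2 comparison described above, ignoring the read-noise covariance structure that fitramp models properly, fits a linear ramp to the reads with and without a jump after each read and flags the location with the largest improvement:

```python
import numpy as np

def best_jump(t, reads, sigma):
    """Return the jump location and delta chi-squared for a series of reads."""
    w = 1.0 / sigma**2

    def chi2(A):                                   # weighted least-squares fit
        Aw = A * np.sqrt(w)[:, None]
        coef, *_ = np.linalg.lstsq(Aw, reads * np.sqrt(w), rcond=None)
        r = reads - A @ coef
        return np.sum(w * r**2)

    base = np.column_stack([np.ones_like(t), t])   # pedestal + slope (ramp)
    chi2_nojump = chi2(base)
    gains = []
    for k in range(1, len(t)):                     # jump between reads k-1 and k
        step = (np.arange(len(t)) >= k).astype(float)
        gains.append(chi2_nojump - chi2(np.column_stack([base, step])))
    k_best = int(np.argmax(gains)) + 1
    return k_best, gains[k_best - 1]
```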

[ascl:1206.002] FITS Liberator: Image processing software

The ESA/ESO/NASA FITS Liberator makes it possible to process and edit astronomical science data in the FITS format to produce stunning images of the universe. Formerly a plugin for Adobe Photoshop, the current version of FITS Liberator is a stand-alone application and no longer requires Photoshop. This image processing software makes it possible to create color images using raw observations from a range of telescopes; the FITS Liberator continues to support the FITS and PDS formats, preferred by astronomers and planetary scientists respectively, which enables data to be processed from a wide range of telescopes and planetary probes, including ESO’s Very Large Telescope, the NASA/ESA Hubble Space Telescope, NASA’s Spitzer Space Telescope, ESA’s XMM–Newton Telescope and Cassini–Huygens or Mars Reconnaissance Orbiter.

[ascl:1505.029] fits2hdf: FITS to HDFITS conversion

fits2hdf ports FITS files to Hierarchical Data Format (HDF5) files in the HDFITS format. HDFITS allows faster reading of data, higher compression ratios, and higher throughput. HDFITS formatted data can be presented transparently as an in-memory FITS equivalent by changing the import lines in Python-based FITS utilities. fits2hdf includes a utility to port MeasurementSets (MS) to HDF5 files.
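
The "changing the import lines" idea can be sketched as follows, under the assumption that fits2hdf exposes its astropy.io.fits-compatible reader as pyhdfits (check the package documentation for the exact module name; the file name is hypothetical):

```python
# from astropy.io import fits as pf      # original FITS-based code
from fits2hdf import pyhdfits as pf      # same interface, reads HDFITS too

hdulist = pf.open("observation.hdfits")  # hypothetical HDFITS file
print(hdulist[0].header)
```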

[ascl:2309.014] fitScalingRelation: Fit galaxy cluster scaling relations using MCMC

fitScalingRelation fits galaxy cluster scaling relations using orthogonal or bisector regression and MCMC, taking into account errors on both variables and intrinsic scatter. Although it is geared toward fitting galaxy cluster scaling relations of all kinds, it can be used for any kind of regression problem with errors on both variables and intrinsic scatter.

[ascl:1710.018] FITSFH: Star Formation Histories

FITSFH derives star formation histories from photometry of resolved stellar populations by populating theoretical isochrones according to a chosen stellar initial mass function (IMF) and searching for the linear combination of isochrones with different ages and metallicities that best matches the data. In comparing the synthetic and real data, observational errors and incompleteness are taken into account, and a rudimentary treatment of the effect of unresolved binaries is also implemented. The code also allows for an age-dependent range of extinction values to be included in the modelling.

[ascl:1111.014] FITSH: Software Package for Image Processing

FITSH provides a standalone environment for analysis of data acquired by imaging astronomical detectors. The package provides utilities both for the full pipeline of subsequent related data processing steps (including image calibration, astrometry, source identification, photometry, differential analysis, low-level arithmetic operations, multiple image combinations, spatial transformations and interpolations, etc.) and for aiding the interpretation of the (mainly photometric and/or astrometric) results. The package also features a consistent implementation of photometry based on image subtraction, point spread function fitting, and aperture photometry, and provides easy-to-use interfaces for comparisons and for picking the most suitable method for a particular problem. The utilities in the package are built on top of the commonly used UNIX/POSIX shells (hence the name of the package); therefore, both frequently used and well-documented tools for such environments can be exploited, and managing massive amounts of data is rather convenient.

[ascl:1107.003] FITSManager: Management of Personal Astronomical Data

With the increase in personal storage capacity, it is easy to find hundreds to thousands of FITS files on the personal computer of an astrophysicist. Because the Flexible Image Transport System (FITS) is a professional data format initiated by astronomers and used mainly within that small community, few data management toolkits for FITS files exist. Astronomers need a powerful tool to help them manage their local astronomical data. Although the Virtual Observatory (VO) is a network-oriented astronomical research environment, its applications and related technologies provide useful solutions for enhancing the management and utilization of astronomical data hosted on an astronomer's personal computer. FITSManager is such a tool; it provides astronomers efficient management and utilization of their local data, bringing the VO to astronomers in a seamless and transparent way. FITSManager provides many functions for FITS file management, such as thumbnails, previews, type-dependent icons, header keyword indexing and search, and collaboration with other tools and online services. The development of FITSManager is an effort to fill the gap between the management and analysis of astronomical data.

[ascl:2201.004] FitsMap: Interactive astronomical image and catalog data visualizer

FitsMap visualizes astronomical image and catalog data. Implemented in Python, the software is a simple, lightweight tool, requires only a simple web server, and can scale to over gigapixel images with tens of millions of sources. Further, the web-based visualizations can be viewed performantly on mobile devices.

[ascl:1905.012] Fitsverify: FITS file format-verification tool

Fitsverify rigorously checks whether a FITS (Flexible Image Transport System) data file conforms to the requirements defined in Version 3.0 of the FITS Standard document; it is a standalone version of the ftverify and fverify tasks that are distributed as part of the ftools (ascl:9912.002) software package. The source code must be compiled and linked with the CFITSIO (ascl:1010.001) library. An interactive web service is also available that can verify the format of any FITS data file on a local computer or on the Web.

[ascl:2403.006] fkpt: Compute LCDM and modified gravity perturbation theory using fk-kernels

fkpt computes the 1-loop redshift-space power spectrum for tracers using perturbation theory for LCDM and modified gravity theories using "fk"-kernels. Though implemented for the Hu-Sawicki f(R) modified gravity model, it is straightforward to use for other models.

[ascl:1709.011] FLaapLUC: Fermi-LAT automatic aperture photometry light curve

Most high-energy sources detected with Fermi-LAT are blazars, which are highly variable sources. Because high-cadence, long-term monitoring at different wavelengths simultaneously is prohibitive, the study of their transient activities can help shed light on our understanding of these objects. The early detection of such potentially fast transient events is the key to triggering follow-up observations at other wavelengths. FLaapLUC (Fermi-LAT automatic aperture photometry Light C↔Urve) uses a simple aperture photometry approach to effectively detect relative flux variations in a set of predefined sources and alert potential users. Such alerts can then be used to trigger observations of these sources with other facilities. The FLaapLUC pipeline is built on top of the Science Tools provided by the Fermi-LAT collaboration and quickly generates short- or long-term Fermi-LAT light curves.

[ascl:1710.007] FLAG: Exact Fourier-Laguerre transform on the ball

FLAG is a fast implementation of the Fourier-Laguerre Transform, a novel 3D transform exploiting an exact quadrature rule of the ball to construct an exact harmonic transform in 3D spherical coordinates. The angular part of the Fourier-Laguerre transform uses the MW sampling theorem and the exact spherical harmonic transform implemented in the SSHT code (ascl:2207.034). The radial sampling scheme arises from an exact quadrature of the radial half-line using damped Laguerre polynomials. The radial transform can in fact be used to compute the spherical Bessel transform exactly, and the Fourier-Laguerre transform is thus closely related to the Fourier-Bessel transform.

[ascl:1112.007] FLAGCAL: FLAGging and CALibration Pipeline for GMRT Data

FLAGging and CALibration (FLAGCAL) is a software pipeline developed for automatic flagging and calibration of GMRT data. The pipeline can also be used to preprocess (before importing the data into AIPS) any other interferometric data, provided the data file is in FITS format and contains multiple channels and scans. A few GUI-based tools are also included for quick visualization of the data.

[ascl:2305.010] FLAGLET: Fast and exact wavelet transform on the ball

FLAGLET computes flaglet transforms with arbitrary spin direction, probing the angular features of this generic wavelet transform for rapid analysis of signals from wavelet coefficients. The code enables the decomposition of a band-limited signal into a set of flaglet maps that capture all information contained in the initial band-limited map, and it can reconstruct the individual flaglets at varying resolutions. FLAGLET relies upon the SSHT (ascl:2207.034), S2LET (ascl:1211.001), and SO3 codes to provide angular transforms and sampling theorems, as well as the FFTW (ascl:1201.015) code to compute Fourier transforms.

[ascl:1811.007] Flame: Near-infrared and optical spectroscopy data reduction pipeline

Flame reduces near-infrared and optical multi-object spectroscopic data. Although the pipeline was created for the LUCI instrument at the Large Binocular Telescope, Flame, written in IDL, is modular and can be adapted to work with data from other instruments. The software uses 2D transformations, thus using one interpolation step to wavelength calibrate and rectify the data. The γ(x, y) transformation also includes the spatial misalignment between frames, which can be measured from a reference star observed simultaneously with the science targets; sky subtraction can be performed via nodding and/or modelling of the sky spectrum.

[submitted] FLARE: Synthetic Fast Radio Burst catalog generator

FLARE, a parallel code written in Python, generates a synthetic catalog of 100,000 Fast Radio Bursts (FRBs) using the Monte Carlo method. The FRB population is diverse and includes sporadic FRBs, repeaters, and periodic repeaters; however, fewer than 200 FRBs have been detected to date, which makes understanding the FRB population difficult. To tackle this problem, FLARE generates 100,000 realistic FRBs that can be analyzed later for further research. It can simulate FRB distances (based on the observed FRB distance range), energies (based on the "flaring magnetar model" of FRBs), fluences, multi-wavelength counterparts (based on the X-ray to radio fluence ratio of FRB 200428), and other properties. It analyzes the resulting synthetic FRB catalog and displays the distribution of the properties. It is fast (as a result of its parallel implementation) and requires minimal human interaction, and is therefore able to give a broad picture of the FRB population.

[submitted] Flash-X: A Performance Portable, Multiphysics Simulation Software Instrument

Flash-X simulates physical phenomena in several scientific domains, primarily those involving compressible or incompressible reactive flows, using Eulerian adaptive mesh and particle techniques. It derives some of its solvers from and is a descendant of FLASH (ascl:1010.082). Flash-X has a new framework that relies on abstractions and asynchronous communications for performance portability across a range of heterogeneous hardware platforms, including exascale machines. It also includes new physics capabilities, such as the Spark general relativistic magnetohydrodynamics (GRMHD) solver, and supports interoperation with the AMReX mesh framework, the HYPRE linear solver package, and the Thornado neutrino radiation hydrodynamics package, among others.

[ascl:1010.082] FLASH: Adaptive Mesh Hydrodynamics Code for Modeling Astrophysical Thermonuclear Flashes

The FLASH code, currently in its 4th version, is a publicly available high performance application code which has evolved into a modular, extensible software system from a collection of unconnected legacy codes. FLASH consists of inter-operable modules that can be combined to generate different applications. The FLASH architecture allows arbitrarily many alternative implementations of its components to co-exist and interchange with each other. A simple and elegant mechanism exists for customization of code functionality without the need to modify the core implementation of the source. A built-in unit test framework combined with regression tests that run nightly on multiple platforms verify the code.

[ascl:1606.015] FLASK: Full-sky Lognormal Astro-fields Simulation Kit

FLASK (Full-sky Lognormal Astro-fields Simulation Kit) makes tomographic realizations on the sphere of an arbitrary number of correlated lognormal or Gaussian random fields; it can create joint simulations of clustering and lensing with sub-per-cent accuracy over relevant angular scales and redshift ranges. It is C++ code parallelized with OpenMP; FLASK generates fast full-sky simulations of cosmological large-scale structure observables such as multiple matter density tracers (galaxies, quasars, dark matter haloes), CMB temperature anisotropies and weak lensing convergence and shear fields. The multiple fields can be generated tomographically in an arbitrary number of redshift slices and all their statistical properties (including cross-correlations) are determined by the angular power spectra supplied as input and the multivariate lognormal (or Gaussian) distribution assumed for the fields. Effects like redshift space distortions, Doppler distortions, magnification biases, evolution and intrinsic alignments can be introduced in the simulations via the input power spectra, which must be supplied by the user.

[ascl:2111.012] flatstar: Make 2d intensity maps of limb-darkened stars

flatstar is an open-source Python tool for drawing stellar disks as numpy.ndarray objects with scientifically-rigorous limb darkening. Each pixel has an accurate fractional intensity in relation to the total stellar intensity of 1.0. It is ideal for ray-tracing simulations of stars and planetary transits. The code is fast, has the most well-known limb-darkening laws, including linear, quadratic, square-root, logarithmic, and exponential, and allows the user to implement custom limb-darkening laws. flatstar also offers supersampling for situations where both coarse arrays and precision in stellar disk intensity (i.e., no hard pixel boundaries) are desired, and upscaling to save on computation time when high-resolution intensity maps are needed, though at some loss of precision in the intensities.
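
The quadratic law that flatstar implements can be sketched in a few lines of numpy. The snippet below illustrates the underlying intensity profile only; it is not flatstar's own API, and the function and parameter names are invented for the example.

import numpy as np

def quadratic_disk(grid_size=200, u1=0.4, u2=0.2):
    # I(mu)/I(1) = 1 - u1*(1 - mu) - u2*(1 - mu)**2, with mu the cosine of
    # the angle between the local surface normal and the line of sight
    x = np.linspace(-1.0, 1.0, grid_size)
    xx, yy = np.meshgrid(x, x)
    r2 = xx**2 + yy**2
    mu = np.sqrt(np.clip(1.0 - r2, 0.0, None))       # mu -> 0 at the limb
    intensity = 1.0 - u1 * (1.0 - mu) - u2 * (1.0 - mu)**2
    intensity[r2 > 1.0] = 0.0                        # outside the stellar disk
    return intensity / intensity.sum()               # total intensity 1.0

disk = quadratic_disk()
print(disk.sum())                                    # ~1.0, as in flatstar's convention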

[ascl:2308.002] FLATW'RM: Finding flares in Kepler data using machine-learning tools

FLATW'RM (FLAre deTection With Ransac Method) detects stellar flares in light curves using a classical machine-learning method. The code tries to find a rotation period in the light curve and splits the data into detection windows. The light curve sections are fit with the robust fitting algorithm RANSAC (RANdom SAmple Consensus); outlier points (flare candidates) above the pre-set detection level are marked for each section. For a given detection window, only those flare candidates that have at least a given number of consecutive points (three by default) are kept and marked as flares. When using FLATW'RM, the code's output should be checked to determine whether changes to the default settings are needed to account for light curve noise, data sampling frequency, and scientific needs.
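
The detection idea (fit a window robustly, then flag high outliers) can be illustrated with scikit-learn's RANSAC regressor on a synthetic light curve. This is a sketch of the technique, not FLATW'RM's own interface; the trend model, noise level, and 5-sigma threshold are invented for the example.

import numpy as np
from sklearn.linear_model import RANSACRegressor

rng = np.random.default_rng(42)
t = np.linspace(0.0, 1.0, 300)                 # one detection window
flux = 1.0 + 0.02 * t - 0.03 * t**2 + rng.normal(0, 0.002, t.size)
flux[150:156] += 0.03 * np.exp(-np.arange(6) / 2.0)    # injected toy flare

X = np.vander(t, 4)                            # cubic trend model for the window
model = RANSACRegressor().fit(X, flux)         # robust consensus fit
residual = flux - model.predict(X)
sigma = residual[model.inlier_mask_].std()     # noise level from the inliers
candidates = np.where(residual > 5 * sigma)[0] # points above the detection level
print(candidates)                              # consecutive indices near 150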

[ascl:2203.009] fleck: Fast starspot rotational modulation light curves

fleck simulates rotational modulation of stars due to starspots and is used to overcome the degeneracies and determine starspot coverages accurately for a sample of young stars. The code simulates starspots as circular dark regions on the surfaces of rotating stars, accounting for foreshortening towards the limb, and limb darkening. Supplied with the latitudes, longitudes, and radii of spots and the stellar inclinations from which each star is viewed, fleck takes advantage of efficient array broadcasting with numpy to return approximate light curves. For example, the code can compute rotational modulation curves sampled at ten points throughout the rotation of each star for one million stars, with two unique spots each, all viewed at unique inclinations, in about 10 seconds on a 2.5 GHz Intel Core i7 processor. This rapid computation of light curves en masse makes it possible to measure starspot distributions with techniques such as Approximate Bayesian Computation.
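
The broadcasting trick fleck exploits can be conveyed with a much-simplified spot model (dark circular patches with foreshortening, no limb darkening). The sketch below is an illustration of the array-shape idea, not fleck's API; the geometry is deliberately minimal.

import numpy as np

rng = np.random.default_rng(0)
n_stars, n_spots, n_phases = 10000, 2, 10

lat = rng.uniform(-np.pi / 2, np.pi / 2, (n_stars, n_spots, 1))
lon = rng.uniform(0.0, 2 * np.pi, (n_stars, n_spots, 1))
r_spot = rng.uniform(0.05, 0.2, (n_stars, n_spots, 1))
incl = rng.uniform(0.0, np.pi / 2, (n_stars, 1, 1))    # one inclination per star
phase = np.linspace(0.0, 2 * np.pi, n_phases)          # rotation phases

# cosine of the angle between each spot's surface normal and the line of
# sight; broadcasting yields shape (n_stars, n_spots, n_phases) at once
mu = (np.sin(incl) * np.cos(lat) * np.cos(lon + phase)
      + np.cos(incl) * np.sin(lat))
visible = np.clip(mu, 0.0, None)                       # spots behind the limb vanish
flux = 1.0 - (r_spot**2 * visible).sum(axis=1)         # (n_stars, n_phases)
print(flux.shape)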

[ascl:2007.011] FleCSPH: Parallel and distributed SPH implementation based on the FleCSI

FleCSPH is a multi-physics compact application that exercises FleCSI parallel data structures for tree-based particle methods. In particular, the software implements a smoothed-particle hydrodynamics (SPH) solver for the solution of Lagrangian problems in astrophysics and cosmology. FleCSPH includes support for gravitational forces using the fast multipole method (FMM). Particle affinity and gravitation are handled using the parallel implementation of the octree data structure provided by FleCSI.

[ascl:2009.019] FLEET: Finding Luminous and Exotic Extragalactic Transients

FLEET (Finding Luminous and Exotic Extragalactic Transients) is a machine-learning pipeline that predicts the probability that a transient is a superluminous supernova. With light curve and contextual host galaxy information, it uses a random forest algorithm to rapidly identify SLSN-I without the need for redshift information.
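
The classification step can be pictured as a random forest over light-curve and host-galaxy features. The sketch below uses scikit-learn with placeholder features and toy labels invented for illustration; it is not FLEET's actual feature set or training data.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(5)
# columns: e.g. decline rate, transient-host offset, host color (placeholders)
X = rng.normal(size=(500, 3))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.5, 500)) > 1.0   # toy labels

clf = RandomForestClassifier(n_estimators=200).fit(X, y)
new_event = rng.normal(size=(1, 3))
print("P(SLSN-I) =", clf.predict_proba(new_event)[0, 1])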

[ascl:1612.006] flexCE: Flexible one-zone chemical evolution code

flexCE (flexible Chemical Evolution) computes the evolution of a one-zone chemical evolution model with inflow and outflow in which gas is instantaneously and completely mixed. It can be used to demonstrate the sensitivity of chemical evolution models to parameter variations, show the effect of CCSN yields on chemical evolution models, and reproduce the 2D distribution in [O/Fe]-[Fe/H] by mixing models with a range of inflow and outflow histories. It can also post-process cosmological simulations to predict element distributions.

[ascl:1107.004] Flexible DM-NRG

This code combines the spectral sum-conserving methods of Weichselbaum and von Delft and of Peters, Pruschke and Anders (both relying upon the complete basis set construction of Anders and Schiller) with the use of non-Abelian symmetries in a flexible manner. Essentially any non-Abelian symmetry can be taught to the code, and any number of such symmetries can be used throughout the computation for any density of states, to compute the real and imaginary parts of any local operator's correlation function or any thermodynamical expectation value. The code works at both zero and finite temperatures.

[ascl:1205.006] Flexion: IDL code for calculating gravitational flexion

Gravitational flexion is a technique for measuring second-order gravitational lensing signals in background galaxies and radio lobes. Unlike shear, flexion directly probes variations of the potential field. Moreover, the information contained in flexion is orthogonal to what is found in the shear. Thus, we get the information "for free."

A newer version of the code, Lenser, is available here: https://github.com/DrexelLenser/Lenser

[ascl:1411.016] Flicker: Mean stellar densities from flicker

Flicker calculates the mean stellar density of a star by inputting the flicker observed in a photometric time series. Written in Fortran90, its output may be used as an informative prior on stellar density when fitting transit light curves.

[ascl:2406.015] FLORAH: Galaxy merger tree generator with machine learning

FLORAH generates the assembly history of halos using a recurrent neural network and normalizing flow model. The machine-learning framework can be used to combine multiple generated networks that are trained on a suite of simulations with different redshift ranges and mass resolutions. Depending on the training, the code recovers key properties, including the time evolution of mass and concentration, and the galaxy stellar mass versus halo mass relation and its residuals. FLORAH also reproduces the dependence of clustering on properties other than mass, and is a step towards a machine learning-based framework for planting full merger trees.

[ascl:1210.007] FLUKA: Fully integrated particle physics Monte Carlo simulation package

FLUKA (FLUktuierende KAskade) is a general-purpose tool for calculations of particle transport and interactions with matter. FLUKA can simulate with high accuracy the interaction and propagation in matter of about 60 different particles, including photons and electrons from 1 keV to thousands of TeV, neutrinos, muons of any energy, hadrons of energies up to 20 TeV (up to 10 PeV by linking FLUKA with the DPMJET code) and all the corresponding antiparticles, neutrons down to thermal energies and heavy ions. The program, written in Fortran, can also transport polarised photons (e.g., synchrotron radiation) and optical photons. Time evolution and tracking of emitted radiation from unstable residual nuclei can be performed online.

[ascl:1105.008] Flux Tube Model

This Fortran code computes magnetohydrostatic flux tubes and sheets according to the method of Steiner, Pneuman, & Stenflo (1986) A&A 170, 126-137. The code has many parameters contained in one input file that are easily modified. Extensive documentation is provided in README files.

[ascl:1712.010] Flux Tube: Solar model

Flux Tube is a nonlinear, two-dimensional, numerical simulation of magneto-acoustic wave propagation in the photosphere and chromosphere of small-scale flux tubes with internal structure. Waves with realistic periods of three to five minutes are studied, after horizontal and vertical oscillatory perturbations are applied to the equilibrium model. Spurious reflections of shock waves from the upper boundary are minimized by a special boundary condition.

[ascl:2110.015] Flux: Julia machine learning library

Flux provides an elegant approach to machine learning. Written in Julia, it provides lightweight abstractions on top of Julia's native GPU and AD support. It has many useful tools built in, but also lets you use the full power of the Julia language where you need it. Flux has relatively few explicit APIs for features like regularization or embeddings; instead, writing down the mathematical form works and is fast. The package works well with Julia libraries from data frames and images to differential equation solvers, so building complex data processing pipelines that integrate Flux models is straightforward.

[ascl:1405.010] FLUXES: Position and flux density of planets

FLUXES calculates approximate topocentric positions of the planets and also integrated flux densities of five of them at several wavelengths. These provide calibration information at the effective frequencies and beam-sizes employed by the UKT14, SCUBA and SCUBA-2 receivers on the JCMT telescope based on Mauna Kea, Hawaii. FLUXES is part of the bundle that comprises the Starlink multi-purpose astronomy software package (ascl:1110.012).

[ascl:1011.019] FLY: MPI-2 High Resolution code for LSS Cosmological Simulations

Cosmological simulations of structure and galaxy formation have played a fundamental role in the study of the origin, formation, and evolution of the Universe. These studies have improved enormously with the use of supercomputers and parallel systems and, more recently, grid-based systems and Linux clusters. FLY is a tree N-body parallel code that runs on PC Linux clusters using the one-sided communication paradigm of MPI-2; it is included in the Computer Physics Communications Program Library. This version was developed using the Linux cluster of CINECA, an IBM cluster with 1024 Intel Xeon Pentium IV 3.0 GHz processors. A 64 million particle simulation runs in less than 15 minutes per timestep, and the code scales well with the number of processors. This makes FLY suitable for very large N-body simulations, with more than 10^9 particles, at the high resolution of a pure tree code.

[ascl:2107.004] FoF-Halo-finder: Halo location and size

FoF-Halo-finder identifies the location and size of collapsed objects (halos) within a cosmological simulation box. These halos host the luminous objects in the Universe. Written in C, it is based on the friends-of-friends (FoF) algorithm, and is designed to work with PMN-body (ascl:2107.003).

[ascl:2407.012] Fof: Friends-of-friends code to find groups

Fof uses the friends-of-friends method to find groups. A particle belongs to a friends-of-friends group if it is within some linking length of any other particle in the group. After all such groups are found, those with less than a specified minimum number of group members are rejected. The program takes input files in the TIPSY (ascl:1111.015) binary format and produces a single ASCII output file called fof.grp. This output file is in the TIPSY array format and contains the group number to which each particle belongs. A group number of zero means that the particle does not belong to a group. The fof.grp file can be read in by TIPSY and used to color each particle by group number to visualize the groups. Simulations with periodic boundary conditions can also be handled by fof by specifying the period in each dimension on the command line.
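
The algorithm itself is compact. The sketch below implements plain friends-of-friends grouping with a KD-tree and union-find, including the minimum-membership cut, but omits Fof's TIPSY I/O and periodic boundary handling; the linking length and group-size cut are arbitrary values chosen for the example.

import numpy as np
from scipy.spatial import cKDTree

def fof_groups(pos, linking_length, min_members=8):
    # one group id per particle, 0 meaning ungrouped (as in fof.grp)
    parent = np.arange(len(pos))

    def find(i):                                # union-find root with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i, j in cKDTree(pos).query_pairs(linking_length):
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj                     # merge the two friend chains

    roots = np.array([find(i) for i in range(len(pos))])
    ids = np.zeros(len(pos), dtype=int)
    next_id = 1
    for root, count in zip(*np.unique(roots, return_counts=True)):
        if count >= min_members:                # reject groups below the cut
            ids[roots == root] = next_id
            next_id += 1
    return ids

pos = np.random.default_rng(1).random((2000, 3))
print(np.bincount(fof_groups(pos, linking_length=0.03)))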

[ascl:2410.006] forcepho: Generative modeling galaxy photometry for JWST

Forcepho infers the fluxes and shapes of galaxies from astronomical images. It models the appearance of multiple sources in multiple bands simultaneously and compares to observed data via a likelihood function. Gradients of this likelihood allow for efficient maximization of the posterior probability or sampling of the posterior probability distribution via Hamiltonian Monte Carlo. The model intrinsic galaxy shapes and positions are shared across the different bands, but the fluxes are fit separately for each band. Forcepho does not perform detection; initial locations and (very rough) parameter estimates must be supplied by the user.

[ascl:2312.010] FORECAST: Realistic astronomical image and galaxy survey generator

FORECAST generates realistic astronomical images and galaxy surveys by forward modeling the output snapshot of any hydrodynamical cosmological simulation. It exploits the snapshot by constructing a lightcone centered on the observer's position; the code computes the observed fluxes of each simulated stellar element, modeled as a Single Stellar Population (SSP), in any chosen set of pass-band filters, including k-correction, IGM absorption, and dust attenuation. These fluxes are then used to create an image on a grid of pixels, to which observational features such as background noise and PSF blurring can be added. FORECAST provides customizable options for filters, size of the field of view, and survey parameters, thus allowing the synthetic images to be tailored for specific research requirements.

[submitted] forecaster-plus

An internally overhauled but fundamentally similar version of Forecaster by Jingjing Chen and David Kipping, originally presented in arXiv:1603.08614 and hosted at https://github.com/chenjj2/forecaster.

The model itself has not changed: no new data were included and the hyperparameter file was not regenerated. All functions were rewritten to take advantage of NumPy vectorization and some additional user features were added. The package can now be installed via pip.

[ascl:1701.007] Forecaster: Mass and radii of planets predictor

Forecaster predicts the mass (or radius) from the radius (or mass) for objects covering nine orders-of-magnitude in mass. It is an unbiased forecasting model built upon a probabilistic mass-radius relation conditioned on a sample of 316 well-constrained objects. It accounts for observational errors, hyper-parameter uncertainties and the intrinsic dispersions observed in the calibration sample.
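
The flavor of such a probabilistic forecast can be conveyed with a toy power-law mass-radius relation plus lognormal intrinsic scatter. The slope, normalization, and scatter below are invented for illustration and are not Forecaster's fitted hyperparameters.

import numpy as np

rng = np.random.default_rng(0)

def forecast_radius(mass, slope=0.28, norm=1.0, scatter_dex=0.06, n=10000):
    # lognormal draws about a power law: log10 R = log10(norm) + slope*log10 M
    log_r = np.log10(norm) + slope * np.log10(mass)
    draws = 10 ** rng.normal(log_r, scatter_dex, n)
    return np.percentile(draws, [16, 50, 84])   # median and 1-sigma interval

print(forecast_radius(5.0))                     # radius forecast for a 5 Earth-mass planet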

[ascl:2407.004] Forklens: Deep learning weak lensing shear

Forklens measures weak gravitational lensing signals using a deep-learning method. It measures galaxy shapes (shear) and corrects for the smearing of the point spread function (PSF, an effect of the atmosphere and/or the optical instrument). It contains a custom CNN architecture with two input branches, fed with the observed galaxy image and the PSF image, and predicts several features of the galaxy, including shape, magnitude, and size. Simulation in the code is built directly upon GalSim (ascl:1402.009).

[ascl:1912.009] FORSTAND: Flexible ORbit Superposition Toolbox for ANalyzing Dynamical models

FORSTAND constructs dynamical models of galaxies using the Schwarzschild orbit-superposition method; the method is available as part of the AGAMA (ascl:1805.008) framework. The models created are constrained by line-of-sight kinematic observations and are applicable to galaxies of all morphological types, including disks and triaxial rotating bars.

[ascl:1904.011] FortesFit: Flexible spectral energy distribution modelling with a Bayesian backbone

FortesFit efficiently explores and discriminates between various spectral energy distribution (SED) models of astronomical sources. The Python package adds Bayesian inference to a framework that is designed for the easy incorporation and relative assessment of SED models, various fitting engines, and a powerful treatment of priors, especially those that may arise from non-traditional wave-bands such as X-ray or radio emission, or from spectroscopic measurements. It has been designed with particular emphasis on scalability to large datasets and surveys.

[ascl:1405.007] FORWARD: Forward modeling of coronal observables

FORWARD forward models various coronal observables and can access and compare existing data. Given a coronal model, it can produce many different synthetic observables (including Stokes polarimetry), as well as plots of model plasma properties (density, magnetic field, etc.). It uses the CHIANTI database (ascl:9911.004) and CLE polarimetry synthesis code, works with numerical model datacubes, interfaces with the PFSS module of SolarSoft (ascl:1208.013), includes several analytic models, and connects to the Virtual Solar Observatory for downloading data in a format directly comparable to model predictions.

[ascl:2102.015] ForwardDiff: Forward mode automatic differentiation for Julia

ForwardDiff implements methods to take derivatives, gradients, Jacobians, Hessians, and higher-order derivatives of native Julia functions (or any callable object, really) using forward-mode automatic differentiation (AD). While performance can vary depending on the functions you evaluate, the algorithms implemented by ForwardDiff generally outperform non-AD algorithms in both speed and accuracy.
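
ForwardDiff itself is a Julia package; for illustration, the dual-number mechanics underlying forward-mode AD can be sketched in a few lines of Python. This shows the general technique, not ForwardDiff's API.

import math

class Dual:
    # number a + b*eps with eps**2 == 0; the b slot carries the derivative
    def __init__(self, a, b=0.0):
        self.a, self.b = a, b
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.a + o.a, self.b + o.b)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.a * o.a, self.a * o.b + self.b * o.a)  # product rule
    __rmul__ = __mul__
    def sin(self):
        return Dual(math.sin(self.a), math.cos(self.a) * self.b)  # chain rule

def derivative(f, x):
    return f(Dual(x, 1.0)).b            # seed the derivative slot with 1

print(derivative(lambda x: (3 * x * x).sin(), 2.0))   # d/dx sin(3x^2) = 12 cos(12) at x=2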

[ascl:1204.004] Fosite: 2D advection problem solver

Fosite implements a method for the solution of hyperbolic conservation laws in curvilinear orthogonal coordinates. It is written in Fortran 90/95 integrating object-oriented (OO) design patterns, incorporating the flexibility of OO-programming into Fortran 90/95 while preserving the efficiency of the numerical computation. Although mainly intended for CFD simulations, Fosite's modular design allows its application to other advection problems as well. Unlike other two-dimensional implementations of finite volume methods, it accounts for local conservation of specific angular momentum. This feature turns the program into a perfect tool for astrophysical simulations where angular momentum transport is crucial. Angular momentum transport is not only implemented for standard coordinate systems with rotational symmetry (i.e. cylindrical, spherical) but also for a general set of orthogonal coordinate systems allowing the use of exotic curvilinear meshes (e.g. oblate-spheroidal). As in the case of the advection problem, this part of the software is also kept modular, therefore new geometries may be incorporated into the framework in a straightforward manner.

[ascl:1610.012] Fourierdimredn: Fourier dimensionality reduction model for interferometric imaging

Fourierdimredn (Fourier dimensionality reduction) implements Fourier-based dimensionality reduction of interferometric data. Written in Matlab, it derives the theoretically optimal dimensionality reduction operator from a singular value decomposition perspective of the measurement operator. Fourierdimredn ensures a fast implementation of the full measurement operator and also preserves the i.i.d. Gaussian properties of the original measurement noise.

[ascl:1806.030] foxi: Forecast Observations and their eXpected Information

Using information theory and Bayesian inference, the foxi Python package computes a suite of expected utilities given futuristic observations in a flexible and user-friendly way. foxi requires a set of n-dim prior samples for each model and one set of n-dim samples from the current data, and can calculate the expected ln-Bayes factor between models, decisiveness between models and its maximum-likelihood averaged equivalent, the decisivity, and the expected Kullback-Leibler divergence (i.e., the expected information gain of the futuristic dataset). The package offers flexible inputs and is designed for all-in-one script calculation, or an initial cluster run followed by local machine post-processing, which should make large jobs quite manageable, subject to resources. It also includes features such as LaTeX tables and plot-making for post-data analysis visuals and convenience of presentation.

[ascl:1010.002] fpack: FITS Image Compression Program

fpack is a utility program for optimally compressing images in the FITS data format. The associated funpack program will restore the compressed file back to its original state. These programs may be run from the host operating system command line and are analogous to the gzip and gunzip utility programs, except that they are specifically optimized for FITS format images and offer a wider choice of compression options.

fpack uses the tiled image compression convention for storing the compressed images. This convention can in principle support any number of different compression algorithms; currently GZIP, Rice, Hcompress, and the IRAF pixel list compression algorithms have been implemented. (A short Python sketch of writing and reading this convention follows the list below.)

The main advantages of fpack compared to the commonly used technique of externally compressing the whole FITS file with gzip are:

- It is generally faster and offers better compression than gzip.
- The FITS header keywords remain uncompressed for fast access.
- Each HDU of a multi-extension FITS file is compressed separately, so it is not necessary to uncompress the entire file to read a single image in a multi-extension file.
- Dividing the image into tiles before compression enables faster access to small subsections of the image.
- The compressed image is itself a valid FITS file and can be manipulated by other general FITS utility software.
- Lossy compression can be used for much higher compression in cases where it is not necessary to exactly preserve the original image.
- The CHECKSUM keywords are automatically updated to help verify the integrity of the files.
- Software that supports the tiled image compression technique can directly read and write the FITS images in their compressed form.
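
As an illustration of the tiled compression convention (assuming astropy is installed), the same Rice-compressed format that fpack produces can be written and read from Python:

import numpy as np
from astropy.io import fits

# integer image data; Rice compression of integers is lossless
data = np.random.default_rng(0).poisson(100, (512, 512)).astype(np.int32)

# write a tile-compressed image inside a standard FITS container,
# analogous to running fpack on an uncompressed image
fits.CompImageHDU(data, compression_type='RICE_1').writeto('example.fits',
                                                           overwrite=True)

# the result is itself a valid FITS file; decompression is transparent
restored = fits.getdata('example.fits')
print((restored == data).all())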

[ascl:2311.010] FPFS: Fourier Power Function Shapelets

FPFS (Fourier Power Function Shapelets) is a fast, accurate shear estimator for the shear responses of galaxy shape, flux, and detection. Utilizing leading-order perturbations of shear (a vector perturbation) and image noise (a tensor perturbation), the code determines shear and noise responses for both measurements and detections. Unlike methods that distort each observed galaxy repeatedly, the software employs analytical shear responses of select basis functions, including Shapelets basis and peak basis. FPFS is efficient and can process approximately 1,000 galaxies within a single CPU second, and maintains a multiplicative shear estimation bias below 0.5% even amidst blending challenges.

[ascl:2001.004] FragMent: Fragmentation techniques for studying filaments

FragMent studies fragmentation in filaments by collating a number of different techniques, including nearest neighbour separations, minimum spanning tree, two-point correlation function, and Fourier power spectrum. It also performs model selection using a frequentist and Bayesian approach to find the best descriptor of a filament's fragmentation. While the code was designed to investigate filament fragmentation, the functions are general and may be used for any set of 2D points to study more general cases of fragmentation.
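
Two of the collated techniques, nearest-neighbour separations and the minimum spanning tree, can be sketched with scipy on a toy set of 2D core positions. This illustrates the statistics themselves, not FragMent's own interface.

import numpy as np
from scipy.spatial import cKDTree, distance_matrix
from scipy.sparse.csgraph import minimum_spanning_tree

cores = np.random.default_rng(3).random((40, 2))        # toy core positions

# nearest-neighbour separations (k=2: the first neighbour is the point itself)
d, _ = cKDTree(cores).query(cores, k=2)
nn_sep = d[:, 1]

# minimum spanning tree edge lengths between the cores
mst = minimum_spanning_tree(distance_matrix(cores, cores))
print(nn_sep.mean(), np.median(mst.data))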

[ascl:2109.010] Frankenstein: Flux reconstructor

Frankenstein (frank) fits the 1D radial brightness profile of an interferometric source given a set of visibilities. It uses a Gaussian process that performs the fit in <1 minute for a typical protoplanetary disc continuum dataset. Frankenstein can perform a fit in two ways: by running the code directly from the terminal, or by using the code as a Python module.

[ascl:2306.018] FRB: Fast Radio Burst calculations, estimations, and analysis

FRB performs calculations, estimations, analysis, and Bayesian inferences for Fast Radio Bursts, including dispersion measure and emission measure calculations, derived properties and spectra, and Galactic RM.

[ascl:2011.011] frbcat: Fast Radio Burst CATalog querying package

frbcat queries and downloads Fast Radio Burst (FRB) data from the FRBCAT Catalogue web page, the CHIME-REPEATERS web page and the Transient Name Server (TNS). It is written in Python and can be installed using pip.

[submitted] frbmclust: Model-independent classification of events from the first CHIME/FRB Fast Radio Burst catalog

The CHIME/FRB instrument has recently published a catalog containing about five hundred fast radio bursts (FRBs), including their spectra and several reconstructed properties, such as signal widths and amplitudes. frbmclust implements a model-independent approach for the classification of these bursts using cross-correlation and clustering algorithms applied to one-dimensional intensity profiles, i.e., to amplitudes as a function of time averaged over frequency, and is used to classify bursts featuring different waveform morphologies.
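
A toy version of the approach (with assumed details: synthetic Gaussian profiles and single-linkage clustering) aligns standardized 1D profiles by cross-correlation and clusters on a distance built from the peak correlation; it is a sketch of the idea, not the frbmclust implementation.

import numpy as np
from scipy.signal import correlate
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(7)
t = np.arange(256)

def burst(width, shift):                        # noisy Gaussian pulse profile
    return np.exp(-0.5 * ((t - 128 - shift) / width) ** 2) \
        + rng.normal(0, 0.05, t.size)

profiles = [burst(w, rng.integers(-20, 20)) for w in [4] * 5 + [20] * 5]

n = len(profiles)
dist = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        a, b = [(p - p.mean()) / p.std() for p in (profiles[i], profiles[j])]
        c = correlate(a, b, mode='full').max() / t.size   # shift-invariant similarity
        dist[i, j] = dist[j, i] = 1.0 - c

condensed = dist[np.triu_indices(n, 1)]
labels = fcluster(linkage(condensed), t=2, criterion='maxclust')
print(labels)                                   # narrow vs wide morphologies separate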

[ascl:1911.009] frbpoppy: Fast radio burst population synthesis in Python

frbpoppy conducts fast radio burst population synthesis and continues the work of PSRPOP (ascl:1107.019) and PsrPopPy (ascl:1501.006) in the realm of FRBs. The code replicates observed FRB detection rates and FRB distributions in three steps. It first simulates a cosmic population of one-off FRBs and allows the user to select options such as models for source number density, cosmology, DM host/IGM/Milky Way, luminosity functions, and emission bands as well as maximum redshift and size of the FRB population. The code then generates a survey by adopting a beam pattern using various survey parameters, among them telescope gain, sampling time, receiver temperature, central frequency, channel bandwidth, number of polarizations, and survey region limits. Finally, frbpoppy convolves the generated intrinsic population with the generated survey to simulate an observed FRB population.

[ascl:2106.028] FRBSTATS: A web-based platform for visualization of fast radio burst properties

FRBSTATS provides a user-friendly web interface to an open-access catalog of fast radio bursts (FRBs) published to date, along with a highly accurate statistical overview of the observed events. The platform supports the retrieval of fundamental FRB data either directly through the FRBSTATS API, or in the form of a CSV/JSON-parsed database, while enabling the plotting of parameter distributions for a variety of visualizations. These features allow researchers to conduct more thorough population studies while narrowing down the list of astrophysical models describing the origins and emission mechanisms behind these sources. Lastly, the platform provides a visualization tool that illustrates associations between primary bursts and repeaters, complementing basic repeater information provided by the Transient Name Server.

[ascl:1906.003] FREDDA: A fast, real-time engine for de-dispersing amplitudes

FREDDA detects Fast Radio Bursts (FRBs) in power data. It is optimized for use at ASKAP, namely GHz frequencies with 10s of beams, 100s of channels and millisecond integration times. The code is written in CUDA for NVIDIA Graphics Processing Units.

[ascl:1610.014] Freddi: Fast Rise Exponential Decay accretion Disk model Implementation

Freddi (Fast Rise Exponential Decay: accretion Disk model Implementation) solves 1-D evolution equations of the Shakura-Sunyaev accretion disk. It simulates fast rise exponential decay (FRED) light curves of low mass X-ray binaries (LMXBs). The basic equation of the viscous evolution relates the surface density and viscous stresses and is of diffusion type; evolution of the accretion rate can be found on solving the equation. The distribution of viscous stresses defines the emission from the source. The standard model for the accretion disk is assumed; the inner boundary of the disk is at the ISCO or can be explicitly set. The boundary conditions in the disk are zero stress at the inner boundary and zero accretion rate at the outer boundary. The conditions are suitable during the outbursts in X-ray binary transients with black holes. In a binary system, the accretion disk is radially confined. In Freddi, the outer radius of the disk can be set explicitly or calculated as the position of the tidal truncation radius.

[ascl:1211.002] FreeEOS: Equation of State for stellar interiors calculations

FreeEOS is a Fortran library for rapidly calculating the equation of state using an efficient free-energy minimization technique that is suitable for physical conditions in stellar interiors. Converged FreeEOS solutions can be reliably determined for the first time for physical conditions occurring in stellar models with masses between 0.1 M☉ and the hydrogen-burning limit near 0.07 M☉ and hot brown-dwarf models just below that limit. However, an initial survey of results for those conditions showed EOS discontinuities (plasma phase transitions) and other problems which will need to be addressed in future work by adjusting the interaction radii characterizing the pressure ionization used for the FreeEOS calculations.

[ascl:2104.011] Freeture: Free software to capTure meteors

FreeTure monitors images from GigE all-sky cameras to detect and record falling stars and fireballs. Originally developed for the FRIPON (Fireball Recovery and InterPlanetary Observation Network) project, which sought to cover all of France with 100 fish-eye cameras, it can be used by any station that has a GigE camera.

[ascl:1508.004] FRELLED: FITS Realtime Explorer of Low Latency in Every Dimension

FRELLED (FITS Realtime Explorer of Low Latency in Every Dimension) creates 3D images in real time from 3D FITS files and is written in Python for the 3D graphics suite Blender. Users can interactively generate masks around regions of arbitrary geometry and use them to catalog sources, hide regions, and perform basic analysis (e.g., image statistics within the selected region, generate contour plots, query NED and the SDSS). World coordinates are supported and multi-volume rendering is possible. FRELLED is designed for viewing HI data cubes and provides interfaces to a number of commonly-used MIRIAD (ascl:1106.007) tasks (e.g., mbspect); however, many of its features are suitable for any type of data set. It also includes an n-body particle viewer with the ability to display 3D vector information, as well as the ability to render time series movies of multiple FITS files and set up simple turntable rotation movies for single files.

[ascl:2305.001] FRIDDA: Fisher foRecast code for combIned reDshift Drift and Alpha

FRIDDA forecasts the cosmological impact of measurements of the redshift drift and the fine-structure constant (alpha), as well as their combination. The code is based on Fisher Matrix Analysis techniques and works for various fiducial cosmological models. Though designed for the ArmazoNes high Dispersion Echelle Spectrograph (ANDES), it is easily adaptable to other instruments with similar scientific goals.

[ascl:2309.019] FRISBHEE: FRIedmann Solver for Black Hole Evaporation in the Early-universe

FRISBHEE (FRIedmann Solver for Black Hole Evaporation in the Early-universe) solves the Friedmann-Boltzmann equations for primordial black holes (PBHs) + SM radiation + BSM models. Considering the collapse of density fluctuations as the PBH formation mechanism, the code handles monochromatic and extended mass and spin distributions. FRISBHEE can return the full evolution of the PBH, SM, and dark radiation comoving energy densities, together with the evolution of the PBH mass and spin as a function of the log10 of the scale factor, and can determine the relic abundance of dark matter produced from BH evaporation for monochromatic and extended distributions.

[ascl:1406.006] FROG: Time-series analysis

FROG performs time series analysis and display. It provides a simple user interface for astronomers wanting to do time-domain astrophysics but still offers the powerful features found in packages such as PERIOD (ascl:1406.005). FROG includes a number of tools for manipulation of time series. Among other things, the user can combine individual time series, detrend series (multiple methods) and perform basic arithmetic functions. The data can also be exported directly into the TOPCAT (ascl:1101.010) application for further manipulation if needed.

[ascl:1911.010] Fruitbat: Fast radio burst redshift estimation

Fruitbat estimates the redshift of Fast Radio Bursts (FRBs) from their dispersion measure. The code combines various dispersion measure (DM) and redshift relations with the YMW16 galactic dispersion measure model into a single easy-to-use API.

[ascl:1506.006] fsclean: Faraday Synthesis CLEAN imager

Fsclean produces 3D Faraday spectra using the Faraday synthesis method, transforming directly from multi-frequency visibility data to the Faraday depth-sky plane space. Deconvolution is accomplished using the CLEAN algorithm, and the package includes Clark and Högbom style CLEAN algorithms. Fsclean reads in MeasurementSet visibility data and produces HDF5 formatted images; it handles images and data of arbitrary size, using scratch HDF5 files as buffers for data that is not being immediately processed, and is limited only by available disk space.

[ascl:1710.012] FSFE: Fake Spectra Flux Extractor

The fake spectra flux extractor generates simulated quasar absorption spectra from a particle- or adaptive mesh-based hydrodynamic simulation. It is implemented as a Python module. It can produce both hydrogen and metal line spectra if the simulation includes metals; the Cloudy table for metal ionization fractions is included. Unlike earlier spectral generation codes, it produces absorption from each particle close to the sight-line individually, rather than first producing an average density in each spectral pixel, thus substantially preserving more of the small-scale velocity structure of the gas. The code supports both Gadget (ascl:0003.001) and AREPO (ascl:1909.010).

[ascl:1010.043] FSPS: Flexible Stellar Population Synthesis

FSPS is a flexible SPS package that allows the user to compute simple stellar populations (SSPs) for a range of IMFs and metallicities, and for a variety of assumptions regarding the morphology of the horizontal branch, the blue straggler population, the post-AGB phase, and the location in the HR diagram of the TP-AGB phase. From these SSPs the user may then generate composite stellar populations (CSPs) for a variety of star formation histories (SFHs) and dust attenuation prescriptions. Outputs include the "observed" spectra and magnitudes of the SSPs and CSPs at arbitrary redshift. In addition to these Fortran routines, several IDL routines are provided that allow easy manipulation of the output. FSPS was designed with the intention that the user would make full use of the provided Fortran routines. However, the full FSPS package is quite large, and requires some time for the user to become familiar with all of the options and syntax. Some users may only need SSPs for a range of metallicities and IMFs; for such users, standard SSP sets for several IMFs, evolutionary tracks, and spectral libraries are also available.

[ascl:1711.003] FTbg: Background removal using Fourier Transform

FTbg performs Fourier transforms on FITS images and separates low- and high-spatial frequency components by a user-specified cut. Both components are then inverse Fourier transformed back to image domain. FTbg can remove large-scale background/foreground emission in many astrophysical applications. FTbg has been designed to identify and remove Galactic background emission in Herschel/Hi-GAL continuum images, but it is applicable to any other (e.g., Planck) images when background/foreground emission is a concern.

[ascl:9912.002] FTOOLS: A general package of software to manipulate FITS files

FTOOLS, a highly modular collection of utilities for processing and analyzing data in the FITS (Flexible Image Transport System) format, has been developed in support of the HEASARC (High Energy Astrophysics Research Archive Center) at NASA's Goddard Space Flight Center. The FTOOLS package contains many utility programs which perform modular tasks on any FITS image or table, as well as higher-level analysis programs designed specifically for data from current and past high energy astrophysics missions. The utility programs for FITS tables are especially rich and powerful, and provide functions for presentation of file contents, extraction of specific rows or columns, appending or merging tables, binning values in a column or selecting subsets of rows based on a boolean expression. Individual FTOOLS programs can easily be chained together in scripts to achieve more complex operations such as the generation and displaying of spectra or light curves. FTOOLS development began in 1991 and has produced the main set of data analysis software for the current ASCA and RXTE space missions and for other archival sets of X-ray and gamma-ray data. The FTOOLS software package is supported on most UNIX platforms and on Windows machines. The user interface is controlled by standard parameter files that are very similar to those used by IRAF. The package is self-documenting through a standalone help task called fhelp. Software is written in ANSI C and FORTRAN to provide portability across most computer systems. The data format dependencies between hardware platforms are isolated through the FITSIO library package.

[ascl:2112.025] FTP: Fast Template Periodogram

The Fast Template Periodogram extends the Generalised Lomb-Scargle periodogram (Zechmeister and Kürster 2009) to arbitrary (periodic) signal shapes. A template is first approximated by a truncated Fourier series of length H. The nonequispaced fast Fourier transform (NFFT) is used to efficiently compute frequency-dependent sums, so template fitting can be done in N log N time, improving existing algorithms by an order of magnitude for even small datasets. The FTP can be used in conjunction with gradient descent to accelerate a non-linear model fit, or be used in place of the multi-harmonic periodogram for non-sinusoidal signals with a priori known shapes.
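
Schematically (notation adapted for this summary, not taken verbatim from the FTP paper), the template and the model fitted at each trial frequency omega are

T(\varphi) \approx \sum_{n=1}^{H} \left[ a_n \cos(2\pi n\varphi) + b_n \sin(2\pi n\varphi) \right],
\qquad
\hat{y}(t \mid \omega) = \theta_1 \, T(\omega t - \theta_2) + \theta_3 ,

where the best-fitting amplitude, phase shift, and offset (theta_1, theta_2, theta_3) are found at every frequency.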

[ascl:2004.011] FUNDPAR: Deriving FUNDamental PARameters from equivalent widths

FUNDPAR determines fundamental parameters of solar-type stars from the equivalent widths of Fe I and Fe II lines. The code uses solar-scaled ATLAS9 model atmospheres with NEWODF opacities, together with the 2009 version of the MOOG (ascl:1202.009) program. Parameter files control different details, such as the mixing-length parameter, the overshooting, and the damping of the lines. FUNDPAR also derives the uncertainties of the parameters.

[ascl:1112.002] Funtools: FITS Users Need Tools

Funtools is a "minimal buy-in" FITS library and utility package developed at the High Energy Astrophysics Division of SAO. The Funtools library provides simplified access to a wide array of file types: standard astronomical FITS images and binary tables, raw arrays and binary event lists, and even tables of ASCII column data. A sophisticated region filtering library (compatible with ds9) filters images and tables using boolean operations between geometric shapes, supports world coordinates, and more. Funtools also supports advanced capabilities such as optimized data searching using index files.

Because Funtools consists of a library and a set of user programs, it is most appropriately built from source. Funtools has been ported to Solaris, Linux, LinuxPPC, SGI, Alpha OSF1, Mac OSX (darwin) and Windows 98/NT/2000/XP. Once the source code tar file is retrieved, Funtools can be built and installed easily using standard commands.

[ascl:1205.005] Fv: Interactive FITS file editor

Fv is an easy-to-use graphical program for viewing and editing any FITS format image or table. The Fv software is small, completely self-contained and runs on Windows PCs, most Unix platforms and Mac OS-X. Fv also provides a portal into the Hera data analysis service from the HEASARC.

[ascl:1010.015] Fyris Alpha: Computational Fluid Dynamics Code

Fyris Alpha is a high resolution, shock capturing, multi-phase, up-wind Godunov method hydrodynamics code that includes a variable equation of state and optional microphysics such as cooling, gravity and multiple tracer variables. The code has been designed and developed for use primarily in astrophysical applications, such as galactic and interstellar bubbles, hypersonic shocks, and a range of jet phenomena. Fyris Alpha boasts both higher performance and more detailed microphysics than its predecessors, with the aim of producing output that is closer to the observational domain, such as emission line fluxes, and eventually, detailed spectral synthesis. Fyris Alpha is approximately 75,000 lines of C code; it encapsulates the split sweep semi-lagrangian remap PPM method used by ppmlr (in turn developed from VH1, Blondin et al. 1998) but with an improved Riemann solver, which is derived from the exact solver of Gottlieb and Groth (1988), a significantly faster solution than previous solvers. It has a number of optimisations that have improved the speed so that additional calculations needed for multi-phase simulations become practical.

[ascl:2202.001] GA Galaxy: Interacting galaxies model fitter

GA Galaxy fits models of interacting galaxies to synthetic data using a genetic algorithm and custom fitness function. The genetic algorithm is real-coded and uses a mixed Gaussian kernel for mutation. The fitness function incorporates (1) a direct pixel-to-pixel comparison between the target and model images and (2) a comparison of the degree of tidal distortion present in the target and model image, such that target-model pairs which are similarly distorted will have a higher relative fitness. The genetic algorithm is written in Python 2.7 while the simulation code (SPAM: Stellar Particle Animation Module) is written in Fortran 90.

[ascl:1801.011] GABE: Grid And Bubble Evolver

GABE (Grid And Bubble Evolver) evolves scalar fields (and can be adapted to other purposes) on an expanding background for non-canonical and non-linear classical field theory. GABE is based on the Runge-Kutta method.

[ascl:0003.001] GADGET-2: A Code for Cosmological Simulations of Structure Formation

The cosmological simulation code GADGET-2, a new massively parallel TreeSPH code, is capable of following a collisionless fluid with the N-body method, and an ideal gas by means of smoothed particle hydrodynamics (SPH). The implementation of SPH manifestly conserves energy and entropy in regions free of dissipation, while allowing for fully adaptive smoothing lengths. Gravitational forces are computed with a hierarchical multipole expansion, which can optionally be applied in the form of a TreePM algorithm, where only short-range forces are computed with the `tree'-method while long-range forces are determined with Fourier techniques. Time integration is based on a quasi-symplectic scheme where long-range and short-range forces can be integrated with different timesteps. Individual and adaptive short-range timesteps may also be employed. The domain decomposition used in the parallelisation algorithm is based on a space-filling curve, resulting in high flexibility and tree force errors that do not depend on the way the domains are cut. The code is efficient in terms of memory consumption and required communication bandwidth. It has been used to compute the first cosmological N-body simulation with more than 10^10 dark matter particles, reaching a homogeneous spatial dynamic range of 10^5 per dimension in a 3D box. It has also been used to carry out very large cosmological SPH simulations that account for radiative cooling and star formation, reaching total particle numbers of more than 250 million. GADGET-2 is publicly released to the research community.

[ascl:2204.014] GADGET-4: Parallel cosmological N-body and SPH code

GADGET-4 (GAlaxies with Dark matter and Gas intEracT) is a parallel cosmological N-body and SPH code that simulates cosmic structure formation and performs calculations relevant for galaxy evolution and galactic dynamics. It is massively parallel and flexible, and can be applied to a variety of different types of simulations, offering a number of sophisticated simulation algorithms. GADGET-4 supports collisionless simulations and smoothed particle hydrodynamics on massively parallel computers.

The code can be used for plain Newtonian dynamics, or for cosmological integrations in arbitrary cosmologies, both with or without periodic boundary conditions. Stretched periodic boxes, and special cases such as simulations with two periodic dimensions and one non-periodic dimension are supported as well. The modeling of hydrodynamics is optional. The code is adaptive both in space and in time, and its Lagrangian character makes it particularly suitable for simulations of cosmic structure formation. Several post-processing options such as group- and substructure finding, or power spectrum estimation are built in and can be carried out on the fly or applied to existing snapshots. Through a built-in cosmological initial conditions generator, it is also particularly easy to carry out cosmological simulations. In addition, merger trees can be determined directly by the code.

[ascl:1108.005] Gaepsi: Gadget Visualization Toolkit

Gaepsi is a Python extension for visualizing cosmology simulations produced by Gadget. Visualization is the most important facet of Gaepsi, but it also allows data analysis on GADGET simulations with its growing number of physics-related subroutines and constants. Unlike mesh-based schemes, SPH simulations are not directly rasterizable: a splatting process is required to produce raster images from the simulations. Gaepsi produces images of 2-dimensional line-of-sight projections of the simulation. Scalar fields and vector fields are both supported.

Besides the traditional way of slicing a simulation, Gaepsi also has built-in support for the 'survey-like' domain transformation proposed by Carlson & White; an improved implementation is used in Gaepsi. Gaepsi implements an interactive shell for plotting and also exposes its API for batch processing. When compiled with OpenMP, Gaepsi automatically takes advantage of multi-core computers. In interactive mode, Gaepsi is capable of producing images of up to 32000 x 32000 pixels. The user can zoom, pan, and rotate the field with simple commands, and the interactive mode takes full advantage of matplotlib's rich annotating, labeling, and image composition facilities. There are also built-in commands to add objects that are commonly used in cosmology simulations to the figures.

[ascl:2312.032] gaia_tools: Tools for working with Gaia and related data sets

gaia_tools contains codes for working with the ESA/Gaia data and related data sets (APOGEE, GALAH, LAMOST DR2, and RAVE). Written in Python, it includes tools to read catalogs, perform cross-matching, read RVS or XP spectra, and query the Gaia archive. gaia_tools also contains various matching recipes, such as matching APOGEE or APOGEE-RC to Gaia DR2, and RAVE to TGAS (taking into account the epoch difference).

[ascl:1403.024] GAIA: Graphical Astronomy and Image Analysis Tool

GAIA is an image and data-cube display and analysis tool for astronomy. It provides the usual facilities of image display tools, plus more astronomically useful ones such as aperture and optimal photometry, contouring, source detection, surface photometry, arbitrary region analysis, celestial coordinate readout, calibration and modification, grid overlays, blink comparison, defect patching and the ability to query on-line catalogues and image servers. It can also display slices from data-cubes, extract and visualize spectra as well as perform full 3D rendering. GAIA uses the Starlink software environment (ascl:1110.012) and is derived from the ESO SkyCat tool (ascl:1109.019).

[ascl:1707.006] Gala: Galactic astronomy and gravitational dynamics

Gala is a Python package (and Astropy affiliated package) for Galactic astronomy and gravitational dynamics. The bulk of the package centers around implementations of gravitational potentials, numerical integration, nonlinear dynamics, and astronomical velocity transformations (i.e. proper motions). Gala uses the Astropy units and coordinates subpackages extensively to provide a clean, pythonic interface to these features but does any heavy-lifting in C and Cython for speed.
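
A minimal orbit-integration sketch following the pattern in Gala's documentation (treat the exact names and defaults as version-dependent assumptions):

import astropy.units as u
import gala.dynamics as gd
import gala.potential as gp

pot = gp.MilkyWayPotential()                    # built-in composite potential
w0 = gd.PhaseSpacePosition(pos=[8.0, 0, 0] * u.kpc,
                           vel=[0, 220, 0] * u.km / u.s)
orbit = gp.Hamiltonian(pot).integrate_orbit(w0, dt=1 * u.Myr, n_steps=2000)
print(orbit.pericenter(), orbit.apocenter())    # summary statistics of the orbit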

[ascl:1302.011] GALA: Stellar atmospheric parameters and chemical abundances

GALA is a freely distributed Fortran code to derive the atmospheric parameters (temperature, gravity, microturbulent velocity and overall metallicity) and abundances for individual species of stellar spectra using the classical method based on the equivalent widths of metallic lines. The abundances of individual spectral lines are derived by using the WIDTH9 code developed by R. L. Kurucz. GALA is designed to obtain the best model atmosphere, by optimizing temperature, surface gravity, microturbulent velocity and metallicity, after rejecting the discrepant lines. Finally, it computes accurate internal errors for each atmospheric parameter and abundance. The code obtains chemical abundances and atmospheric parameters for large stellar samples quickly, making GALA a useful tool in the epoch of multi-object spectrographs and large surveys.

[ascl:2103.018] GalacticDNSMass: Bayesian inference determination of mass distribution of Galactic double neutron stars

GalacticDNSMass performs Bayesian inference on Galactic double neutron stars (DNS) to investigate their mass distribution. Each DNS is comprised of two neutron stars (NS), a recycled NS and a non-recycled (slow) NS. It compares two hypotheses: A - recycled NS and non-recycled NS follow an identical mass distribution, and B - they are drawn from two distinct populations. Within each hypothesis it also explores three possible functional models: Gaussian, two-Gaussian (mixture model), and uniform mass distributions.

[ascl:1109.011] GalactICS: Galaxy Model Building Package

GalactICS generates N-body realizations of axisymmetric galaxy models consisting of a disk, bulge, and halo. Some of the code is in Fortran 77, using lines longer than 72 characters in some cases; the -e flag in the makefile allows for this with a Solaris f77 compiler. Other programs are written in C. Again, the linking between these routines works on Solaris systems but may need to be adjusted for other architectures. We have found that linking using f77 instead of ld will often automatically load the appropriate libraries.

The graphics output by some of the programs (dbh, plotforce, diskdf, plothalo) uses the PGPLOT library. Alternatively, remove all calls to routines with names starting with "PG", as well as the -lpgplot flag in the Makefile, and the programs should still run fine.

[ascl:1108.004] Galacticus: A Semi-Analytic Model of Galaxy Formation

Galacticus is designed to solve the physics involved in the formation of galaxies within the current standard cosmological framework. It is a type of model known as “semi-analytic,” in which the numerous complex, non-linear physical processes involved are solved using a combination of analytic approximations and empirical calibrations from more detailed, numerical solutions. Models of this type aim to begin with the initial state of the Universe (specified shortly after the Big Bang) and apply physical principles to determine the properties of galaxies in the Universe at later times, including the present day. Typical properties computed include the mass of stars and gas in each galaxy, broad structural properties (e.g. radii, rotation speeds, geometrical shape etc.), dark matter and black hole contents, and observable quantities such as luminosities, chemical composition etc.

[ascl:1303.018] Galactus: Modeling and fitting of galaxies from neutral hydrogen (HI) cubes

Galactus, written in Python, models and fits galaxies from neutral hydrogen (HI) cubes. It uses a uniform medium to generate a cube and can perform full radiative transfer for the HI, allowing it to model self-absorption in the galaxy.

[ascl:1408.011] GALAPAGOS-C: Galaxy Analysis over Large Areas

GALAPAGOS-C is a C implementation of the IDL code GALAPAGOS (ascl:1203.002). It processes a complete set of survey images through automation of source detection via SExtractor (ascl:1010.064), postage stamp cutting, object mask preparation, sky background estimation and complex two-dimensional light profile Sérsic modelling via GALFIT (ascl:1104.010). GALAPAGOS-C uses MPI-parallelization, thus allowing quick processing of large data sets. The code can fit multiple Sérsic profiles to each galaxy, each representing distinct galaxy components (e.g. bulge, disc, bar), and optionally can fit asymmetric Fourier mode distortions.

[ascl:1203.002] GALAPAGOS: Galaxy Analysis over Large Areas: Parameter Assessment by GALFITting Objects from SExtractor

GALAPAGOS, Galaxy Analysis over Large Areas: Parameter Assessment by GALFITting Objects from SExtractor (ascl:1010.064), automates source detection, two-dimensional light-profile Sersic modelling and catalogue compilation in large survey applications. Based on a single setup, GALAPAGOS can process a complete set of survey images. It detects sources in the data, estimates a local sky background, cuts postage stamp images for all sources, prepares object masks, performs Sersic fitting including neighbours and compiles all objects in a final output catalogue. For the initial source detection GALAPAGOS applies SExtractor, while GALFIT (ascl:1104.010) is incorporated for modelling Sersic profiles. It measures the background sky involved in the Sersic fitting by means of a flux growth curve. GALAPAGOS determines postage stamp sizes based on SExtractor shape parameters. In order to obtain precise model parameters GALAPAGOS incorporates a complex sorting mechanism and makes use of multiplexing capabilities. It combines SExtractor and GALFIT data in a single output table. When incorporating information from overlapping tiles, GALAPAGOS automatically removes multiple entries from identical sources.

GALAPAGOS is programmed in the Interactive Data Language, IDL. A C implementation of the software, GALAPAGOS-C (ascl:1408.011), is available, and a multi-band Galapagos version is also available.

[ascl:1710.022] galario: Gpu Accelerated Library for Analyzing Radio Interferometer Observations

The galario library exploits the computing power of modern graphics cards (GPUs) to accelerate the comparison of model predictions to radio interferometer observations. It speeds up the computation of the synthetic visibilities given a model image (or an axisymmetric brightness profile) and their comparison to the observations.
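
A minimal sketch of the core operation, assuming the double-precision module and its sampleImage entry point (the image and (u, v) values below are placeholders):

    import numpy as np
    from galario.double import sampleImage

    # Model image (Jy/pixel), pixel size in radians, (u, v) points in wavelengths
    nxy, dxy = 1024, 1.0e-7
    image = np.zeros((nxy, nxy))
    image[nxy // 2, nxy // 2] = 1.0          # a point source at the center
    u = np.random.uniform(-1e6, 1e6, 1000)
    v = np.random.uniform(-1e6, 1e6, 1000)

    # Synthetic complex visibilities, ready to be compared to the observed ones
    vis = sampleImage(image, dxy, u, v)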

[ascl:1503.002] Galax2d: 2D isothermal Euler equations solver

Galax2d computes the 2D stationary solution of the isothermal Euler equations of gas dynamics in a rotating galaxy with a weak bar. The gravitational potential represents a weak bar and controls the flow. A damped Newton method solves the second-order upwind discretization of the equations for a steady-state solution, using a consistent linearization and a direct solver. The code can be applied as a tool for generating flow models if used on not too fine meshes, up to 256 by 256 cells for half a disk in polar coordinates.

[ascl:1104.005] GALAXEV: Evolutionary Stellar Population Synthesis Models

GALAXEV is a library of evolutionary stellar population synthesis models computed using the isochrone synthesis code of Bruzual & Charlot (2003). This code allows one to compute the spectral evolution of stellar populations over wide ranges of ages and metallicities at a resolution of 3 Å across the whole wavelength range from 3200 Å to 9500 Å, and at lower resolution outside this range.

[ascl:1901.005] Galaxia_wrap: Galaxia wrapper for generating mock stellar surveys

Galaxia_wrap is a Python wrapper around the popular Galaxia tool (ascl:1101.007) for generating mock stellar surveys, such as a magnitude-limited survey, using a built-in Galaxy model or directly from N-body data. It also offers N-body functionality and has been used to infer the age distribution of a specific stellar tracer population.

[ascl:1101.007] Galaxia: A Code to Generate a Synthetic Survey of the Milky Way

We present here a fast code for creating a synthetic survey of the Milky Way. Given one or more color-magnitude bounds, a survey size and geometry, the code returns a catalog of stars in accordance with a given model of the Milky Way. The model can be specified by a set of density distributions or as an N-body realization. We provide fast and efficient algorithms for sampling both types of models. As compared to earlier sampling schemes which generate stars at specified locations along a line of sight, our scheme can generate a continuous and smooth distribution of stars over any given volume. The code is quite general and flexible and can accept input in the form of a star formation rate, age metallicity relation, age velocity dispersion relation and analytic density distribution functions. Theoretical isochrones are then used to generate a catalog of stars and support is available for a wide range of photometric bands. As a concrete example we implement the Besancon Milky Way model for the disc. For the stellar halo we employ the simulated stellar halo N-body models of Bullock & Johnston (2005). In order to sample N-body models, we present a scheme that disperses the stars spawned by an N-body particle, in such a way that the phase space density of the spawned stars is consistent with that of the N-body particles. The code is ideally suited to generating synthetic data sets that mimic near future wide area surveys such as GAIA, LSST and HERMES. As an application we study the prospect of identifying structures in the stellar halo with a simulated GAIA survey.

[submitted] GalaXimView

GalaXimView (for Galaxies Simulations Viewer) is a Python 3 + Matplotlib tool designed to visualise particle-based simulations. It notably provides a rotatable 3D view and corresponding projections in 2D, together with a way of navigating through the snapshots of a simulation while keeping the same projection.

[ascl:1904.002] GALAXY: N-body simulation software for isolated, collisionless stellar systems

GALAXY evolves (almost) isolated, collisionless stellar systems, both disk-like and ellipsoidal. In addition to the N-body code galaxy, which offers eleven different methods to compute the gravitational accelerations, the package also includes sophisticated set-up and analysis software. While not as versatile as tree codes, for certain restricted applications the particle-mesh methods in GALAXY are 50 to 200 times faster than a widely-used tree code. After reading in data providing the initial positions, velocities, and (optionally) masses of the particles, GALAXY computes the gravitational accelerations acting on each particle and integrates the velocities and positions of the particles forward over a short time step, repeating these two steps as desired. Intermediate results can be saved, as can the final state, from which the integration can be resumed. Particles can have individual masses and their motion can be integrated using a range of time steps for greater efficiency; message-passing-interface (MPI) calls are available to enable GALAXY's use on parallel machines with high efficiency.

[ascl:1312.010] GalaxyCount: Galaxy counts and variance calculator

GalaxyCount calculates the number and standard deviation of galaxies in a magnitude limited observation of a given area. The methods to calculate both the number and standard deviation may be selected from different options. Variances may be computed for circular, elliptical and rectangular window functions.

[ascl:1702.006] GalaxyGAN: Generative Adversarial Networks for recovery of galaxy features

GalaxyGAN uses Generative Adversarial Networks to reliably recover features in images of galaxies. The package uses machine learning to train on higher quality data and learns to recover detailed features such as galaxy morphology by effectively building priors. This method opens up the possibility of recovering more information from existing and future imaging data.

[ascl:2301.022] GalCEM: GALactic Chemical Evolution Model

GalCEM (GALactic Chemical Evolution Model) tracks isotope masses as a function of time in a given galaxy. The list of tracked isotopes automatically adapts to the complete set provided by the input yields. The prescription includes massive stars, low-to-intermediate mass stars, and Type Ia supernovae as enrichment channels. Multi-dimensional interpolation curves are extracted from the input yield tables with a preprocessing tool; these interpolation curves improve the computation speeds of the full convolution integrals, which are computed for each isotope and for each enrichment channel. GalCEM also provides tools to track the rate of change of the mass of individual isotopes for a typical spiral galaxy with a final baryonic mass of 5×10^10 M⊙.

[ascl:2312.027] galclaim: GALaxy Chance of Local Alignment algorIthM

galclaim identifies associations between astrophysical transient sources and host galaxies. The association is made by estimating the chance alignment between a given transient sky localization and nearby galaxies. The code can be used with various catalogs, including Pan-STARRS, HSC, AllWISE and GLADE. galclaim also pre-checks for nearby bright galaxies using the RC3 catalog (https://heasarc.gsfc.nasa.gov/w3browse/all/rc3.html). When a nearby galaxy is found, a warning is raised and the properties of the galaxy are saved in a dedicated output file. The package can create plots displaying the computed p-values for the found objects for each transient and each catalog; plots are stored in the result/plots directory.

[ascl:1812.009] galclassify: Stellar classifications using a galactic population synthesis model

The stellar classification code galclassify is a stand-alone version of Galaxia (ascl:1101.007). It classifies and generates a synthetic population for each star using input containing observables in a fixed format rather than using a precomputed population over a large field. It is suitable for individual stellar classifications, but is slow when classifying large samples of stars.

[ascl:2410.001] GalCraft: Building integral-field spectrograph data cubes of the Milky Way

GalCraft creates mock integral-field spectroscopic (IFS) observations of the Milky Way and other hydrodynamical/N-body simulations. It conducts all the procedures from inputting data and spectral templates to the output of IFS data cubes in FITS format. The produced mock data cubes can be analyzed in the same way as real IFS observations by many methods, particularly codes like Voronoi binning (ascl:1211.006), pPXF (ascl:1210.002), line-strength indices, or a combination of them (e.g., the GIST pipeline, ascl:1907.025). The code is implemented using Python-native parallelization. GalCraft will be particularly useful for directly comparing the Milky Way with other MW-like galaxies in terms of kinematics and stellar population parameters, ultimately linking Galactic and extragalactic studies of galaxy evolution.

[ascl:1010.033] GALEV Evolutionary Synthesis Models

GALEV evolutionary synthesis models describe the evolution of stellar populations in general, of star clusters as well as of galaxies, both in terms of resolved stellar populations and of integrated light properties over cosmological timescales of > 13 Gyr from the onset of star formation shortly after the Big Bang until today.

For galaxies, GALEV includes a simultaneous treatment of the chemical evolution of the gas and the spectral evolution of the stellar content. This allows for a chemically consistent treatment that uses input physics (stellar evolutionary tracks, stellar yields and model atmospheres) for a large range of metallicities and consistently accounts for the increasing initial abundances of successive stellar generations.

[ascl:1810.001] galfast: Milky Way mock catalog generator

galfast generates catalogs for arbitrary, user-supplied Milky Way models, including empirically derived ones. The built-in model set is based on fits to SDSS stellar observations over 8000 deg^2 of the sky and includes a three-dimensional dust distribution map. Because of the capability to use empirically derived models, galfast typically produces closer matches to the actual observed counts and color-magnitude diagrams. In particular, galfast-generated catalogs are used to derive the stellar component of “Universe Model” catalogs used by the LSST Project. A key distinguishing characteristic of galfast is its speed: it offloads compute-intensive model sampling computations to the GPU (with kernels written in NVIDIA C/C++ for CUDA), enabling the generation of realistic catalogs to full LSST depth in hours (instead of days or weeks) and making it possible to study proposed science cases with high precision.

[ascl:1104.010] GALFIT: Detailed Structural Decomposition of Galaxy Images

GALFIT is a two-dimensional (2-D) fitting algorithm designed to extract structural components from galaxy images, with emphasis on closely modeling light profiles of spatially well-resolved, nearby galaxies observed with the Hubble Space Telescope. The algorithm improves on previous techniques in two areas: 1.) by being able to simultaneously fit a galaxy with an arbitrary number of components, and 2.) with optimization in computation speed, suited for working on large galaxy images. 2-D models such as the "Nuker" law, the Sersic (de Vaucouleurs) profile, an exponential disk, and Gaussian or Moffat functions are used. The azimuthal shapes are generalized ellipses that can fit disky and boxy components. Many galaxies with complex isophotes, ellipticity changes, and position-angle twists can be modeled accurately in 2-D. When examined in detail, even simple-looking galaxies generally require at least three components to be modeled accurately rather than the one or two components more often employed. This is illustrated by way of seven case studies, which include regular and barred spiral galaxies, highly disky lenticular galaxies, and elliptical galaxies displaying various levels of complexities. A useful extension of this algorithm is to accurately extract nuclear point sources in galaxies.

[ascl:1510.005] GALFORM: Galactic modeling

GALFORM is a semi-analytic model for calculating the formation and evolution of galaxies in hierarchical clustering cosmologies. Using a Monte Carlo algorithm to follow the merging evolution of dark matter haloes with arbitrary mass resolution, it incorporates realistic descriptions of the density profiles of dark matter haloes and the gas they contain. It follows the chemical evolution of gas and stars, and the associated production of dust and includes a detailed calculation of the sizes of discs and spheroids.

[ascl:1408.008] GALIC: Galaxy initial conditions construction

GalIC (GALaxy Initial Conditions) is an implementation of an iterative method to construct steady state composite halo-disk-bulge galaxy models with prescribed density distribution and velocity anisotropy that can be used as initial conditions for N-body simulations. The code is parallelized for distributed memory based on MPI. While running, GalIC produces "snapshot files" that can be used as initial conditions files. GalIC supports the three file formats ('type1' format, the slightly improved 'type2' format, and an HDF5 format) of the GADGET (ascl:0003.001) code for its output snapshot files.

[ascl:2209.011] GaLight: 2D modeling of galaxy images

GaLight (Galaxy shapes of Light) performs two-dimensional model fitting of optical and near-infrared images to characterize the light distribution of galaxies with components including a disk, bulge, bar and quasar. Light is decomposed into PSF and Sersic components, and the fitting is based on lenstronomy (ascl:1804.012). GaLight's automated features include searching for PSF stars in the FOV, automatically estimating the background noise level, and cutting out the target objects (galaxies, QSOs) and preparing the materials to model the data. It can also detect objects in the cutout stamp and quickly create Sersic keywords to model them, and can model QSOs and galaxies using a 2D Sersic profile and a scaled point source.

[ascl:1511.010] Galileon-Solver: N-body code

Galileon-Solver adds an extra force to PMCode (ascl:9909.001) using a modified Poisson equation to provide a non-linearly transformed density field, with the operations all performed in real space. The code's implicit spherical top-hat assumption only works over fairly long distance averaging scales, where the coarse-grained picture it relies on is a good approximation of reality; it uses discrete Fourier transforms and cyclic reduction in the usual way.

[ascl:1903.010] GalIMF: Galaxy-wide Initial Mass Function

GalIMF (Galaxy-wide Initial Mass Function) computes the galaxy-wide initial stellar mass function by integrating over a whole galaxy, parameterized by star formation rate and metallicity. The generated stellar mass distribution depends on the galaxy-wide star formation rate (SFR, which is related to the total mass of a galaxy) and the galaxy-wide metallicity. The code can generate a galaxy-wide IMF (IGIMF) and can also generate all the stellar masses within a galaxy with optimal sampling (OSGIMF). To compute the IGIMF or the OSGIMF, the GalIMF module contains all local IMF properties (e.g. the dependence of the stellar IMF on the metallicity and on the density of the star-cluster forming molecular cloud cores), and this software module can therefore also be used to obtain only the stellar IMF with various prescriptions, or to investigate other features of the stellar population, such as the most massive star that can form in a star cluster.

[ascl:1711.011] galkin: Milky Way rotation curve data handler

galkin is a compilation of kinematic measurements tracing the rotation curve of our Galaxy, together with a tool to treat the data. The compilation is optimized for Galactocentric radii between 3 and 20 kpc and includes the kinematics of gas, stars and masers in a total of 2780 measurements collected from almost four decades of literature. The user-friendly software provided selects, treats and retrieves the data of all source references considered. This tool is especially designed to facilitate the use of kinematic data in dynamical studies of the Milky Way, with applications ranging from dark matter constraints to tests of modified gravity.

[ascl:2103.027] GalLenspy: Reconstruction of mass profile in disc-like galaxies from the gravitational lensing effect

GalLenspy uses the gravitational lensing effect (GLE) to reconstruct mass profiles in disc-like galaxies. The algorithm inverts the lens equation for gravitational potentials with spherical symmetry, in addition to estimating the position of the source given the positions of the images produced by the lens. GalLenspy also computes critical and caustic curves and the Einstein ring.

[ascl:2202.017] GALLUMI: GALaxy LUMInosity function pipeline

GALLUMI (GALaxy LUMInosity) is a likelihood code that extracts cosmological and astrophysical parameters from the UV galaxy luminosity function. The code is implemented in the MCMC sampler MontePython (ascl:1307.002) and can be readily run in conjunction with other likelihood codes.

[ascl:1903.005] Galmag: Computation of realistic galactic magnetic fields

Galmag computes galactic magnetic fields based on mean field dynamo theory. Written in Python, Galmag allows quick exploration of solutions to the mean field dynamo equation based on galaxy parameters specified by the user, such as the scale height profile and the galaxy rotation curves. The magnetic fields are solenoidal by construction and can be helical.

[ascl:2404.005] GalMOSS: GPU-accelerated galaxy surface brightness fitting via gradient descent

GalMOSS performs two-dimensional fitting of galaxy profiles. This Python-based, Torch-powered tool seamlessly enables GPU parallelization and meets the high computational demands of large-scale galaxy surveys. It incorporates widely used profiles such as the Sérsic, Exponential disk, Ferrer, King, Gaussian, and Moffat profiles, and allows for the easy integration of more complex models. Tested on over 8,000 galaxies from the Sloan Digital Sky Survey (SDSS) g-band with a single NVIDIA A100 GPU, GalMOSS completed classical Sérsic profile fitting in about 10 minutes. Benchmark tests show that GalMOSS achieves computational speeds that are significantly faster than those of default implementations.

[ascl:1501.014] GalPaK 3D: Galaxy parameters and kinematics extraction from 3D data

GalPaK 3D extracts the intrinsic (i.e. deconvolved) galaxy parameters and kinematics from any 3-dimensional data. The algorithm uses a disk parametric model with 10 free parameters (which can also be fixed independently) and a MCMC approach with non-traditional sampling laws in order to efficiently probe the parameter space. More importantly, it uses the knowledge of the 3-dimensional spread-function to return the intrinsic galaxy properties and the intrinsic data-cube. The 3D spread-function class is flexible enough to handle any instrument.

GalPaK 3D can simultaneously constrain the kinematics and morphological parameters of (non-merging, i.e. regular) galaxies observed in non-optimal seeing conditions and can also be used on AO data or on high-quality, high-SNR data to look for non-axisymmetric structures in the residuals.

[ascl:1611.006] GalPot: Galaxy potential code

GalPot finds the gravitational potential associated with axisymmetric density profiles. The package includes code that performs transformations between commonly used coordinate systems for both positions and velocities (the class OmniCoords), and that integrates orbits in the potentials. GalPot is a stand-alone version of Walter Dehnen's GalaxyPotential C++ code taken from the falcON code in the NEMO Stellar Dynamics Toolbox (ascl:1010.051).

[ascl:1010.028] GALPROP: Code for Cosmic-ray Transport and Diffuse Emission Production

GALPROP is a numerical code for calculating the propagation of relativistic charged particles and the diffuse emissions produced during their propagation. The GALPROP code incorporates as much realistic astrophysical input as possible together with latest theoretical developments. The code calculates the propagation of cosmic-ray nuclei, antiprotons, electrons and positrons, and computes diffuse γ-rays and synchrotron emission in the same framework. Each run of the code is governed by a configuration file allowing the user to specify and control many details of the calculation. Thus, each run of the code corresponds to a potentially different "model." The code continues to be developed and is available to the scientific community.

[ascl:1411.008] galpy: Galactic dynamics package

galpy is a Python package for galactic dynamics. It supports orbit integration in a variety of potentials, evaluating and sampling various distribution functions, and the calculation of action-angle coordinates for all static potentials.
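
A minimal orbit-integration sketch (in recent galpy versions, Orbit() with no arguments returns the orbit of the Sun):

    import numpy as np
    import astropy.units as u
    from galpy.orbit import Orbit
    from galpy.potential import MWPotential2014

    # Integrate the Sun's orbit in a standard Milky Way potential for 10 Gyr
    o = Orbit()
    ts = np.linspace(0.0, 10.0, 2001) * u.Gyr
    o.integrate(ts, MWPotential2014)
    print(o.e(), o.rperi(), o.rap())  # eccentricity, peri- and apogalacticon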

[ascl:2102.013] GalRotpy: Parametrize the rotation curve and gravitational potential of disk-like galaxies

GalRotpy models the dynamical mass of disk-like galaxies and makes a parametric fit of the rotation curve by means of the composed gravitational potential of the galaxy. It can be used to check the presence of an assumed mass-type component in an observed rotation curve, to determine quantitatively the main mass contribution in a galaxy by means of the mass ratios of a given set of five potentials, and to bound the contribution of each mass component given its gravitational potential parameters.

[ascl:1402.009] GalSim: Modular galaxy image simulation toolkit

GalSim is a fast, modular software package for simulation of astronomical images. Though its primary purpose is for tests of weak lensing analysis methods, it can be used for other purposes. GalSim allows galaxies and PSFs to be represented in a variety of ways, and can apply shear, magnification, dilation, or rotation to a galaxy profile including lensing-based models from a power spectrum or NFW halo profile. It can write images in regular FITS files, FITS data cubes, or multi-extension FITS files. It can also compress the output files using various compressions including gzip, bzip2, and rice. The user interface is in python or via configuration scripts, and the computations are done in C++ for speed.
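
A minimal sketch of the usual pattern: build a galaxy profile, apply a shear, convolve with a PSF, and draw onto pixels:

    import galsim

    gal = galsim.Exponential(half_light_radius=2.0, flux=1.0e5)
    gal = gal.shear(g1=0.02, g2=0.0)          # weak-lensing shear
    psf = galsim.Gaussian(sigma=0.8)
    final = galsim.Convolve([gal, psf])
    image = final.drawImage(nx=64, ny=64, scale=0.2)  # arcsec/pixel
    image.write('galaxy.fits')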

[ascl:1711.007] galstep: Initial conditions for spiral galaxy simulations

galstep generates initial conditions for disk galaxy simulations with GADGET-2 (ascl:0003.001), RAMSES (ascl:1011.007) and GIZMO (ascl:1410.003), including a stellar disk, a gaseous disk, a dark matter halo and a stellar bulge. The first two components follow an exponential density profile, and the last two a Dehnen density profile with gamma=1 by default, corresponding to a Hernquist profile.

[ascl:1711.010] galstreams: Milky Way streams footprint library and toolkit

galstreams provides a compilation of spatial information for known stellar streams and overdensities in the Milky Way and includes Python tools for visualizing them. ASCII tables are also provided for quick viewing of the streams' footprints using TOPCAT (ascl:1101.010). As of 2022, the library provides celestial, distance, proper motion and radial velocity tracks for each stream (proper motions and radial velocities when available), stored as AstroPy (ascl:1304.002) SkyCoord objects; a stream's (heliocentric) coordinate frame is realized as an AstroPy reference frame. The code offers polygon footprints as well as poles (at the mid-point) and pole tracks in the heliocentric and Galactocentric (GSR) frames. It also offers angular momentum tracks in a heliocentric reference frame at rest with respect to the Galactic center, and uniformly reports each stream's length, end points and mid-point, heliocentric and Galactocentric mid-pole, track and discovery references, and an information flag denoting which of the 6D attributes (sky, distance, proper motions and radial velocity) are available in the track object.
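
A minimal usage sketch, assuming the MWStreams library object and its track attribute (the stream key below is illustrative; the exact key format depends on the library version):

    import galstreams

    # Load the library of known streams and pull out one stream's track
    mws = galstreams.MWStreams(verbose=False)
    gd1 = mws['GD-1']
    track = gd1.track                  # an AstroPy SkyCoord along the stream
    print(track.icrs.ra[:5], track.icrs.dec[:5])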

[ascl:1304.003] GALSVM: Automated Morphology Classification

GALSVM is IDL software for automated morphology classification. It was specially designed for high redshift data but can be used at low redshift as well. It analyzes morphologies of galaxies based on a particular family of learning machines called support vector machines. The method can be seen as a generalization of the classical CAS classification but with an unlimited number of dimensions and non-linear boundaries between decision regions. It is fully automated and consequently well adapted to large cosmological surveys.

[ascl:1708.030] GAMBIT: Global And Modular BSM Inference Tool

GAMBIT (Global And Modular BSM Inference Tool) performs statistical global fits of generic physics models using a wide range of particle physics and astrophysics data. Modules provide native simulations of collider and astrophysics experiments, a flexible system for interfacing external codes (the backend system), a fully featured statistical and parameter scanning framework, and additional tools for implementing and using hierarchical models.

[ascl:1912.012] GAME: GAlaxy Machine learning for Emission lines

GAME infers different ISM physical properties by analyzing the emission line intensities in a galaxy spectrum. The code is trained with a large library of synthetic spectra spanning many different ISM phases, including HII (ionized) regions, PDRs, and neutral regions. GAME is based on a Supervised Machine Learning algorithm called AdaBoost with Decision Trees as base learner. Given a set of input lines in a spectrum, the code performs a training on the library and then evaluates the line intensities to give a determination of the physical properties. The errors on the input emission line intensities and the uncertainties on the physical properties determinations are also taken into account. GAME infers gas density, column density, far-ultraviolet (FUV, 6–13.6 eV) flux, ionization parameter, metallicity, escape fraction, and visual extinction. A web interface for using the code is available.

[ascl:1612.017] GAMER: GPU-accelerated Adaptive MEsh Refinement code

GAMER (GPU-accelerated Adaptive MEsh Refinement) serves as a general-purpose adaptive mesh refinement + GPU framework and solves hydrodynamics with self-gravity. The code supports adaptive mesh refinement (AMR), hydrodynamics with self-gravity, and a variety of GPU-accelerated hydrodynamic and Poisson solvers. It also supports hybrid OpenMP/MPI/GPU parallelization, concurrent CPU/GPU execution for performance optimization, and Hilbert space-filling curve for load balance. Although the code is designed for simulating galaxy formation, it can be easily modified to solve a variety of applications with different governing equations. All optimization strategies implemented in the code can be inherited straightforwardly.

[ascl:2203.007] GAMERA: Source modeling in gamma astronomy

GAMERA handles spectral modeling of non-thermally emitting astrophysical sources in a simple and modular way. It allows the user to devise time-dependent models of leptonic and hadronic particle populations in a general astrophysical context (including SNRs, PWNs and AGNs) and to compute their subsequent photon emission. GAMERA can calculate the spectral evolution of a particle population in the presence of time-dependent or constant injection, energy losses and particle escape; it also calculates the radiation spectrum from a parent particle population.

[ascl:2104.024] GAMMA: Relativistic hydro and local cooling on a moving mesh

GAMMA models relativistic hydrodynamics and non-thermal emission on a moving mesh. It uses an arbitrary Lagrangian-Eulerian approach only in the dominant direction of fluid motion to avoid mesh entanglement and associated computational costs. Shock detection, particle injection and local calculation of their evolution including radiative cooling are done at runtime. The package is modular; though it was designed with GRB physics applications in mind, new solvers and geometries can be implemented easily, making GAMMA suitable for a wide range of applications.

[ascl:2109.001] gammaALPs: Conversion probability between photons and axions/axionlike particles

gammaALPs calculates the conversion probability between photons and axions/axion-like particles in various astrophysical magnetic fields. Though focused on environments relevant to mixing between gamma rays and ALPs, this suite, written in Python, can also be used for broader applications. The code also implements various models of astrophysical magnetic fields, which can be useful for applications beyond ALP searches.

[ascl:1110.007] GammaLib: Toolbox for High-level Analysis of Astronomical Gamma-ray Data

GammaLib is a versatile toolbox for the high-level analysis of astronomical gamma-ray data. It is implemented as a C++ library that is fully scriptable in the Python scripting language. The library provides core functionalities such as data input and output, interfaces for parameter specifications, and a reporting and logging interface. It implements instrument-specific functionalities such as instrument response functions and data formats. Instrument-specific functionalities share a common interface to allow GammaLib to be extended to include new gamma-ray instruments. GammaLib provides an abstract data analysis framework that enables simultaneous multi-mission analysis.

[ascl:1711.014] Gammapy: Python toolbox for gamma-ray astronomy

Gammapy analyzes gamma-ray data and creates sky images, spectra and lightcurves, from event lists and instrument response information; it can also determine the position, morphology and spectra of gamma-ray sources. It is used to analyze data from H.E.S.S., Fermi-LAT, and the Cherenkov Telescope Array (CTA).

[ascl:1105.011] Ganalyzer: A tool for automatic galaxy image analysis

Ganalyzer is a model-based tool that automatically analyzes and classifies galaxy images. Ganalyzer works by separating the galaxy pixels from the background pixels, finding the center and radius of the galaxy, generating the radial intensity plot, and then computing the slopes of the peaks detected in the radial intensity plot to measure the spirality of the galaxy and determine its morphological class. Unlike algorithms that are based on machine learning, Ganalyzer is based on measuring the spirality of the galaxy, a task that is difficult to perform manually, and in many cases can provide a more accurate analysis compared to manual observation. Ganalyzer is simple to use, and can be easily embedded into other image analysis applications. Another advantage is its speed, which allows it to analyze ~10,000,000 galaxy images in five days using a standard modern desktop computer. These capabilities can make Ganalyzer a useful tool in analyzing large datasets of galaxy images collected by autonomous sky surveys such as SDSS, LSST or DES.

[ascl:1708.012] GANDALF: Gas AND Absorption Line Fitting

GANDALF (Gas AND Absorption Line Fitting) accurately separates the stellar and emission-line contributions to observed spectra. The IDL code includes reddening by interstellar dust and also returns formal errors on the position, width, amplitude and flux of the emission lines. Example wrappers that make use of pPXF (ascl:1210.002) to derive the stellar kinematics are included.

[ascl:1602.015] GANDALF: Graphical Astrophysics code for N-body Dynamics And Lagrangian Fluids

GANDALF, a successor to SEREN (ascl:1102.010), is a hybrid self-gravitating fluid dynamics and collisional N-body code primarily designed for investigating star formation and planet formation problems. GANDALF uses various implementations of Smoothed Particle Hydrodynamics (SPH) to perform hydrodynamical simulations of gas clouds undergoing gravitational collapse to form new stars (or other objects), and can perform simulations of pure N-body dynamics using high accuracy N-body integrators, model the intermediate phase of cluster evolution, and provide visualizations via its python interface as well as interactive simulations. Although based on many of the SEREN routines, GANDALF has been largely re-written from scratch in C++ using more optimal algorithms and data structures.

[ascl:1303.027] GaPP: Gaussian Processes in Python

Gaussian process regression can reconstruct a function from a sample of data without assuming a parametrization of the function. The GaPP code can be used on any dataset to reconstruct a function. It handles individual error bars on the data and can be used to determine the derivatives of the reconstructed function. The data sample can consist of observations of the function and of its first derivative.
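
A sketch of the underlying idea using scikit-learn's GaussianProcessRegressor rather than GaPP's own interface (the names below are therefore not GaPP's API):

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    # Noisy samples of an unknown function, with individual error bars
    x = np.linspace(0.0, 10.0, 25)
    y = np.sin(x) + np.random.normal(0.0, 0.1, x.size)

    # alpha carries the per-point variances (the individual error bars)
    gpr = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=0.1**2)
    gpr.fit(x[:, None], y)

    # Reconstruct the function, with uncertainty, on a fine grid
    x_new = np.linspace(0.0, 10.0, 200)
    y_rec, y_std = gpr.predict(x_new[:, None], return_std=True)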

[ascl:1010.049] Gas-momentum-kinetic SZ cross-correlations

We present a new method for detecting the missing baryons by generating a template for the kinematic Sunyaev-Zel'dovich effect. The template is computed from the product of a reconstructed velocity field with a galaxy field. We provide maps of such templates constructed from SDSS Data Release 7 spectroscopic data (SDSS VAGC sample), alongside their expected two-point correlation functions with CMB temperature anisotropies. Code for generating the coefficients of the two-point correlation function is also released, giving users of the gas-momentum map a way to change parameters such as cosmological parameters, reionization history, ionization parameters, etc.

[ascl:1210.020] GASGANO: Data File Organizer

GASGANO is a GUI software tool for managing and viewing data files produced by the VLT Control System (VCS) and the Data Flow System (DFS). It is developed and maintained by ESO to help its user community manage and organize astronomical data observed and produced by all VLT-compliant telescopes in a systematic way. The software understands FITS, PAF, and ASCII files, as well as Reduction Blocks, and can group, sort, classify, filter, and search data in addition to allowing the user to browse, view, and manage them.

[ascl:1710.019] GASOLINE: Smoothed Particle Hydrodynamics (SPH) code

Gasoline solves the equations of gravity and hydrodynamics in astrophysical problems, including simulations of planets, stars, and galaxies. It uses an SPH method that features correct mixing behavior in multiphase fluids and minimal artificial viscosity. This method is identical to the SPH method used in the ChaNGa code (ascl:1105.005), allowing users to extend results to problems requiring >100,000 cores. Gasoline uses a fast, memory-efficient O(N log N) KD-Tree to solve Poisson's Equation for gravity and avoids artificial viscosity in non-shocking compressive flows.

[ascl:2410.016] Gaspery: Radial velocity (RV) observing strategies

Gaspery uses the Fisher Information Matrix (FIM) to evaluate different radial velocity (RV) observing strategies; this assists observational exoplanet astronomers in constructing the observing strategy that maximizes information (or minimizes uncertainty) on the RV semi-amplitude K. The code is flexible and generalizable, however, and can maximize information on any free parameter from any model, given a time series support (x-axis).
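
A generic numerical sketch of the Fisher-information idea (not Gaspery's own API; the circular-orbit RV model and function names below are hypothetical):

    import numpy as np

    def rv_model(t, K, period):
        # Circular-orbit radial velocity model (illustrative)
        return K * np.sin(2.0 * np.pi * t / period)

    def sigma_K(t, sigma_rv, K=10.0, period=5.0, eps=1e-6):
        # 1-sigma uncertainty on K from the Fisher information of the
        # observing times t, assuming white noise of amplitude sigma_rv
        dmu_dK = (rv_model(t, K + eps, period)
                  - rv_model(t, K - eps, period)) / (2 * eps)
        fisher_KK = np.sum(dmu_dK**2) / sigma_rv**2
        return 1.0 / np.sqrt(fisher_KK)

    # Compare two strategies: 20 nightly visits vs. 20 visits every 3 nights
    print(sigma_K(np.arange(20.0), 2.0), sigma_K(3.0 * np.arange(20.0), 2.0))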

[ascl:2406.001] GAStimator: Python MCMC gibbs-sampler with adaptive stepping

GAStimator implements a Python MCMC Gibbs-sampler with adaptive stepping. The code is simple, robust, and stable, and is well suited to high-dimensional problems with many degrees of freedom and very sharp likelihood features. It has been used extensively for kinematic modeling of molecular gas in galaxies, but is fully general and may be used for any problem MCMC methods can tackle.

[ascl:2409.015] GASTLI: GAS gianT modeL for Interiors

GASTLI (GAS gianT modeL for Interiors) calculates interior structure models for gas giant exoplanets. The code computes mass-radius curves, thermal evolution curves, and interior composition retrievals to fit an interior structure model to mass, radius, age, and, if available, atmospheric metallicity data. GASTLI can also plot the results, including internal and atmospheric profiles, a pressure-temperature diagram, mass-radius relations, and thermal evolution curves.

[ascl:1610.007] gatspy: General tools for Astronomical Time Series in Python

Gatspy contains efficient, well-documented implementations of several common routines for astronomical time series analysis, including the Lomb-Scargle periodogram, the Supersmoother method, and others.
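
A minimal periodogram sketch:

    import numpy as np
    from gatspy.periodic import LombScargleFast

    # Irregularly sampled light curve with a 2.5-day period
    t = np.sort(100.0 * np.random.rand(300))
    y = 10.0 + np.sin(2.0 * np.pi * t / 2.5) + 0.1 * np.random.randn(300)
    dy = 0.1 * np.ones_like(t)

    model = LombScargleFast().fit(t, y, dy)
    periods, power = model.periodogram_auto(nyquist_factor=100)
    print(periods[np.argmax(power)])   # should recover a period near 2.5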

[ascl:2405.007] GauPro: R package for Gaussian process modeling

GauPro fits a Gaussian process regression model to a dataset. A Gaussian process (GP) is a commonly used model in computer simulation. It assumes that the distribution of any set of points is multivariate normal. A major benefit of GP models is that they provide uncertainty estimates along with their predictions.

[ascl:1406.018] GAUSSCLUMPS: Gaussian-shaped clumping from a spectral map

GAUSSCLUMPS decomposes a spectral map into Gaussian-shape clumps. The clump-finding algorithm decomposes a spectral data cube by iteratively removing 3-D Gaussians as representative clumps. GAUSSCLUMPS was originally a separate code distribution but is now a contributed package in GILDAS (ascl:1305.010). A reimplementation can also be found in CUPID (ascl:1311.007).

[ascl:1305.009] GaussFit: Solving least squares and robust estimation problems

GaussFit solves least squares and robust estimation problems; written originally for reduction of NASA Hubble Space Telescope data, it includes a complete programming language designed especially to formulate estimation problems, a built-in compiler and interpreter to support the programming language, and a built-in algebraic manipulator for calculating the required partial derivatives analytically. The code can handle nonlinear models, exact constraints, correlated observations, and models where the equations of condition contain more than one observed quantity. Written in C, GaussFit includes an experimental robust estimation capability so data sets contaminated by outliers can be handled simply and efficiently.

[ascl:1907.019] GaussPy: Python implementation of the Autonomous Gaussian Decomposition algorithm

GaussPy implements the Autonomous Gaussian Decomposition (AGD) algorithm, which uses computer vision and machine learning techniques to provide optimized initial guesses for the parameters of a multi-component Gaussian model automatically and efficiently. The speed and adaptability of AGD allow it to interpret large volumes of spectral data efficiently. Although it was initially designed for applications in radio astrophysics, AGD can be used to search for one-dimensional Gaussian (or any other single-peaked spectral profile)-shaped components in any data set. To determine how many Gaussian functions to include in a model and what their parameters are, AGD uses a technique called derivative spectroscopy. The derivatives of a spectrum can efficiently identify shapes within that spectrum corresponding to the underlying model, including gradients, curvature and edges.
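
A schematic of the derivative-spectroscopy idea in plain NumPy (illustrative and simplified, not GaussPy's API):

    import numpy as np

    def candidate_components(spectrum, dx, smooth=5):
        # Candidate Gaussian components sit where the second derivative of
        # a (smoothed) spectrum has negative local minima
        kernel = np.ones(smooth) / smooth              # crude smoothing
        s = np.convolve(spectrum, kernel, mode="same")
        d2 = np.gradient(np.gradient(s, dx), dx)       # second derivative
        return [i for i in range(1, len(d2) - 1)
                if d2[i] < 0 and d2[i] < d2[i - 1] and d2[i] < d2[i + 1]]

    x = np.linspace(-10.0, 10.0, 400)
    spec = np.exp(-(x + 2)**2) + 0.6 * np.exp(-(x - 3)**2 / 2)
    print(candidate_components(spec, x[1] - x[0]))     # two candidate peaks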

[ascl:1907.020] GaussPy+: Gaussian decomposition package for emission line spectra

GaussPy+ is a fully automated Gaussian decomposition package for emission line spectra. It is based on GaussPy (ascl:1907.019) and offers several improvements, including automating preparatory steps and providing an accurate noise estimation, improving the fitting routine, and providing a routine to refit spectra based on neighboring fit solutions. GaussPy+ handles complex emission and low to moderate signal-to-noise values.

[ascl:1710.014] GBART: Determination of the orbital elements of spectroscopic binaries

GBART is an improved version of the code for determining the orbital elements for spectroscopic binaries originally written by Bertiau & Grobben (1968).

[ascl:2210.011] gbdes: DECam instrumental signature fitting and processing programs

gbdes derives photometric and astrometric calibration solutions for complex multi-detector astronomical imagers. The package includes routines to filter catalogs down to useful stellar objects, collect metadata from the catalogs and create a config file holding FITS binary tables describing exposures, instruments, fields, and other available information in the data, and uses a friends-of-friends matching algorithm to link together all detections of common objects found in distinct exposures. gbdes also calculates airmasses and parallactic angles for each exposure, calculates and saves the expected differential chromatic refraction (DCR) needed for precision astrometry, optimizes the parameters of a photometric model to maximize agreement between magnitudes measured in different exposures of the same source, optimizes the parameters of an astrometric model to maximize agreement among the exposures and any reference catalogs, and performs other tasks. The solutions derived and used by gbdes are stored in YAML format; gbdes uses the Python code pixmappy (ascl:2210.012) to read the astrometric solution files and execute specified transformations.

[ascl:1908.006] GBKFIT: Galaxy kinematic modeling

GBKFIT performs galaxy kinematic modeling. It can be used to extract morphological and kinematical properties of galaxies by fitting models to spatially resolved kinematic data. The software can also take beam smearing into account by using the knowledge of the line and point spread functions. GBKFIT can take advantage of many-core and massively parallel architectures such as multi-core CPUs and Graphics Processing Units (GPUs), making it suitable for modeling large-scale surveys of thousands of galaxies within a very reasonable time frame. GBKFIT features an extensible object-oriented architecture that supports arbitrary models and optimization techniques in the form of modules; users can write custom modules without modifying GBKFIT’s source code. The software is written in C++ and conforms to the latest ISO standards.

[ascl:1303.019] GBTIDL: Reduction and Analysis of GBT Spectral Line Data

GBTIDL is an interactive package for reduction and analysis of spectral line data taken with the Robert C. Byrd Green Bank Telescope (GBT). The package, written entirely in IDL, consists of straightforward yet flexible calibration, averaging, and analysis procedures (the "GUIDE layer") modeled after the UniPOPS and CLASS data reduction philosophies, a customized plotter with many built-in visualization features, and Data I/O and toolbox functionality that can be used for more advanced tasks. GBTIDL makes use of data structures which can also be used to store intermediate results. The package consumes and produces data in GBT SDFITS format. GBTIDL can be run online and have access to the most recent data coming off the telescope, or can be run offline on preprocessed SDFITS files.

[ascl:2111.015] gCMCRT: 3D Monte Carlo Radiative Transfer for exoplanet atmospheres using GPUs

gCMCRT globally processes 3D atmospheric data, and as a fully 3D model, it avoids the biases and assumptions present when using 1D models to process 3D structures. It is well suited to performing the post-processing of large parameter grids of GCM models, and provides simple pipelines that convert the 3D GCM structures from many widely used GCMs in the community to the gCMCRT format, interpolating chemical abundances (if needed) and performing the required spectra calculation. The high-resolution spectra modes of gCMCRT provide an additional highly useful capability for 3D modellers to directly compare output to high-resolution spectral data.

[ascl:2302.018] GCP: Automated GILDAS-CLASS Pipeline

This library of scripts provides a simple interface for running the CLASS software from GILDAS (ascl:1305.010) in a semi-automatic way. Using these scripts, one can extract and organize spectra from data files in CLASS format (for example, .30m and .40m), reduce them, and even combine or average them once they are reduced. The library contains five Python scripts and two optional Julia scripts.

[ascl:1811.018] gdr2_completeness: GaiaDR2 data retrieval and manipulation

gdr2_completeness queries Gaia DR2 TAP services and divides the queries into sub-queries chunked into arbitrary healpix bins. Downloaded data are formatted into arrays. Internal completeness is calculated by dividing the starcounts that survive an applied cut (e.g., a radial velocity measurement and good parallax) by the total starcount. Independent determination of the external GDR2 completeness per healpix (level 6) and G magnitude bin (3 coarse bins: 8-12, 12-15, 15-18) is inferred from a crossmatch with 2MASS data. The overall completeness of a specific GDR2 sample can be approximated by multiplying the internal completeness map with the external one, which is useful when data are compared to models. Jupyter notebooks showcasing both utilities enable the user to easily construct the overall completeness for arbitrary samples of the GDR2 catalogue.
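
The combination step is a per-pixel product; a minimal sketch with hypothetical completeness maps stored as NumPy arrays over HEALPix level-6 pixels (the values below are illustrative, not the package's API):

    import numpy as np

    npix = 12 * 64**2                               # HEALPix level 6
    internal = np.random.uniform(0.8, 1.0, npix)    # fraction surviving sample cuts
    external = np.random.uniform(0.7, 1.0, npix)    # GDR2 completeness vs. 2MASS

    overall = internal * external                   # overall completeness per pixel
    print(overall.mean())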

[ascl:1010.079] Geant4: A Simulation Toolkit for the Passage of Particles through Matter

Geant4 is a toolkit for simulating the passage of particles through matter. It includes a complete range of functionality including tracking, geometry, physics models and hits. The physics processes offered cover a comprehensive range, including electromagnetic, hadronic and optical processes, a large set of long-lived particles, materials and elements, over a wide energy range starting, in some cases, from 250 eV and extending in others to the TeV energy range. It has been designed and constructed to expose the physics models utilised, to handle complex geometries, and to enable its easy adaptation for optimal use in different sets of applications. The toolkit is the result of a worldwide collaboration of physicists and software engineers. It has been created exploiting software engineering and object-oriented technology and implemented in the C++ programming language. It has been used in applications in particle physics, nuclear physics, accelerator design, space engineering and medical physics.

[ascl:1608.006] Gemini IRAF: Data reduction software for the Gemini telescopes

The Gemini IRAF package processes observational data obtained with the Gemini telescopes. It is an external package layered upon IRAF and supports data from numerous instruments, including FLAMINGOS-2, GMOS-N, GMOS-S, GNIRS, GSAOI, NIFS, and NIRI. The Gemini IRAF package is organized into sub-packages; it contains a generic tools package, "gemtools", along with instrument-specific packages. The raw data from the Gemini facility instruments are stored as Multi-Extension FITS (MEF) files. Therefore, all the tasks in the Gemini IRAF package, intended for processing data from the Gemini facility instruments, are capable of handling MEF files.

[ascl:1007.003] GEMINI: A toolkit for analytical models of two-point correlations and inhomogeneous structure formation

Gemini is a toolkit for analytical models of two-point correlations and inhomogeneous structure formation. It extends standard Press-Schechter theory to inhomogeneous situations, allowing a realistic, analytical calculation of halo correlations and bias.

[ascl:1212.005] General complex polynomial root solver

This general complex polynomial root solver, implemented in Fortran and further optimized for binary microlenses, uses a new algorithm to solve polynomial equations and is 1.6-3 times faster than the ZROOTS subroutine that is commercially available from Numerical Recipes, depending on application. The largest improvement, when compared to naive solvers, comes from a fail-safe procedure that permits skipping the majority of the calculations in the great majority of cases, without risking catastrophic failure in the few cases that these are actually required.

[ascl:2006.020] GenetIC: Initial conditions generator for cosmological simulations

GenetIC generates initial conditions for cosmological simulations, especially for zoom simulations of galaxies. It provides support for "genetic modifications" of specific attributes of simulations to allow study of the impact of such modifications on the outcomes; the code can also produce constrained initial conditions.

[ascl:1812.014] GENGA: Gravitational ENcounters with Gpu Acceleration

GENGA (Gravitational ENcounters with Gpu Acceleration) integrates planet and planetesimal dynamics in the late stage of planet formation and stability analyses of planetary systems. It uses mixed variable integration when the motion is a perturbed Kepler orbit and combines this with a direct N-body Bulirsch-Stoer method during close encounters. It supports three simulation modes: 1.) integration of up to 2048 massive bodies; 2.) integration with up to a million test particles; and 3.) parallel integration of a large number of individual planetary systems.

[ascl:1706.006] GenPK: Power spectrum generator

GenPK generates the 3D matter power spectra for each particle species from a Gadget snapshot. Written in C++, it requires both FFTW3 and GadgetReader.

[ascl:1011.015] Geokerr: Computing Photon Orbits in a Kerr Spacetime

Relativistic radiative transfer problems require the calculation of photon trajectories in curved spacetime. Programmed in Fortran, Geokerr uses a novel technique for rapid and accurate calculation of null geodesics in the Kerr metric. The equations of motion from the Hamilton-Jacobi equation are reduced directly to Carlson's elliptic integrals, simplifying algebraic manipulations and allowing all coordinates to be computed semi-analytically for the first time.

[ascl:1511.015] George: Gaussian Process regression

George is a fast and flexible library, implemented in C++ with Python bindings, for Gaussian Process regression. It is useful for accounting for correlated noise in astronomical datasets, including those for transiting exoplanet discovery and characterization and for stellar population modeling.
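
A minimal regression sketch:

    import numpy as np
    import george
    from george import kernels

    # Toy dataset with measurement errors
    x = np.sort(10.0 * np.random.rand(50))
    yerr = 0.2 * np.ones_like(x)
    y = np.sin(x) + yerr * np.random.randn(50)

    # Squared-exponential kernel; compute() factorizes the covariance matrix
    gp = george.GP(1.0 * kernels.ExpSquaredKernel(1.0))
    gp.compute(x, yerr)

    # Predictive mean and variance on a fine grid, conditioned on the data
    x_pred = np.linspace(0.0, 10.0, 200)
    mu, var = gp.predict(y, x_pred, return_var=True)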

[ascl:1412.012] GeoTOA: Geocentric TOA tools

GeoTOA computes the pulse times of arrival (TOAs) at an observatory (or spacecraft) from unbinned Fermi LAT data. Written in Python, the software requires NumPy, matplotlib, SciPy, Fermitools (ascl:1905.011), and Tempo2 (ascl:1210.015).

[ascl:2306.058] GER: Global Extinction Reduction

The Global Extinction Reduction IDL codes compare optical photometry from the twin Gemini North and South Multi-Object Spectrographs (GMOS-N and GMOS-S) against the expected worsening of atmospheric transparency due to global climate change. Data from the Gemini instruments are first reduced by DRAGONS (ascl:1811.002). GER then calibrates them against the Sloan Digital Sky Survey (SDSS) and Gaia G-band catalogs; image rotation and alignment are accomplished via identification of sufficiently bright stars in Gaia. A simple model of the Gemini telescopes and their site characteristics is generated, including meteorology, cloud fractions, number of reflections, dates of mirror re-coatings modulated by the rate of efficiency decay, and detector response with associated zeropoints; this can be compared with the decline of transparency due to rising temperature and the associated increase in humidity.
