ASCL.net

Astrophysics Source Code Library

Making codes discoverable since 1999

Browsing Codes

Results 1501-1750 of 2094 (2063 ASCL, 31 submitted)

[ascl:1802.001] FAC: Flexible Atomic Code

FAC calculates various atomic radiative and collisional processes, including radiative transition rates, collisional excitation and ionization by electron impact, energy levels, photoionization, and autoionization, and their inverse processes radiative recombination and dielectronic capture. The package also includes a collisional radiative model to construct synthetic spectra for plasmas under different physical conditions.

[ascl:1705.006] f3: Full Frame Fotometry for Kepler Full Frame Images

Light curves from the Kepler telescope rely on "postage stamp" cutouts of a few pixels near each of 200,000 target stars. These light curves are optimized for the detection of short-term signals like planet transits but induce systematics that overwhelm long-term variations in stellar flux. Longer-term effects can be recovered through analysis of the Full Frame Images, a set of calibration data obtained monthly during the Kepler mission. The Python package f3 analyzes the Full Frame Images to infer long-term astrophysical variations in the brightness of Kepler targets, such as magnetic activity or sunspots on slowly rotating stars.

[ascl:1208.021] EzGal: A Flexible Interface for Stellar Population Synthesis Models

EzGal is a flexible Python program which generates observable parameters (magnitudes, colors, and mass-to-light ratios) for arbitrary input stellar population synthesis (SPS) models; it enables simple, direct comparison of different model sets so that the uncertainty introduced by choice of model set can be quantified. EzGal is also capable of generating composite stellar population models (CSPs) for arbitrary input star-formation histories and reddening laws, and can be used to interpolate between metallicities for a given model set.
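
As an illustration of the intended workflow, here is a minimal Python sketch in the style of the EzGal documentation; the model filename and the exact get_apparent_mags signature are assumptions and should be checked against the current release:

    import ezgal

    # Load a bundled stellar population model (filename assumed; this one
    # would be a Bruzual & Charlot 2003 SSP, solar metallicity, Chabrier IMF)
    model = ezgal.model('bc03_ssp_z_0.02_chab.model')

    # Predicted apparent magnitudes in SDSS i for a population formed at
    # zf = 3.0, observed at several redshifts
    mags = model.get_apparent_mags(zf=3.0, filters='sloan_i', zs=[0.5, 1.0, 1.5])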

[ascl:1210.004] EZ: A Tool For Automatic Redshift Measurement

EZ (Easy-Z) estimates redshifts for extragalactic objects. It compares the observed spectrum with a set of (user-given) spectral templates to find the best value for the redshift. To accomplish this task, it uses a highly configurable set of algorithms. EZ is easily extendible with new algorithms. It is implemented as a set of C programs and a number of python classes. It can be used as a standalone program, or the python classes can be directly imported by other applications.

[ascl:1407.019] EZ_Ages: Stellar population age calculator

EZ_Ages is an IDL code package that computes the mean, light-weighted stellar population age, [Fe/H], and abundance enhancements [Mg/Fe], [C/Fe], [N/Fe], and [Ca/Fe] for unresolved stellar populations. This is accomplished by comparing Lick index line strengths between the data and the stellar population models of Schiavon (2007), using a method described in Graves & Schiavon (2008). The algorithm uses the inversion of index-index model grids to determine ages and abundances, and exploits the sensitivities of the various Lick indices to measure Mg, C, N, and Ca enhancements over their solar abundances with respect to Fe.

[ascl:1010.061] EyE: Enhance Your Extraction

In EyE (Enhance Your Extraction) an artificial neural network connected to pixels of a moving window (retina) is trained to associate these input stimuli to the corresponding response in one or several output image(s). The resulting filter can be loaded in SExtractor to operate complex, wildly non-linear filters on astronomical images. Typical applications of EyE include adaptive filtering, feature detection and cosmetic corrections.

[ascl:1010.032] Extreme Deconvolution: Density Estimation using Gaussian Mixtures in the Presence of Noisy, Heterogeneous and Incomplete Data

Extreme-deconvolution is a general algorithm to infer a d-dimensional distribution function from a set of heterogeneous, noisy observations or samples. It is fast, flexible, and treats the data's individual uncertainties properly, to get the best description possible for the underlying distribution. It performs well over the full range of density estimation, from small data sets with only tens of samples per dimension, to large data sets with hundreds of thousands of data points.
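
A minimal sketch of the Python wrapper distributed with the code follows; the in-place update of the mixture parameters matches the documented interface, but treat the exact signature as an assumption:

    import numpy as np
    from extreme_deconvolution import extreme_deconvolution

    # 1-D noisy observations (N points), each with its own noise variance
    ydata = np.random.randn(1000, 1)
    ycovar = np.full((1000, 1), 0.25)

    # Initial guesses for a K=2 Gaussian mixture; these arrays are
    # updated in place by the algorithm
    xamp = np.array([0.5, 0.5])
    xmean = np.array([[-1.0], [1.0]])
    xcovar = np.array([[[1.0]], [[1.0]]])

    avgloglike = extreme_deconvolution(ydata, ycovar, xamp, xmean, xcovar)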

[ascl:1803.011] ExtLaw_H18: Extinction law code

ExtLaw_H18 generates the extinction law between 0.8 and 2.2 microns. The law is derived using the Westerlund 1 (Wd1) main sequence (A_Ks ~ 0.6 mag) and Arches cluster field Red Clump at the Galactic Center (A_Ks ~ 2.7 mag). To derive the law, a Wd1 cluster age of 5 Myr is assumed, though changing the cluster age between 4 and 7 Myr has no effect on the law. This extinction law can be applied to highly reddened stellar populations that have similar foreground material as Wd1 and the Arches RC, namely dust from the spiral arms of the Milky Way in the Galactic Plane.

[ascl:1708.025] extinction-distances: Estimating distances to dark clouds

Extinction-distances uses the number of foreground stars and a Galactic model of the stellar distribution to estimate the distance to dark clouds. It exploits the relatively narrow range of intrinsic near-infrared colors of stars to separate foreground from background stars. An advantage of this method is that the distribution of stellar colors in the Galactic model need not be precisely correct, only the number density as a function of distance from the Sun.

[ascl:9906.002] EXTINCT: A computerized model of large-scale visual interstellar extinction

The program EXTINCT.FOR is a FORTRAN subroutine summarizing a three-dimensional visual Galactic extinction model, based on a number of published studies. INPUTS: Galactic latitude (degrees), Galactic longitude (degrees), and source distance (kpc). OUTPUTS (magnitudes): Extinction, extinction error, a statistical correction term, and an array containing extinction and extinction error from each subroutine. The model is useful for correcting visual magnitudes of Galactic sources (particularly in statistical models), and has been used to find Galactic extinction of extragalactic sources. The model's limited angular resolution (subroutine-dependent, but with a minimum resolution of roughly 2 degrees) is necessitated by its ability to describe three-dimensional structure.

[ascl:1212.013] EXSdetect: Extended X-ray Source Detection

EXSdetect is a python implementation of an X-ray source detection algorithm optimally designed to detect faint extended sources; it makes use of Voronoi tessellation and the friends-of-friends technique. It is a flexible tool capable of detecting extended sources down to the lowest flux levels attainable within instrumental limitations while maintaining robust photometry, high completeness, and low contamination, regardless of source morphology. EXSdetect was developed mainly to exploit the ever-increasing wealth of archival X-ray data, but is also ideally suited to explore the scientific capabilities of future X-ray facilities, with a strong focus on investigations of distant groups and clusters of galaxies.

[ascl:1902.009] ExPRES: Exoplanetary and Planetary Radio Emissions Simulator

ExPRES (Exoplanetary and Planetary Radio Emission Simulator) reproduces the occurrence of CMI-generated radio emissions from planetary magnetospheres, exoplanets or star-planet interacting systems in the time-frequency plane, with special attention given to computation of the radio emission beaming at and near its source. Physical information drawn from such radio observations may include the location and dynamics of the radio sources, the type of current system leading to electron acceleration and their energy and, for exoplanetary systems, the magnetic field strength, the orbital period of the emitting body and the rotation period, tilt and offset of the planetary magnetic field. Most of these parameters can be remotely measured only via radio observations. The ExPRES code provides the proper framework of analysis and interpretation for past (Cassini, Voyager, Galileo), current (Juno, ground-based radio telescopes) and future (BepiColombo, Juice) observations of planetary radio emissions, as well as for future detection of radio emissions from exoplanetary systems.

[ascl:1706.001] Exotrending: Fast and easy-to-use light curve detrending software for exoplanets

The simple, straightforward Exotrending code detrends exoplanet transit light curves given a light curve (flux versus time) and good ephemeris (epoch of first transit and orbital period). The code has been tested with Kepler and K2 light curves and should work with any other light curve.

[ascl:1708.023] ExoSOFT: Exoplanet Simple Orbit Fitting Toolbox

ExoSOFT provides orbital analysis of exoplanets and binary star systems. It fits any combination of astrometric and radial velocity data, and offers four parameter space exploration techniques, including MCMC. It is packaged with an automated set of post-processing and plotting routines to summarize results, and is suitable for performing orbital analysis during surveys with new radial velocity and direct imaging instruments.

[ascl:1706.010] EXOSIMS: Exoplanet Open-Source Imaging Mission Simulator

EXOSIMS generates and analyzes end-to-end simulations of space-based exoplanet imaging missions. The software is built up of interconnecting modules describing different aspects of the mission, including the observatory, optical system, and scheduler (encoding mission rules) as well as the physical universe, including the assumed distribution of exoplanets and their physical and orbital properties. Each module has a prototype implementation that is inherited by specific implementations for different mission concepts, allowing for the simulation of widely variable missions.
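
A hedged sketch of the documented entry point follows; the sample script name is a placeholder and the attribute layout may differ between releases:

    import EXOSIMS.MissionSim

    # Build the mission object from a JSON script that selects and
    # configures every module (observatory, optical system, scheduler, ...)
    sim = EXOSIMS.MissionSim.MissionSim('sampleScript_coron.json')

    # Run one end-to-end survey simulation; the mission timeline is
    # accumulated in the Design Reference Mission (DRM) record
    sim.run_sim()
    drm = sim.SurveySimulation.DRM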

[ascl:1703.008] exorings: Exoring Transit Properties

Exorings is suitable for surveying entire catalogs of transiting planet candidates for exoring candidates, providing a subset of objects worthy of more detailed light curve analysis. Moreover, it is highly suited for uncovering evidence of a population of ringed planets by comparing the radius anomaly and PR-effects in ensemble studies.

[ascl:1501.012] Exorings: Exoring modelling software

Exorings, written in Python, contains tools for displaying and fitting giant extrasolar planet ring systems; it uses FITS formatted data for input.

[ascl:1603.010] ExoPriors: Accounting for observational bias of transiting exoplanets

ExoPriors calculates a log-likelihood penalty for an input set of transit parameters to account for observational bias (geometric and signal-to-noise ratio detection bias) of transiting exoplanets. Written in Python, the code calculates this log-likelihood penalty in one of seven cases specified by the user with Boolean input parameters for geometric and/or SNR bias, grazing or non-grazing events, and occultation events.
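
To make the geometric ingredient concrete, the sketch below shows the standard circular-orbit transit probability and a corresponding log-likelihood penalty; this illustrates the idea only and is not ExoPriors' actual seven-case logic:

    import numpy as np

    def ln_geometric_penalty(a_over_rstar, rp_over_rstar=0.0):
        """Illustrative de-biasing term: a randomly oriented circular orbit
        transits with probability ~ (Rstar + Rp) / a, so detected samples
        over-represent small semi-major axes; subtracting ln(p_transit)
        from the log-likelihood compensates."""
        p_transit = (1.0 + rp_over_rstar) / a_over_rstar
        return -np.log(p_transit)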

[ascl:1407.008] Exopop: Exoplanet population inference

Exopop is a general hierarchical probabilistic framework for making justified inferences about the population of exoplanets. Written in python, it requires that the occurrence rate density be a smooth function of period and radius (employing a Gaussian process) and takes survey completeness and observational uncertainties into account. Exopop produces more accurate estimates of the whole population than standard procedures based on weighting by inverse detection efficiency.

[ascl:1501.015] Exoplanet: Trans-dimensional MCMC method for exoplanet discovery

Exoplanet determines the posterior distribution of exoplanets by use of a trans-dimensional Markov Chain Monte Carlo method within Nested Sampling. This method finds the posterior distribution in a single run rather than requiring multiple runs with trial values.

[ascl:1910.005] exoplanet: Probabilistic modeling of transit or radial velocity observations of exoplanets

exoplanet is a toolkit for probabilistic modeling of transit and/or radial velocity observations of exoplanets and other astronomical time series using PyMC3 (ascl:1610.016), a flexible and high-performance model building language and inference engine. exoplanet extends PyMC3's language to support many of the custom functions and distributions required when fitting exoplanet datasets. These features include a fast and robust solver for Kepler's equation; scalable Gaussian processes using celerite (ascl:1709.008); and fast and accurate limb darkened light curves using the code starry (ascl:1810.005). It also offers common reparameterizations for limb darkening parameters, and planet radius and impact parameters.
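
A compact model-building sketch in the style of the exoplanet tutorials is given below; names such as QuadLimbDark and LimbDarkLightCurve follow the API documented around the time of this entry and should be verified against the installed version:

    import numpy as np
    import pymc3 as pm
    import exoplanet as xo

    t = np.linspace(0, 20, 500)   # observation times in days

    with pm.Model() as model:
        period = pm.Lognormal("period", mu=np.log(5.0), sigma=0.1)
        t0 = pm.Normal("t0", mu=2.0, sigma=0.5)
        r = pm.Uniform("r", lower=0.01, upper=0.2)   # planet/star radius ratio
        u = xo.distributions.QuadLimbDark("u")       # quadratic limb darkening

        # Keplerian orbit plus a limb-darkened transit light curve
        orbit = xo.orbits.KeplerianOrbit(period=period, t0=t0)
        lc = xo.LimbDarkLightCurve(u).get_light_curve(orbit=orbit, r=r, t=t)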

[submitted] ExoPlanet

ExoPlanet provides a graphical interface for the construction, evaluation and application of a machine learning model in predictive analysis. With the back-end built using the numpy and scikit-learn libraries, ExoPlanet couples fast and well tested algorithms, a UI designed over the PyQt framework, and graphs rendered using Matplotlib. This serves to provide the user with a rich interface, rapid analytics and interactive visuals.

ExoPlanet is designed to have a minimal learning curve to allow researchers to focus more on the applicative aspect of machine learning algorithms rather than their implementation details. It supports both modes of learning, providing algorithms for unsupervised and supervised training, which may be done with continuous or discrete labels. The parameters of each algorithm can be adjusted to ensure the best fit for the data. Training data is read from a CSV file, and after training is complete, ExoPlanet automates the building of the visual representations for the trained model. Once training and evaluation yield satisfactory results, the model may be used to make data-based predictions on a new data set.

[ascl:1806.020] exoinformatics: Compute the entropy of a planetary system's size-ordering

exoinformatics computes the entropy of a planetary system's size ordering using three different entropy methods: tally-scores, integral path, and change points.

[ascl:1812.007] ExoGAN: Exoplanets Generative Adversarial Network

ExoGAN (Exoplanets Generative Adversarial Network) analyzes exoplanetary atmospheres using an unsupervised deep-learning algorithm that recognizes molecular features, atmospheric trace-gas abundances, and planetary parameters. After training, ExoGAN can be applied to a large number of instruments and planetary types and can be used either as a final atmospheric analysis or to provide prior constraints to subsequent retrieval.

[ascl:1201.009] ExoFit: Orbital parameters of extra-solar planets from radial velocity

ExoFit is a freely available software package for estimating orbital parameters of extra-solar planets. ExoFit can search for either one or two planets and employs a Bayesian Markov Chain Monte Carlo (MCMC) method to fit a Keplerian radial velocity curve onto the radial velocity data.

[ascl:1710.003] EXOFASTv2: Generalized publication-quality exoplanet modeling code

EXOFASTv2 improves upon EXOFAST (ascl:1207.001) for exoplanet modeling. It uses a differential evolution Markov Chain Monte Carlo code to fit an arbitrary number of transits (each with their own error scaling, normalization, TTV, and/or detrending parameters), an arbitrary number of RV sources (each with their own zero point and jitter), and an arbitrary number of planets, changing nothing but command line arguments and configuration files. The global model includes integrated isochrone and SED models to constrain the stellar properties and can accept priors on any fitted or derived quantities (e.g., parallax from Gaia). It is easily extensible to add additional effects or parameters.

[ascl:1207.001] EXOFAST: Fast transit and/or RV fitter for single exoplanet

EXOFAST is a fast, robust suite of routines written in IDL which is designed to fit exoplanetary transits and radial velocity variations simultaneously or separately, and characterize the parameter uncertainties and covariances with a Differential Evolution Markov Chain Monte Carlo method. Our code self-consistently incorporates both data sets to simultaneously derive stellar parameters along with the transit and RV parameters, resulting in consistent, but tighter constraints on an example fit of the discovery data of HAT-P-3b that is well-mixed in under two minutes on a standard desktop computer. EXOFAST has an easy-to-use online interface for several basic features of our transit and radial velocity fitting. A more robust version of EXOFAST, EXOFASTv2 (ascl:1710.003), is also available.

[ascl:1512.011] ExoData: Open Exoplanet Catalogue exploration and analysis tool

ExoData is a python interface for accessing and exploring the Open Exoplanet Catalogue. It allows searching of planets (including alternate names) and easy navigation of hierarchy, parses spectral types and fills in missing parameters based on programmable specifications, and provides easy reference of planet parameters such as GJ1214b.ra, GJ1214b.T, and GJ1214b.R. It calculates values such as transit duration, can easily rescale units, and can be used as an input catalog for large scale simulation and analysis of planets.
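
The attribute-style access described above looks roughly like the following sketch; the catalogue path, the searchPlanet method, and the exact attribute names are assumptions based on the package README:

    import exodata

    # Point the interface at a local copy of the Open Exoplanet Catalogue
    exocat = exodata.OECDatabase('open_exoplanet_catalogue/systems/')

    planet = exocat.searchPlanet('GJ1214b')
    print(planet.R, planet.T)            # radius and temperature with units
    print(planet.calcTransitDuration())  # derived quantity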

[ascl:1803.014] ExoCross: Spectra from molecular line lists

ExoCross generates spectra and thermodynamic properties from molecular line lists in ExoMol, HITRAN, or several other formats. The code is parallelized and also shows a high degree of vectorization; it works with line profiles such as Doppler, Lorentzian and Voigt and supports several broadening schemes. ExoCross is also capable of working with the recently proposed method of super-lines. It supports calculations of lifetimes, cooling functions, specific heats and other properties. ExoCross converts between different formats, such as HITRAN, ExoMol and Phoenix, and simulates non-LTE spectra using a simple two-temperature approach. Different electronic, vibronic or vibrational bands can be simulated separately using an efficient filtering scheme based on the quantum numbers.

[ascl:1805.007] exocartographer: Constraining surface maps and orbital parameters of exoplanets

exocartographer solves the exo-cartography inverse problem. This flexible forward-modeling framework, written in Python, retrieves the albedo map and spin geometry of a planet based on time-resolved photometry; it uses a Markov chain Monte Carlo method to extract albedo maps and planet spin and their uncertainties. Gaussian Processes use the data to fit for the characteristic length scale of the map and enforce smooth maps.

[ascl:1611.005] Exo-Transmit: Radiative transfer code for calculating exoplanet transmission spectra

Exo-Transmit calculates the transmission spectrum of an exoplanet atmosphere given specified input information about the planetary and stellar radii, the planet's surface gravity, the atmospheric temperature-pressure (T-P) profile, the location (in terms of pressure) of any cloud layers, the composition of the atmosphere, and opacity data for the atoms and molecules that make up the atmosphere. The code solves the equation of radiative transfer for absorption of starlight passing through the planet's atmosphere as it transits, accounting for the oblique path of light through the planetary atmosphere along an Earth-bound observer's line of sight. The fraction of light absorbed (or blocked) by the planet plus its atmosphere is calculated as a function of wavelength to produce the wavelength-dependent transmission spectrum. Functionality is provided to simulate the presence of atmospheric aerosols in two ways: an optically thick (gray) cloud deck can be generated at a user-specified height in the atmosphere, and the nominal Rayleigh scattering can be increased by a specified factor.

[ascl:1806.029] EXO-NAILER: EXOplanet traNsits and rAdIal veLocity fittER

EXO-NAILER (EXOplanet traNsits and rAdIal veLocity fittER) efficiently fits exoplanet transit lightcurves, radial velocities (RVs) or both. The code handles data taken with different instruments. For RVs, a different center-of-mass velocity can be fitted for each instrument to account for offsets between them; if jitter is included, a different jitter term can also be fitted for each instrument. For transits, a different photometric jitter can be fitted to each instrument, as can different limb-darkening coefficients and different transit depths. In addition to general options that need to be set, EXO-NAILER also requires that photometry and radial velocity options be defined for each instrument.

[ascl:1204.011] EXCOP: EXtraction of COsmological Parameters

The EXtraction of COsmological Parameters software (EXCOP) is a set of C and IDL programs, together with a very large database of cosmological models generated by CMBFAST, that will compute likelihood functions for cosmological parameters given some CMB data. This is the software and database used in the Stompor et al. (2001) analysis of a high-resolution Maxima1 CMB anisotropy map.

[ascl:1905.003] evolstate: Assign simple evolutionary states to stars

evolstate assigns crude evolutionary states (main-sequence, subgiant, red giant) to stars given an input temperature and radius/surface gravity, based on physically motivated boundaries from solar metallicity interior models.

[ascl:1807.029] EVEREST: Tools for de-trending stellar photometry

EVEREST (EPIC Variability Extraction and Removal for Exoplanet Science Targets) removes instrumental noise from light curves with pixel level decorrelation and Gaussian processes. The code, written in Python, generates the EVEREST catalog and offers tools for accessing and interacting with the de-trended light curves. EVEREST exploits correlations across the pixels on the CCD to remove systematics introduced by the spacecraft’s pointing error. For K2, it yields light curves with precision comparable to that of the original Kepler mission. Interaction with the EVEREST catalog is available via the command line and through the Python interface. Though written for K2, EVEREST can be applied to additional surveys, such as the TESS mission, to correct for instrumental systematics and enable the detection of low signal-to-noise transiting exoplanets.
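
Access to the catalog from Python looks roughly like the sketch below; the EPIC ID is an arbitrary example and the attribute names are taken from the package documentation, so treat them as assumptions:

    import everest

    # Download the de-trended light curve for a K2 target by EPIC ID
    star = everest.Everest(201367065)

    # Plot raw versus de-trended flux, then pull out the arrays
    star.plot()
    t, flux = star.time, star.flux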

[ascl:1307.018] ETC++: Advanced Exposure-Time Calculations

ETC++ is an exposure-time calculator that considers the effects of cosmic rays, undersampling, dithering, and imperfect pixel response functions. Errors on astrometry and galaxy shape measurements can be predicted, as well as photometric errors.

[ascl:1311.012] ETC: Exposure Time Calculator

Written for the Wide-Field Infrared Survey Telescope (WFIRST) high-latitude survey, the exposure time calculator (ETC) works in both imaging and spectroscopic modes. In addition to the standard ETC functions (e.g. background and S/N determination), the calculator integrates over the galaxy population and forecasts the density and redshift distribution of galaxy shapes usable for weak lensing (in imaging mode) and the detected emission lines (in spectroscopic mode). The program may be useful outside of WFIRST but no warranties are made regarding its suitability for general purposes. The software is available for download; IPAC maintains a web interface for those who wish to run a small number of cases without having to download the package.

[ascl:1305.001] ESTER: Evolution STEllaire en Rotation

The ESTER code computes the steady state of an isolated star of mass larger than two solar masses. The only convective region computed as such is the core, where isentropy is assumed. ESTER provides solutions of the partial differential equations for the pressure, density, temperature, angular velocity and meridional velocity over the whole volume. The angular velocity (differential rotation) and meridional circulation are computed consistently with the structure and are driven by the baroclinic torque. The code uses spectral methods, both radially and horizontally, with spherical harmonics and Chebyshev polynomials. The iterations follow Newton's algorithm. The code is object-oriented and written in C++; a python suite allows easy visualization of the results. While running, PGPLOT graphs are displayed to show the evolution of the iterations.

[ascl:1405.017] ESP: Extended Surface Photometry

ESP (Extended Surface Photometry) determines the photometric properties of galaxies and other extended objects. It has applications that detect flatfielding faults, remove cosmic rays, median filter images, determine image statistics and local background values, perform galaxy profiling, fit 2-D Gaussian profiles to galaxies, generate pie slice cross-sections of galaxies, and display profiling results. It is distributed as part of the Starlink software collection (ascl:1110.012).

[ascl:1504.003] EsoRex: ESO Recipe Execution Tool

EsoRex (ESO Recipe Execution Tool) lists, configures, and executes Common Pipeline Library (CPL) (ascl:1402.010) recipes from the command line. Its features include automatically generating configuration files, recursive recipe-path searching, command line and configuration file parameters, and recipe product naming control, among many others.

[ascl:1302.017] ESO-MIDAS: General tools for image processing and data reduction

The ESO-MIDAS system provides general tools for image processing and data reduction with emphasis on astronomical applications including imaging and special reduction packages for ESO instrumentation at La Silla and the VLT at Paranal. In addition it contains applications packages for stellar and surface photometry, image sharpening and decomposition, statistics, data fitting, data presentation in graphical form, and more.

[ascl:1603.005] EQUIB: Atomic level populations and line emissivities calculator

The Fortran program EQUIB solves the statistical equilibrium equation for each ion and yields atomic level populations and line emissivities for given physical conditions, namely electron temperature and electron density, appropriate to the zones in an ionized nebula where the ions are expected to exist.

[ascl:1802.016] eqpair: Electron energy distribution calculator

eqpair computes the electron energy distribution resulting from a balance between heating and direct acceleration of particles, and cooling processes. Electron-positron pair balance, bremsstrahlung, and Compton cooling, including external soft photon input, are among the processes considered, and the final electron distribution can be hybrid, thermal, or non-thermal.

[ascl:1204.017] epsnoise: Pixel noise in ellipticity and shear measurements

epsnoise simulates pixel noise in weak-lensing ellipticity and shear measurements. This open-source python code can efficiently create an intrinsic ellipticity distribution, shear it, and add noise, thereby mimicking a "perfect" measurement that is not affected by shape-measurement biases. For theoretical studies, we provide the Marsaglia distribution, which describes the ratio of normal variables in the general case of non-zero mean and correlation. We also added a convenience method that evaluates the Marsaglia distribution for the ratio of moments of a Gaussian-shaped brightness distribution, which gives a very good approximation of the measured ellipticity distribution also for galaxies with different radial profiles. We provide four shear estimators, two based on the ε ellipticity measure, two on χ. While three of them are essentially plain averages, we introduce a new estimator which requires a functional minimization.

[ascl:1909.013] EPOS: Exoplanet Population Observation Simulator

EPOS (Exoplanet Population Observation Simulator) simulates observations of exoplanet populations. It provides an interface between planet formation simulations and exoplanet surveys such as Kepler. EPOS can also be used to estimate planet occurrence rates and the orbital architectures of planetary systems.

[ascl:1302.005] EPICS: Experimental Physics and Industrial Control System

EPICS is a set of software tools and applications developed collaboratively and used to create distributed soft real-time control systems for scientific instruments such as particle accelerators and telescopes. Such distributed control systems typically comprise tens or even hundreds of computers, networked together to allow communication between them and to provide control and feedback of the various parts of the device from a central control room, or even remotely over the internet. EPICS uses Client/Server and Publish/Subscribe techniques to communicate between the various computers. A Channel Access Gateway allows engineers and physicists elsewhere in the building to examine the current state of the IOCs, but prevents them from making unauthorized adjustments to the running system. In many cases the engineers can make a secure internet connection from home to diagnose and fix faults without having to travel to the site.

EPICS is used by many facilities worldwide, including the Advanced Photon Source at Argonne National Laboratory, Fermilab, Keck Observatory, Laboratori Nazionali di Legnaro, the Brazilian Synchrotron Light Source, Los Alamos National Laboratory, the Australian Synchrotron, and the Stanford Linear Accelerator Center.

[ascl:1511.021] EPIC: E-field Parallel Imaging Correlator

E-field Parallel Imaging Correlator (EPIC), a highly parallelized Object Oriented Python package, implements the Modular Optimal Frequency Fourier (MOFF) imaging technique. It also includes visibility-based imaging using the software holography technique and a simulator for generating electric fields from a sky model. EPIC can accept dual-polarization inputs and produce images of all four instrumental cross-polarizations.

[ascl:1010.072] Enzo: AMR Cosmology Application

Enzo is an adaptive mesh refinement (AMR), grid-based hybrid code (hydro + N-Body) which is designed to do simulations of cosmological structure formation. It uses the algorithms of Berger & Colella to improve spatial and temporal resolution in regions of large gradients, such as gravitationally collapsing objects. The Enzo simulation software is incredibly flexible, and can be used to simulate a wide range of cosmological situations with the available physics packages.

Enzo has been parallelized using the MPI message-passing library and can run on any shared or distributed memory parallel supercomputer or PC cluster. Simulations using as many as 1024 processors have been successfully carried out on the San Diego Supercomputing Center's Blue Horizon, an IBM SP.

[ascl:1501.008] Enrico: Python package to simplify Fermi-LAT analysis

Enrico analyzes Fermi data. It produces spectra (model fit and flux points), maps and lightcurves for a target by editing a config file and running a python script which executes the Fermi science tool chain.

[ascl:1706.007] encube: Large-scale comparative visualization and analysis of sets of multidimensional data

Encube is a qualitative, quantitative and comparative visualization and analysis framework, with application to high-resolution, immersive three-dimensional environments and desktop displays, providing a capable visual analytics experience across the display ecology. Encube includes mechanisms for the support of: 1) interactive visual analytics of sufficiently large subsets of data; 2) synchronous and asynchronous collaboration; and 3) documentation of the discovery workflow. The framework is modular, allowing additional functionalities to be included as required.

[ascl:1109.012] EnBiD: Fast Multi-dimensional Density Estimation

We present a method to numerically estimate the densities of discretely sampled data based on a binary space partitioning tree. We start with a root node containing all the particles and then recursively divide each node into two nodes, each containing roughly equal numbers of particles, until each of the nodes contains only one particle. The volume of such a leaf node provides an estimate of the local density and its shape provides an estimate of the variance. We implement an entropy-based node splitting criterion that results in a significant improvement in the estimation of densities compared to earlier work. The method is completely metric free and can be applied to an arbitrary number of dimensions. We use this method to determine the appropriate metric at each point in space and then use kernel-based methods for calculating the density. The kernel-smoothed estimates were found to be more accurate and have lower dispersion. We apply this method to determine the phase-space densities of dark matter haloes obtained from cosmological N-body simulations. We find that, contrary to earlier studies, the volume distribution function v(f) of phase-space density f does not have a constant slope but rather a small hump at high phase-space densities. We demonstrate that a model in which a halo is made up of a superposition of Hernquist spheres is not capable of explaining the shape of the v(f) versus f relation, whereas a model which takes into account the contribution of the main halo separately roughly reproduces the behaviour seen in simulations. The use of the presented method is not limited to the calculation of phase-space densities; it can be used as a general purpose data-mining tool, and due to its speed and accuracy it is ideally suited for analysis of large multidimensional data sets.
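
The core idea, recursive binary splits down to one particle per leaf with density estimated from the leaf volume, can be illustrated with the short Python sketch below. It uses a simple median split along the widest dimension rather than EnBiD's entropy-based splitting criterion, so it is a toy version of the approach, not the EnBiD code:

    import numpy as np

    def bsp_density(points):
        """Toy binary-space-partition density estimate: split on the median
        along the widest dimension until one particle per leaf, then take
        density ~ 1 / (leaf volume)."""
        n, _ = points.shape
        dens = np.empty(n)

        def split(idx, lo, hi):
            if len(idx) == 1:
                dens[idx[0]] = 1.0 / np.prod(hi - lo)
                return
            dim = int(np.argmax(hi - lo))                # widest dimension
            order = idx[np.argsort(points[idx, dim])]
            half = len(order) // 2
            cut = points[order[half], dim]
            hi_left, lo_right = hi.copy(), lo.copy()
            hi_left[dim] = cut
            lo_right[dim] = cut
            split(order[:half], lo, hi_left)             # lower half-space
            split(order[half:], lo_right, hi)            # upper half-space

        # Pad the outer bounds slightly to avoid zero-volume boundary leaves
        span = points.ptp(0) + 1e-12
        split(np.arange(n), points.min(0) - 1e-6 * span,
              points.max(0) + 1e-6 * span)
        return dens

    dens = bsp_density(np.random.randn(1000, 3))   # example: 3-D Gaussian cloud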

[ascl:1010.018] Emu CMB: Power spectrum emulator

Emu CMB is a fast emulator of the CMB temperature power spectrum based on CAMB (Jan 2010 version). Emu CMB is based on a "space-filling" Orthogonal Array Latin Hypercube design in a de-correlated parameter space obtained by using a fiducial WMAP5 CMB Fisher matrix as a rotation matrix. This design strategy allows for accurate interpolation with small numbers of simulation design points. The emulator presented here is calibrated with 100 CAMB runs that are interpolated over the design space using a global quadratic polynomial fit.

[ascl:1708.027] empiriciSN: Supernova parameter generator

empiriciSN generates realistic supernova parameters given photometric observations of a potential host galaxy, based entirely on empirical correlations measured from supernova datasets. It is intended to be used to improve supernova simulation for DES and LSST. It is extendable such that additional datasets may be added in the future to improve the fitting algorithm or so that additional light curve parameters or supernova types may be fit.

[ascl:1201.004] emGain: Determination of EM gain of CCD

The determination of the EM gain of the CCD is best done by fitting the histogram of many low-light frames. Typically, the dark+CIC noise of a 30 ms frame is itself a sufficient amount of signal to determine the EM gain accurately with about 200 512x512 frames. The IDL code emGain takes as input a cube of frames and fits the histogram of all the pixels with the EM stage output probability function. The function returns the EM gain of the frames as well as the read-out noise and the mean signal level of the frames.

[ascl:1910.006] EMERGE: Empirical ModEl for the foRmation of GalaxiEs

Emerge (Empirical ModEl for the foRmation of GalaxiEs) populates dark matter halo merger trees with galaxies using simple empirical relations between galaxy and halo properties. For each model represented by a set of parameters, it computes a mock universe, which it then compares to observed statistical data to obtain a likelihood. Parameter space can be explored with several advanced stochastic algorithms such as MCMC to find the models that are in agreement with the observations.

[ascl:1303.002] emcee: The MCMC Hammer

emcee is an extensible, pure-Python implementation of Goodman & Weare's Affine Invariant Markov chain Monte Carlo (MCMC) Ensemble sampler. It's designed for Bayesian parameter estimation. The algorithm behind emcee has several advantages over traditional MCMC sampling methods and has excellent performance as measured by the autocorrelation time (or function calls per independent sample). One advantage of the algorithm is that it requires hand-tuning of only 1 or 2 parameters compared to $\sim N^2$ for a traditional algorithm in an N-dimensional parameter space. Exploiting the parallelism of the ensemble method, emcee permits any user to take advantage of multiple CPU cores without extra effort.
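
Usage is compact; a minimal sketch for sampling a 3-D Gaussian, with method names per the emcee v3 documentation, is:

    import numpy as np
    import emcee

    def log_prob(theta):
        # log-probability of an isotropic unit Gaussian
        return -0.5 * np.sum(theta ** 2)

    ndim, nwalkers = 3, 32
    p0 = np.random.randn(nwalkers, ndim)       # initial walker positions

    sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob)
    sampler.run_mcmc(p0, 2000)
    samples = sampler.get_chain(discard=200, flat=True)   # post burn-in samples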

[ascl:1203.006] EMACSS: Evolve Me A Cluster of StarS

The star cluster evolution code Evolve Me A Cluster of StarS (EMACSS) is a simple yet physically motivated computational model that describes the evolution of some fundamental properties of star clusters in static tidal fields. The prescription is based upon the flow of energy within the cluster, which is a constant fraction of the total energy per half-mass relaxation time. According to Hénon's predictions, this flow is independent of the precise mechanisms for energy production within the core, and therefore does not require a complete description of the many-body interactions therein. Dynamical theory and analytic descriptions of escape mechanisms are used to construct a series of coupled differential equations expressing the time evolution of cluster mass and radius for a cluster of equal-mass stars. These equations are numerically solved using a fourth-order Runge-Kutta integration kernel; the results were benchmarked against a database of direct N-body simulations. EMACSS is publicly available and reproduces the N-body results to within ~10 per cent accuracy for the entire post-collapse evolution of star clusters.

[ascl:1106.024] ELMAG: Simulation of Electromagnetic Cascades

A Monte Carlo program for the simulation of electromagnetic cascades initiated by high-energy photons and electrons interacting with extragalactic background light (EBL) is presented. Pair production and inverse Compton scattering on EBL photons as well as synchrotron losses and deflections of the charged component in extragalactic magnetic fields (EGMF) are included in the simulation. Weighted sampling of the cascade development is applied to reduce the number of secondary particles and to speed up computations. As final result, the simulation procedure provides the energy, the observation angle, and the time delay of secondary cascade particles at the present epoch. Possible applications are the study of TeV blazars and the influence of the EGMF on their spectra or the calculation of the contribution from ultrahigh energy cosmic rays or dark matter to the diffuse extragalactic gamma-ray background. As an illustration, we present results for deflections and time-delays relevant for the derivation of limits on the EGMF.

[ascl:1603.016] ellc: Light curve model for eclipsing binary stars and transiting exoplanets

ellc analyzes the light curves of detached eclipsing binary stars and transiting exoplanet systems. The model represents stars as triaxial ellipsoids, and the apparent flux from the binary is calculated using Gauss-Legendre integration over the ellipses that are the projection of these ellipsoids on the sky. The code can also calculate the flux-weighted radial velocity of the stars during an eclipse (Rossiter-McLaughlin effect). ellc can model a wide range of eclipsing binary stars and extrasolar planetary systems, and can enable the use of modern Monte Carlo methods for data analysis and model testing.

[ascl:1904.022] eleanor: Extracted and systematics-corrected light curves for TESS-observed stars

eleanor extracts target pixel files from TESS Full Frame Images and produces systematics-corrected light curves for any star observed by the TESS mission. eleanor takes a TIC ID, a Gaia source ID, or (RA, Dec) coordinates of a star observed by TESS and returns, as a single object, a light curve and accompanying target pixel data. The process can be customized, allowing, for example, examination of intermediate data products and changing the aperture used for light curve extraction. eleanor also offers tools that make it easier to work with stars observed in multiple TESS sectors.
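
A minimal extraction sketch follows, with an arbitrary TIC ID and attribute names following the eleanor documentation (treat both as assumptions):

    import matplotlib.pyplot as plt
    import eleanor

    # Locate a target observed in Sector 1 and build its target pixel data
    star = eleanor.Source(tic=38846515, sector=1)
    data = eleanor.TargetData(star)

    # Keep good-quality cadences and plot the systematics-corrected flux
    q = data.quality == 0
    plt.plot(data.time[q], data.corr_flux[q])
    plt.show()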

[ascl:1102.014] Einstein Toolkit for Relativistic Astrophysics

The Einstein Toolkit is a collection of software components and tools for simulating and analyzing general relativistic astrophysical systems. Such systems include gravitational wave space-times, collisions of compact objects such as black holes or neutron stars, accretion onto compact objects, core collapse supernovae and Gamma-Ray Bursts.

The Einstein Toolkit builds on numerous software efforts in the numerical relativity community including CactusEinstein, Whisky, and Carpet. The Einstein Toolkit currently uses the Cactus Framework as the underlying computational infrastructure that provides large-scale parallelization, general computational components, and a model for collaborative, portable code development.

[ascl:1904.013] EightBitTransit: Calculate light curves from pixel grids

EightBitTransit calculates the light curve of any pixelated image transiting a star and inverts a light curve to recover the "shadow image" that produced it.

[ascl:1904.004] ehtim: Imaging, analysis, and simulation software for radio interferometry

ehtim (eht-imaging) simulates and manipulates VLBI data and produces images with regularized maximum likelihood methods. The package contains several primary classes for loading, simulating, and manipulating VLBI data. The main classes are the Image, Array, Obsdata, Imager, and Caltable classes, which provide tools for loading images and data, producing simulated data from realistic u-v tracks, calibrating, inspecting, and plotting data, and producing images from data sets in various polarizations using various data terms and regularizers.
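
A short data-handling sketch in the style of the ehtim tutorials is shown below; the file names are placeholders, and method names such as load_uvfits and observe_same follow the documented API but should be verified:

    import ehtim as eh

    # Load interferometric data into an Obsdata object and inspect it
    obs = eh.obsdata.load_uvfits('obs.uvfits')
    obs.plotall('uvdist', 'amp')          # visibility amplitude vs. baseline

    # Load a trial image and generate simulated data on the same u-v track
    im = eh.image.load_fits('model.fits')
    obs_sim = im.observe_same(obs)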

[ascl:1804.008] EGG: Empirical Galaxy Generator

The Empirical Galaxy Generator (EGG) generates fake galaxy catalogs and images with realistic positions, morphologies and fluxes from the far-ultraviolet to the far-infrared. The catalogs are generated by egg-gencat and stored in binary FITS tables (column oriented). Another program, egg-2skymaker, is used to convert the generated catalog into ASCII tables suitable for ingestion by SkyMaker (ascl:1010.066) to produce realistic high resolution images (e.g., Hubble-like), while egg-gennoise and egg-genmap can be used to generate the low resolution images (e.g., Herschel-like). These tools can be used to test source extraction codes, or to evaluate the reliability of any map-based science (stacking, dropout identification, etc.).

[ascl:1512.004] EDRSX: Extensions to the EDRS package

EDRSX extends the Electronography Data Reduction System (EDRS, ascl:1512.003). It makes possible more versatile analysis of IRAS images than was otherwise available. EDRSX provides facilities for converting images into and out of EDRS format, accesses RA and DEC information stored with IRAS images, and performs several standard image processing operations such as displaying image histograms and statistics and computing Fourier transforms. This enables operations such as estimation and subtraction of non-linear backgrounds, de-striping of IRAS images, modelling of image features, and easy aligning of separate images, among others.

[ascl:1512.003] EDRS: Electronography Data Reduction System

The Electronography Data Reduction System (EDRS) reduces and analyzes large format astronomical images and was written to be used from within ASPIC (ascl:1510.006). In its original form it specialized in the reduction of electronographic data but was built around a set of utility programs which were widely applicable to astronomical images from other sources. The programs align and calibrate images, handle lists of (X,Y) positions, apply linear geometrical transformations and do some stellar photometry. This package is now obsolete.

[ascl:1901.010] eddy: Extracting Disk DYnamics

The Python suite eddy recovers precise rotation profiles of protoplanetary disks from Doppler-shifted line emission, providing an easy way to fit first-moment maps and infer a rotation velocity from an annulus of spectra.

[ascl:1112.001] Eclipse: ESO C Library for an Image Processing Software Environment

Written in ANSI C, eclipse is a library offering numerous services related to astronomical image processing: FITS data access, various image and cube loading methods, binary image handling and filtering (including convolution and morphological filters), 2-D cross-correlation, connected components, cube and image arithmetic, dead pixel detection and correction, object detection, data extraction, flat-fielding with robust fit, image generation, statistics, photometry, image-space resampling, image combination, and cube stacking. It also contains support for mathematical tools like random number generation, FFT, curve fitting, matrices, fast median computation, and point-pattern matching. The main feature of this library is its ability to handle large amounts of input data (up to 2GB in the current version) regardless of the amount of memory and swap available on the local machine. Another feature is the very high speed allowed by optimized C, making it an ideal base tool for programming efficient number-crunching applications, e.g., on parallel (Beowulf) systems.

[ascl:1910.008] ECLIPS3D: Linear wave and circulation calculations

ECLIPS3D (Eigenvectors, Circulation, and Linear Instabilities for Planetary Science in 3 Dimensions) calculates a posteriori energy equations for the study of linear processes in planetary atmospheres with an arbitrary steady state, and provides both increased robustness and physical meaning to the obtained eigenmodes. It was developed originally for planetary atmospheres and includes python scripts for data analysis. ECLIPS3D can be used to study the initial spin up of superrotation of GCM simulations of hot Jupiters in addition to being applied to other problems.

[ascl:1810.011] Eclairs: Efficient Codes for the LArge scales of the unIveRSe

Eclairs calculates the matter power spectrum based on standard perturbation theory and regularized perturbation theory. The codes are written in C++ with a Python wrapper designed to be easily combined with MCMC samplers.

[ascl:1405.018] ECHOMOP: Echelle data reduction package

ECHOMOP extracts spectra from 2-D data frames. These data can be single-order spectra or multi-order echelle spectra. A substantial degree of automation is provided, particularly in the traditionally manual functions for cosmic-ray detection and wavelength calibration; manual overrides are available. Features include robust and flexible order tracing, optimal extraction, support for variance arrays, and 2-D distortion fitting and extraction. ECHOMOP is distributed as part of the Starlink software collection (ascl:1110.012).

[ascl:1810.006] Echelle++: Generic spectrum simulator

Echelle++ simulates realistic raw spectra based on the Zemax model of any spectrograph, with a particular emphasis on cross-dispersed Echelle spectrographs. The code generates realistic spectra of astronomical and calibration sources, with accurate representation of optical aberrations, the shape of the point spread function, detector characteristics, and photon noise. It produces high-fidelity spectra fast, an important feature when testing data reduction pipelines with a large set of different input spectra, when making critical choices about order spacing in the design phase of the instrument, or while aligning the spectrograph during construction. Echelle++ also works with low resolution, low signal to noise, multi-object, IFU, or long slit spectra, for simulating a wide array of spectrographs.

[ascl:1411.017] ECCSAMPLES: Bayesian Priors for Orbital Eccentricity

ECCSAMPLES solves the inverse cumulative density function (CDF) of a Beta distribution, sometimes called the IDF or inverse transform sampling. This allows one to sample from the relevant priors directly. ECCSAMPLES actually provides joint samples for both the eccentricity and the argument of periastron, since for transiting systems they display non-zero covariance.
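
The underlying inverse-transform idea is easy to state in a few lines of Python; the sketch below is a conceptual illustration using scipy's Beta quantile function, with shape parameters from Kipping (2013) as an example. It is not the ECCSAMPLES Fortran routine itself, which additionally handles the transit conditioning and the joint (e, omega) sampling:

    import numpy as np
    from scipy.stats import beta

    a, b = 0.867, 3.03          # example Beta shape parameters (Kipping 2013)

    u = np.random.rand(10000)   # uniform draws on [0, 1]
    e = beta.ppf(u, a, b)       # inverse CDF maps them to Beta-distributed e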

[ascl:1203.007] EBTEL: Enthalpy-Based Thermal Evolution of Loops

Observational and theoretical evidence suggests that coronal heating is impulsive and occurs on very small cross-field spatial scales. A single coronal loop could contain a hundred or more individual strands that are heated quasi-independently by nanoflares. It is therefore an enormous undertaking to model an entire active region or the global corona. Three-dimensional MHD codes have inadequate spatial resolution, and 1D hydro codes are too slow to simulate the many thousands of elemental strands that must be treated in a reasonable representation. Fortunately, thermal conduction and flows tend to smooth out plasma gradients along the magnetic field, so "0D models" are an acceptable alternative. We have developed a highly efficient model called Enthalpy-Based Thermal Evolution of Loops (EBTEL) that accurately describes the evolution of the average temperature, pressure, and density along a coronal strand. It improves significantly upon earlier models of this type--in accuracy, flexibility, and capability. It treats both slowly varying and highly impulsive coronal heating; it provides the differential emission measure distribution, DEM(T), at the transition region footpoints; and there are options for heat flux saturation and nonthermal electron beam heating. EBTEL gives excellent agreement with far more sophisticated 1D hydro simulations despite using four orders of magnitude less computing time. It promises to be a powerful new tool for solar and stellar studies.

[ascl:1909.007] EBHLIGHT: General relativistic radiation magnetohydrodynamics with Monte Carlo transport

EBHLIGHT solves the equations of general relativistic radiation magnetohydrodynamics in stationary spacetimes. Fluid integration is performed with the second order shock-capturing scheme HARM (ascl:1209.005) and frequency-dependent radiation transport is performed with the second order Monte Carlo code grmonty (ascl:1306.002). Fluid and radiation exchange four-momentum in an explicit first-order operator-split fashion.

[ascl:1908.018] EBAI: Eclipsing Binaries with Artificial Intelligence

Eclipsing Binaries via Artificial Intelligence (EBAI) automates the process of solving light curves of eclipsing binary stars. EBAI is based on the back-propagating neural network paradigm and is highly flexible in construction of neural networks. EBAI comes in two flavors, serial (ebai) and multi-processor (ebai.mpi), and can be run in training, continued training, and recognition mode.

[ascl:1010.052] EAZY: A Fast, Public Photometric Redshift Code

EAZY, Easy and Accurate Zphot from Yale, determines photometric redshifts. The program is optimized for cases where spectroscopic redshifts are not available, or only available for a biased subset of the galaxies. The code combines features from various existing codes: it can fit linear combinations of templates, it includes optional flux- and redshift-based priors, and its user interface is modeled on the popular HYPERZ (ascl:1108.010) code. The default template set, as well as the default functional forms of the priors, are not based on (usually highly biased) spectroscopic samples, but on semi-analytical models. Furthermore, template mismatch is addressed by a novel rest-frame template error function. This function gives different wavelength regions different weights, and ensures that the formal redshift uncertainties are realistic. A redshift quality parameter, Q_z, provides a robust estimate of the reliability of the photometric redshift estimate.

[ascl:1011.013] EasyLTB: Code for Testing LTB Models against Cosmology (Confronting Lemaitre-Tolman-Bondi Models with Observational Cosmology)

The possibility that we live in a special place in the universe, close to the centre of a large void, seems an appealing alternative to the prevailing interpretation of the acceleration of the universe in terms of a LCDM model with a dominant dark energy component. In this paper we confront the asymptotically flat Lemaitre-Tolman-Bondi (LTB) models with a series of observations, from Type Ia Supernovae to Cosmic Microwave Background and Baryon Acoustic Oscillations data. We propose two concrete LTB models describing a local void in which the only arbitrary functions are the radial dependence of the matter density Omega_M and the Hubble expansion rate H. We find that all observations can be accommodated within 1 sigma, for our models with 4 or 5 independent parameters. The best fit models have a chi^2 very close to that of the LCDM model. We perform a simple Bayesian analysis and show that one cannot exclude the hypothesis that we live within a large local void of an otherwise Einstein-de Sitter model.

[ascl:1812.008] easyaccess: SQL command line interpreter for astronomical surveys

easyaccess facilitates access to astronomical catalogs stored in SQL databases. It is an enhanced command line interpreter that provides a custom interface with custom commands, designed specifically to access data from the Dark Energy Survey Oracle database. Its features include autocompletion of tables, columns, users and commands; simple ways to upload and download tables using CSV, FITS and HDF5 formats; iterators; and search and description of tables, among others. It can easily be extended to other surveys or SQL databases. The package is written in Python and supports customized addition of commands and functionalities.

[ascl:1612.010] Earthshine simulator: Idealized images of the Moon

Terrestrial albedo can be determined from observations of the relative intensity of earthshine. Images of the Moon at different lunar phases can be analyzed to derive the semi-hemispheric mean albedo of the Earth, and an important tool for doing this is simulation of the appearance of the Moon at any given time. This software produces idealized images of the Moon for arbitrary times. It takes into account the libration of the Moon and the distances between the Sun, Moon and Earth, as well as the relevant geometry. The images of the Moon are produced as FITS files. User input includes setting the Julian Day of the simulation. Defaults for image size and field of view are set to produce approximately 1x1 degree images with the Moon in the middle, as seen from an observatory on Earth, currently set to Mauna Loa.

[ascl:1611.012] EarthShadow: Calculator for dark matter particle velocity distribution after Earth-scattering

EarthShadow calculates the impact of Earth-scattering on the distribution of Dark Matter (DM) particles. The code calculates the speed and velocity distributions of DM at various positions on the Earth and also helps with the calculation of the average scattering probabilities. Tabulated data for DM-nuclear scattering cross sections and various numerical results, plots and animations are also included in the code package.

[ascl:1805.004] EARL: Exoplanet Analytic Reflected Lightcurves package

EARL (Exoplanet Analytic Reflected Lightcurves) computes the analytic form of a reflected lightcurve, given a spherical harmonic decomposition of the planet albedo map and the viewing and orbital geometries. The EARL Mathematica notebook allows rapid computation of reflected lightcurves, thus making lightcurve numerical experiments accessible.

[ascl:1106.004] E3D: The Euro3D Visualization Tool

E3D is a package of tools for the analysis and visualization of IFS data. It is capable of reading, writing, and visualizing reduced data from 3D spectrographs of any kind.

[ascl:1910.013] E0102-VR: Virtual Reality application to visualize the optical ejecta in SNR 1E 0102.2-7219

E0102-VR facilitates the characterization of the 3D structure of the oxygen-rich optical ejecta in the young supernova remnant 1E 0102.2-7219 in the Small Magellanic Cloud. This room-scale Virtual Reality application written for the HTC Vive contributes to the exploration of the scientific potential of this technology for the field of observational astrophysics.

[ascl:1407.017] e-MERLIN data reduction pipeline

Written in Python and utilizing ParselTongue (ascl:1208.020) to interface with AIPS (ascl:9911.003), the e-MERLIN data reduction pipeline processes, calibrates and images data from the UK's radio interferometric array (Multi-Element Remote-Linked Interferometer Network). Driven by a plain text input file, the pipeline is modular and can be run in stages. The software includes options to load raw data, average in time and/or frequency, flag known sources of interference, flag more comprehensively with SERPent (ascl:1312.001), carry out some or all of the calibration procedures (including self-calibration), and image in either normal or wide-field mode. It also optionally produces a number of useful diagnostic plots at various stages so data quality can be assessed.

[ascl:1902.010] dyPolyChord: Super fast dynamic nested sampling with PolyChord

dyPolyChord implements dynamic nested sampling using the efficient PolyChord (ascl:1502.011) sampler to provide state-of-the-art nested sampling performance. Any likelihoods and priors which work with PolyChord can be used (Python, C++ or Fortran), and the output files produced are in the PolyChord format.

[ascl:1809.013] dynesty: Dynamic Nested Sampling package

dynesty is a Dynamic Nested Sampling package for estimating Bayesian posteriors and evidences. dynesty samples from a given distribution when provided with a loglikelihood function, a prior_transform function (that transforms samples from the unit cube to the target prior), and the dimensionality of the parameter space.
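
The calling pattern described above amounts to a few lines; a minimal sketch on a toy Gaussian problem (the API shown follows the package documentation) is:

    import numpy as np
    import dynesty

    ndim = 2

    def loglike(theta):
        # toy Gaussian log-likelihood centered on the origin
        return -0.5 * np.sum(theta**2)

    def prior_transform(u):
        # map the unit cube onto a uniform prior over [-10, 10] in each dimension
        return 20.0 * u - 10.0

    sampler = dynesty.NestedSampler(loglike, prior_transform, ndim)
    sampler.run_nested()
    print(sampler.results.logz[-1])  # final log-evidence estimate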

[ascl:1602.004] DUSTYWAVE: Linear waves in gas and dust

Written in Fortran, DUSTYWAVE computes the exact solution for linear waves in a two-fluid mixture of gas and dust. The solutions are general with respect to both the dust-to-gas ratio and the amplitude of the drag coefficient.

[ascl:9911.001] DUSTY: Radiation transport in a dusty environment

DUSTY solves the problem of radiation transport in a dusty environment. The code can handle both spherical and planar geometries. The user specifies the properties of the radiation source and dusty region, and the code calculates the dust temperature distribution and the radiation field in it. The solution method is based on a self-consistent equation for the radiative energy density, including dust scattering, absorption and emission, and does not introduce any approximations. The solution is exact to within the specified numerical accuracy. DUSTY has built-in optical properties for the most common types of astronomical dust and comes with a library for many other grains. It supports various analytical forms for the density distribution, and can perform a full dynamical calculation for radiatively driven winds around AGB stars. The spectral energy distribution of the source can be specified analytically as either Planckian or broken power-law. In addition, arbitrary dust optical properties, density distributions and external radiation can be entered in user supplied files. Furthermore, the wavelength grid can be modified to accommodate spectral features. A single DUSTY run can process an unlimited number of models, with each input set producing a run of optical depths, as specified. The user controls the detail level of the output, which can include both spectral and imaging properties as well as other quantities of interest.

[ascl:1307.001] DustEM: Dust extinction and emission modelling

DustEM computes the extinction and the emission of interstellar dust grains heated by photons. It is written in Fortran 95 and is jointly developed by IAS and CESR. The dust emission is calculated in the optically thin limit (no radiative transfer) and the default spectral range is 40 to 10^8 nm. The code is designed so dust properties can easily be changed and mixed and to allow for the inclusion of new grain physics.

[ascl:1908.016] DustCharge: Charge distribution for a dust grain

DustCharge calculates the equilibrium charge distribution for a dust grain of a given size and composition, depending on the local interstellar medium conditions, such as density, temperature, ionization fraction, local radiation field strength, and cosmic ray ionization fraction.

[ascl:1503.005] dust: Dust scattering and extinction in the X-ray

Written in Python, dust calculates X-ray dust scattering and extinction in the intergalactic and local interstellar media.

[ascl:1605.014] DUO: Spectra of diatomic molecules

Duo computes rotational, rovibrational and rovibronic spectra of diatomic molecules. The software, written in Fortran 2003, solves the Schrödinger equation for the motion of the nuclei for the simple case of uncoupled, isolated electronic states and also for the general case of an arbitrary number and type of couplings between electronic states. Possible couplings include spin–orbit, angular momenta, spin-rotational and spin–spin. Introducing the relevant couplings using so-called Born–Oppenheimer breakdown curves can correct for non-adiabatic effects.

[ascl:1201.011] Duchamp: A 3D source finder for spectral-line data

Duchamp is software designed to find and describe sources in 3-dimensional, spectral-line data cubes. Duchamp has been developed with HI (neutral hydrogen) observations in mind, but is widely applicable to many types of astronomical images. It features efficient source detection and handling methods, noise suppression via smoothing or multi-resolution wavelet reconstruction, and a range of graphical and text-based outputs to allow the user to understand the detections.

[ascl:1505.034] dStar: Neutron star thermal evolution code

dStar is a collection of modules for computing neutron star structure and evolution, and uses the numerical, utility, and equation of state libraries of MESA (ascl:1010.083).

[ascl:1501.004] dst: Polarimeter data destriper

dst is a fully parallel Python destriping code for polarimeter data; destriping is a well-established technique for removing low-frequency correlated noise from Cosmic Microwave Background (CMB) survey data. The software destripes correctly formatted HDF5 datasets and outputs hitmaps, binned maps, destriped maps and baseline arrays.

[ascl:1010.006] DSPSR: Digital Signal Processing Software for Pulsar Astronomy

DSPSR, written primarily in C++, is an open-source, object-oriented, digital signal processing software library and application suite for use in radio pulsar astronomy. The library implements an extensive range of modular algorithms for use in coherent dedispersion, filterbank formation, pulse folding, and other tasks. The software is installed and compiled using the standard GNU configure and make system, and is able to read astronomical data in 18 different file formats, including FITS, S2, CPSR, CPSR2, PuMa, PuMa2, WAPP, ASP, and Mark5.

[ascl:1610.003] DSDEPROJ: Direct Spectral Deprojection

Deprojection of X-ray data by methods such as PROJCT, which are model dependent, can produce large and unphysical oscillating temperature profiles. Direct Spectral Deprojection (DSDEPROJ) solves some of the issues inherent to model-dependent deprojection routines. DSDEPROJ is a model-independent approach, assuming only spherical symmetry, which subtracts projected spectra from each successive annulus to produce a set of deprojected spectra.
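
The geometry behind such onion-peeling is standard and compact to sketch (this is not the DSDEPROJ source; it is the spherical-shell/annulus intersection volume used, under spherical symmetry, to weight the outer-shell spectra subtracted from each annulus):

    import numpy as np

    def v_sphere_cylinder(r, R):
        # volume of a sphere of radius r intersected with an infinite coaxial
        # cylinder of radius R (line of sight along the cylinder axis)
        if R >= r:
            return 4.0 / 3.0 * np.pi * r**3
        return 4.0 / 3.0 * np.pi * (r**3 - (r**2 - R**2)**1.5)

    def shell_in_annulus(r1, r2, R1, R2):
        # volume of the shell r1 < r < r2 that projects into the annulus
        # R1 < R < R2; these volumes scale the spectra subtracted from
        # each successive annulus
        return (v_sphere_cylinder(r2, R2) - v_sphere_cylinder(r2, R1)
                - v_sphere_cylinder(r1, R2) + v_sphere_cylinder(r1, R1))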

[ascl:1212.011] DrizzlePac: HST image software

DrizzlePac allows users to easily and accurately align and combine HST images taken at multiple epochs, and even with different instruments. It is a suite of supporting tasks for AstroDrizzle which includes:

  • astrodrizzle, which aligns and combines images;
  • tweakreg and tweakback, which align images taken in different visits;
  • pixtopix, which transforms an X,Y pixel position to its pixel position after distortion corrections; and
  • skytopix, which transforms sky coordinates to X,Y pixel positions (the reverse transformation is done with the task pixtosky).
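
A minimal AstroDrizzle call (parameter names follow the DrizzlePac documentation; the input file pattern is illustrative) looks like:

    from drizzlepac import astrodrizzle

    # align and combine a set of calibrated HST exposures into one drizzled image
    astrodrizzle.AstroDrizzle('*_flt.fits', output='combined', build=True)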

[ascl:1504.006] drive-casa: Python interface for CASA scripting

drive-casa provides a Python interface for scripting of CASA (ascl:1107.013) subroutines from a separate Python process, allowing for utilization alongside other Python packages which may not easily be installed into the CASA environment. This is particularly useful for embedding use of CASA subroutines within a larger pipeline. drive-casa runs plain-text casapy scripts directly; alternatively, the package includes a set of convenience routines which try to adhere to a consistent style and make it easy to chain together successive CASA reduction commands to generate a command-script programmatically.
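
Because drive-casa runs plain-text casapy commands, basic use is a short script; in this sketch importuvfits is a standard CASA task, while the exact drive-casa calls are assumptions based on its documentation:

    import drivecasa

    casa = drivecasa.Casapy()
    # run a plain-text casapy script: convert a UVFITS file to a Measurement Set
    script = ["importuvfits(fitsfile='obs.uvfits', vis='obs.ms')"]
    casa.run_script(script)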

[ascl:1507.012] DRAMA: Instrumentation software environment

DRAMA is a fast, distributed environment for writing instrumentation control systems. It allows low level instrumentation software to be controlled from user interfaces running on UNIX, MS Windows or VMS machines in a consistent manner. Such instrumentation tasks can run either on these machines or on real time systems such as VxWorks. DRAMA uses techniques developed by the AAO while using the Starlink-ADAM environment, but is optimized for the requirements of instrumentation control, portability, embedded systems and speed. A special program is provided which allows seamless communication between ADAM and DRAMA tasks.

[ascl:1811.002] DRAGONS: Gemini Observatory data reduction platform

DRAGONS (Data Reduction for Astronomy from Gemini Observatory North and South) is Gemini's Python-based data reduction platform. DRAGONS offers an automation system that allows for hands-off pipeline reduction of Gemini data, or of any other astronomical data once configured. The platform also allows researchers to control input parameters and, for some data reduction steps, to optimize them interactively, e.g., changing the order of the fit and visualizing the new solution.

[ascl:1011.009] DRAGON: Monte Carlo Generator of Particle Production from a Fragmented Fireball in Ultrarelativistic Nuclear Collisions

A Monte Carlo generator of the final state of hadrons emitted from an ultrarelativistic nuclear collision is introduced. An important feature of the generator is a possible fragmentation of the fireball and emission of the hadrons from fragments. Phase space distribution of the fragments is based on the blast wave model extended to azimuthally non-symmetric fireballs. Parameters of the model can be tuned, making it possible to generate final states from various kinds of fireballs. An optional output in the OSCAR1999A format allows for a comprehensive analysis of phase-space distributions and/or use as an input for an afterburner. DRAGON's purpose is to produce artificial data sets which resemble those coming from real nuclear collisions, provided fragmentation occurs at hadronisation and hadrons are emitted from fragments without any further scattering. Its name, DRAGON, stands for DRoplet and hAdron GeneratOr for Nuclear collisions. In a way, the model is similar to THERMINATOR, with the crucial difference that emission from fragments is included.

[ascl:1106.011] DRAGON: Galactic Cosmic Ray Diffusion Code

DRAGON adopts a second-order Crank-Nicolson scheme with operator splitting and time overrelaxation to solve the diffusion equation. This provides a fast solution that is accurate enough for the average user. Occasionally, users may want to have very accurate solutions to their problem. To enable this feature, users may get close to the accurate solution by using the fast method, and then switch to a more accurate solution scheme featuring the Alternating-Direction-Implicit (ADI) Crank-Nicolson scheme.
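
For reference, one Crank-Nicolson step for a 1D diffusion equation (a generic illustration of the scheme named above, not DRAGON's multidimensional implementation) is:

    import numpy as np

    def crank_nicolson_step(f, D, dx, dt):
        # advance df/dt = D d2f/dx2 by one step by solving
        # (I - aL) f_new = (I + aL) f_old, where L is the discrete Laplacian,
        # a = D dt / (2 dx^2), and the boundary ghost cells are held at zero
        n = f.size
        a = D * dt / (2.0 * dx**2)
        off = np.ones(n - 1)
        A = np.diag((1.0 + 2.0 * a) * np.ones(n)) - a * (np.diag(off, 1) + np.diag(off, -1))
        B = np.diag((1.0 - 2.0 * a) * np.ones(n)) + a * (np.diag(off, 1) + np.diag(off, -1))
        return np.linalg.solve(A, B @ f)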

[ascl:1512.009] DRACULA: Dimensionality Reduction And Clustering for Unsupervised Learning in Astronomy

DRACULA classifies objects using dimensionality reduction and clustering. The code has an easy interface and can be applied to separate several types of objects. It is based on tools developed in scikit-learn; some functionality also requires the H2O package.

[ascl:1712.005] draco: Analysis and simulation of drift scan radio data

draco analyzes transit radio data with the m-mode formalism. It is telescope agnostic, and is used as part of the analysis and simulation pipeline for the CHIME (Canadian Hydrogen Intensity Mapping Experiment) telescope. It can simulate time stream data from maps of the sky (using the m-mode formalism) and add gain fluctuations and correctly correlated instrumental noise (i.e. Wishart distributed). Further, it can perform various cuts on the data and make maps of the sky from data using the m-mode formalism.

[ascl:1303.025] DPUSER: Interactive language for image analysis

DPUSER is an interactive language capable of handling numbers (both real and complex), strings, and matrices. Its main aim is to do astronomical image analysis, for which it provides a comprehensive set of functions, but it can also be used for many other applications.

[ascl:1804.003] DPPP: Default Pre-Processing Pipeline

DPPP (Default Pre-Processing Pipeline, also referred to as NDPPP) reads and writes radio-interferometric data in the form of Measurement Sets, mainly those that are created by the LOFAR telescope. It goes through visibilities in time order and contains standard operations like averaging, phase-shifting and flagging bad stations. Between the steps in a pipeline, the data are not written to disk, making this tool suitable for operations where I/O dominates. More advanced procedures such as gain calibration are also included. Other computing steps can be provided by loading a shared library; currently supported external steps are the AOFlagger (ascl:1010.017) and a bridge that enables loading Python steps.
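
Pipelines are driven by a plain-text parset of key=value pairs; a hypothetical example (step names and keys paraphrased from LOFAR documentation and not guaranteed verbatim; consult the DPPP manual) might read:

    msin=L123456_SB010.MS
    msout=L123456_SB010_avg.MS
    steps=[aoflag,avg]
    aoflag.type=aoflagger
    avg.type=averager
    avg.timestep=4
    avg.freqstep=8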

[ascl:1504.012] DPI: Symplectic mapping for binary star systems for the Mercury software package

DPI is a FORTRAN77 library that supplies the symplectic mapping method for binary star systems for the Mercury N-Body software package (ascl:1201.008). The binary symplectic mapping is implemented as a hybrid symplectic method that allows close encounters and collisions between massive bodies and is therefore suitable for planetary accretion simulations.

[ascl:1206.011] Double Eclipsing Binary Fitting

The parameters of the mutual orbit of eclipsing binaries that are physically connected can be obtained by precision timing of minima over time through the light travel time effect, apsidal motion or orbital precession. This, however, requires joint analysis of data from different sources obtained through various techniques and with insufficiently quantified uncertainties. In particular, photometric uncertainties are often underestimated, which yields underestimated uncertainties in minima timings if these are determined through analysis of a χ2 surface. The task is even more difficult for double eclipsing binaries, especially those with periods close to a resonance such as CzeV344, where minima often become blended with each other.

This code solves the double binary parameters simultaneously and then uses these parameters to determine minima timings (or more specifically O-C values) for individual datasets. In both cases, the uncertainties (or more precisely confidence intervals) are determined through bootstrap resampling of the original data. This procedure to a large extent alleviates the common problem with underestimated photometric uncertainties and provides a check on possible degeneracies in the parameters and the stability of the results. While there are shortcomings to this method as well when compared to Markov Chain Monte Carlo methods, the ease of the implementation of bootstrapping is a significant advantage.
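
The bootstrap step is simple to sketch (generic code, not the authors' implementation; fit_minimum stands in for whatever model fit returns a minimum timing):

    import numpy as np

    rng = np.random.default_rng(0)

    def bootstrap_interval(t, flux, fit_minimum, n_boot=1000):
        # resample the (time, flux) pairs with replacement, refit each sample,
        # and report a ~1-sigma confidence interval on the fitted quantity
        n = t.size
        estimates = [fit_minimum(t[idx], flux[idx])
                     for idx in (rng.integers(0, n, n) for _ in range(n_boot))]
        return np.percentile(estimates, [15.9, 84.1])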

[ascl:1709.004] DOOp: DAOSPEC Output Optimizer pipeline

The DAOSPEC Output Optimizer pipeline (DOOp) performs efficient and convenient equivalent width measurements in batches of hundreds of spectra. It uses a series of BASH scripts to work as a wrapper for the FORTRAN code DAOSPEC (ascl:1011.002) and uses IRAF (ascl:9911.002) to automatically fix some of the parameters that are usually set by hand when using DAOSPEC. This allows batch-processing of quantities of spectra that would be impossible to deal with by hand. DOOp was originally built for the large quantity of UVES and GIRAFFE spectra produced by the Gaia-ESO Survey, but just like DAOSPEC, it can be used on any high resolution and high signal-to-noise ratio spectrum binned on a linear wavelength scale.

[ascl:1608.013] DOLPHOT: Stellar photometry

DOLPHOT is a stellar photometry package that was adapted from HSTphot for general use. It supports two modes; the first is a generic PSF-fitting package, which uses analytic PSF models and can be used for any camera. The second mode uses ACS PSFs and calibrations, and is effectively an ACS adaptation of HSTphot. A number of utility programs are also included with the DOLPHOT distribution, including basic image reduction routines.

[ascl:1604.007] DNest3: Diffusive Nested Sampling

DNest3 is a C++ implementation of Diffusive Nested Sampling (ascl:1010.029), a Markov Chain Monte Carlo (MCMC) algorithm for Bayesian Inference and Statistical Mechanics. Relative to older DNest versions, DNest3 has improved performance (in terms of the sampling overhead; likelihood evaluations still dominate in general) and cleaner code: implementing new models should be easier than it was before. In addition, DNest3 is multi-threaded, so one can run multiple MCMC walkers at the same time, and the results will be combined together.

[ascl:1010.029] DNEST: Diffusive Nested Sampling

This code implements a general Monte Carlo method based on Nested Sampling (NS) for sampling complex probability distributions and estimating the normalising constant. The method uses one or more particles, which explore a mixture of nested probability distributions, each successive distribution occupying ~e^-1 times the enclosed prior mass of the previous distribution. While NS technically requires independent generation of particles, Markov Chain Monte Carlo (MCMC) exploration fits naturally into this technique. This method can achieve four times the accuracy of classic MCMC-based Nested Sampling for the same computational effort, equivalent to a factor of 16 speedup. An additional benefit is that more samples and a more accurate evidence value can be obtained simply by continuing the run for longer, as in standard MCMC.
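
The bookkeeping that underlies any nested-sampling evidence estimate (the standard quadrature, sketched generically rather than DNEST's diffusive variant) is compact:

    import numpy as np

    def log_evidence(loglikes, n_live):
        # with n_live particles, the enclosed prior mass after i iterations
        # shrinks as X_i ~ exp(-i / n_live); the evidence Z is the
        # prior-mass-weighted sum of the discarded likelihoods
        i = np.arange(1, loglikes.size + 1)
        X = np.exp(-i / n_live)
        dX = np.concatenate(([1.0], X[:-1])) - X
        return np.log(np.sum(np.exp(loglikes) * dX))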

[ascl:1506.002] dmdd: Dark matter direct detection

The dmdd package enables simple simulation and Bayesian posterior analysis of recoil-event data from dark-matter direct-detection experiments under a wide variety of scattering theories. It enables calculation of the nuclear-recoil rates for a wide range of non-relativistic and relativistic scattering operators, including non-standard momentum-, velocity-, and spin-dependent rates. It also accounts for the correct nuclear response functions for each scattering operator and takes into account the natural abundances of isotopes for a variety of experimental target elements.

[ascl:1705.002] DMATIS: Dark Matter ATtenuation Importance Sampling

DMATIS (Dark Matter ATtenuation Importance Sampling) calculates the trajectories of DM particles that propagate in the Earth's crust and the lead shield to reach the DAMIC detector using an importance sampling Monte-Carlo simulation. A detailed Monte-Carlo simulation avoids the deficiencies of the SGED/KS method that uses a mean energy loss description to calculate the lower bound on the DM-proton cross section. The code implementing the importance sampling technique makes the brute-force Monte-Carlo simulation of moderately strongly interacting DM with nucleons computationally feasible. DMATIS is written in Python 3 and Mathematica.

[ascl:1910.004] DM_phase: Algorithm for correcting dispersion of radio signals

DM_phase maximizes the coherent power of a radio signal instead of its intensity to calculate the best dispersion measure (DM) for a burst such as those emitted by pulsars and fast radio bursts (FRBs). It is robust to complex burst structures and interference, thus mitigating the limitations of traditional methods that search for the best DM value of a source by maximizing the signal-to-noise ratio (S/N) of the detected signal.
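
For context, the cold-plasma dispersion delay that any such search must remove, and a brute-force trial-DM sum, are easy to sketch (generic code; DM_phase's innovation is scoring each trial by coherent power rather than by the peak S/N of this dedispersed sum):

    import numpy as np

    K_DM = 4.148808e3  # dispersion constant in MHz^2 pc^-1 cm^3 s

    def dedisperse(data, freqs_mhz, dt_s, dm):
        # shift each channel of a (nchan, ntime) dynamic spectrum by the
        # dispersive delay relative to the highest frequency, then sum
        ref = freqs_mhz.max()
        out = np.zeros(data.shape[1])
        for row, nu in zip(data, freqs_mhz):
            delay = K_DM * dm * (nu**-2 - ref**-2)
            out += np.roll(row, -int(round(delay / dt_s)))
        return out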

[ascl:1812.012] distlink: Minimum orbital intersection distance (MOID) computation library

distlink computes the minimum orbital intersection distance (MOID), or global minimum of the distance between the points lying on two Keplerian ellipses by finding all stationary points of the distance function, based on solving an algebraic polynomial equation of 16th degree. The program tracks numerical errors and carefully treats nearly degenerate cases, including practical cases with almost circular and almost coplanar orbits. Benchmarks confirm its high numeric reliability and accuracy, and even with its error-controlling overheads, this algorithm is a fast MOID computation method that may be useful in processing large catalogs. Written in C++, the library also includes auxiliary functions.

[ascl:1302.015] DisPerSE: Discrete Persistent Structures Extractor

DisPerSE is open source software for the identification of persistent topological features such as peaks, voids, walls and, in particular, filamentary structures within noisy sampled distributions in 2D and 3D. Using DisPerSE, structure identification can be achieved through the computation of the discrete Morse-Smale complex. The software can deal directly with noisy datasets via the concept of persistence (a measure of the robustness of topological features). Although developed for the study of the properties of filamentary structures in the cosmic web of the galaxy distribution over large scales in the Universe, the present version is quite versatile and should be useful for any application where a robust structure identification is required, such as for segmentation or for studying the topology of sampled functions (for example, computing persistent Betti numbers). Currently, it can work indifferently on many kinds of cell complexes (such as structured and unstructured grids, 2D manifolds embedded within a 3D space, discrete point samples using Delaunay tessellations, and HEALPix tessellations of the sphere). The only constraint is that the distribution must be defined over a manifold, possibly with boundaries.

[ascl:1708.006] DISORT: DIScrete Ordinate Radiative Transfer

DISORT (DIScrete Ordinate Radiative Transfer) solves the problem of 1D scalar radiative transfer in a single optical medium, such as a planetary atmosphere. The code correctly accounts for multiple scattering by an isotropic or plane-parallel beam source, internal Planck sources, and reflection from a lower boundary. Provided that polarization effects can be neglected, DISORT efficiently calculates accurate fluxes and intensities at any user-specified angle and location within the user-specified medium.

[ascl:1108.015] DISKSTRUCT: A Simple 1+1-D Disk Structure Code

DISKSTRUCT is a simple 1+1-D code for modeling protoplanetary disks. It is not based on multidimensional radiative transfer! Instead, a flaring-angle recipe is used to compute the irradiation of the disk, while the disk vertical structure at each cylindrical radius is computed in a 1-D fashion; the models computed with this code are therefore approximate. Moreover, this model cannot deal with the dust inner rim.

In spite of these simplifications and drawbacks, the code can still be very useful for disk studies, for the following reasons:

  • It allows the disk structure to be studied in a 1-D vertical fashion (one radial cylinder at a time). For understanding the structure of disks, and also for using it as a basis of other models, this can be a great advantage.
  • For very optically thick disks this code is likely to be much faster than the RADMC full disk model.
  • Viscous internal heating of the disk is implemented and converges quickly, whereas the RADMC code still has difficulty dealing with high optical depth combined with viscously generated internal heat.

[ascl:1811.013] DiskSim: Modeling Accretion Disk Dynamics with SPH

DiskSim is a source-code distribution of the SPH accretion disk modeling code previously released in a Windows executable form as FITDisk (ascl:1305.011). The code released now is the full research code in Fortran and can be modified as needed by the user.

[ascl:1603.011] DiskJockey: Protoplanetary disk modeling for dynamical mass derivation

DiskJockey derives dynamical masses for T Tauri stars using the Keplerian motion of their circumstellar disks, applied to radio interferometric data from the Atacama Large Millimeter Array (ALMA) and the Submillimeter Array (SMA). The package relies on RADMC-3D (ascl:1202.015) to perform the radiative transfer of the disk model. DiskJockey is designed to work in a parallel environment where the calculations for each frequency channel can be distributed to independent processors. Due to the computationally expensive nature of the radiative synthesis, fitting sizable datasets (e.g., SMA and ALMA) will require a substantial number of CPU cores to explore a posterior distribution in a reasonable timeframe.

[ascl:1209.011] DiskFit: Modeling Asymmetries in Disk Galaxies

DiskFit implements procedures for fitting non-axisymmetries in either kinematic or photometric data. DiskFit can analyze H-alpha and CO velocity field data as well as HI kinematics to search for non-circular motions in disk galaxies. DiskFit can also be used to constrain photometric models of the disc, bar and bulge. It deprecates an earlier version, by a subset of these authors, called velfit.

[ascl:1605.011] DISCO: 3-D moving-mesh magnetohydrodynamics package

DISCO evolves orbital fluid motion in two and three dimensions, especially at high Mach number, for studying astrophysical disks. The software uses a moving-mesh approach with a dynamic cylindrical mesh that can shear azimuthally to follow the orbital motion of the gas, thus removing diffusive advection errors and permitting longer timesteps than a static grid. DISCO uses an HLLD Riemann solver and a constrained transport scheme compatible with the mesh motion to implement magnetohydrodynamics.

[ascl:1403.020] disc2vel: Tangential and radial velocity components derivation

Disc2vel derives tangential and radial velocity components in the equatorial plane of a barred stellar disc from the observed line-of-sight velocity, assuming geometry of a thin disc. The code is written in IDL, and the method assumes that the bar is close to steady state (i.e. does not evolve fast) and that both morphology and kinematics are symmetrical with respect to the major axis of the bar.

[ascl:1102.021] DIRT: Dust InfraRed Toolbox

DIRT is a Java applet for modelling astrophysical processes in circumstellar dust shells around young and evolved stars. With DIRT, you can select and display over 500,000 pre-run model spectral energy distributions (SEDs), find the best-fit model to your data set, and account for beam size in model fitting. DIRT also allows you to manipulate data and models with an interactive viewer, display gas and dust density and temperature profiles, and display model intensity profiles at various wavelengths.

[ascl:1806.016] DirectDM-py: Dark matter direct detection

DirectDM, written in Python, takes the Wilson coefficients of relativistic operators that couple DM to the SM quarks, leptons, and gauge bosons and matches them onto a non-relativistic Galilean invariant EFT in order to calculate the direct detection scattering rates. A Mathematica implementation of DirectDM is also available (ascl:1806.015).

[ascl:1806.015] DirectDM-mma: Dark matter direct detection

The Mathematica code DirectDM takes the Wilson coefficients of relativistic operators that couple DM to the SM quarks, leptons, and gauge bosons and matches them onto a non-relativistic Galilean invariant EFT in order to calculate the direct detection scattering rates. A Python implementation of DirectDM is also available (ascl:1806.016).

[ascl:1405.016] DIPSO: Spectrum analysis code

DIPSO plots spectroscopic data rapidly and combines analysis and high-quality graphical output in a simple command-line driven interactive environment. It can be used, for example, to fit emission lines, measure equivalent widths and fluxes, do Fourier analysis, and fit models to spectra. A macro facility allows convenient execution of regularly used sequences of commands, and a simple Fortran interface permits "personal" software to be integrated with the program. DIPSO is part of the Starlink software collection (ascl:1110.012).

[ascl:1908.005] dips: Detrending periodic signals in timeseries

dips detrends timeseries of strictly periodic signals. It does not assume any functional form for the signal or the background or the noise; it disentangles the strictly periodic component from everything else. It has been used for detrending Kepler, K2 and TESS timeseries of periodic variable stars, eclipsing binary stars, and exoplanets.

[ascl:1010.031] DimReduce: Nonlinear Dimensionality Reduction of Very Large Datasets with Locally Linear Embedding (LLE) and its Variants

DimReduce is a C++ package for performing nonlinear dimensionality reduction of very large datasets with Locally Linear Embedding (LLE) and its variants. DimReduce is built for speed, using the optimized linear algebra packages BLAS, LAPACK, and ARPACK. Because of the need for storing very large matrices (1000 by 10000, for our SDSS LLE work), DimReduce is designed to use binary FITS files as inputs and outputs, which makes using the code somewhat more cumbersome. For smaller-scale LLE, where speed of computation is not as much of an issue, the Modular Data Processing toolkit, a Python toolkit with some LLE functionality contributed by VanderPlas, may be a better choice.

This code has since been rewritten and included in scikit-learn, and an improved version is available in megaman (http://mmp2.github.io/megaman/).
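
With the scikit-learn rewrite mentioned above, a basic LLE embedding takes a few lines (standard scikit-learn API; the random matrix stands in for a real catalog):

    import numpy as np
    from sklearn.manifold import LocallyLinearEmbedding

    X = np.random.rand(500, 10)   # 500 objects with 10 measured features
    lle = LocallyLinearEmbedding(n_neighbors=12, n_components=2)
    Y = lle.fit_transform(X)      # nonlinear 2D embedding of the data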

[ascl:1904.023] digest2: NEO binary classifier

digest2 classifies Near-Earth Object (NEO) candidates by providing a score, D2, that represents a pseudo-probability that a tracklet belongs to a given solar system orbit type. The code accurately and precisely distinguishes NEOs from non-NEOs, thus helping to identify those to be prioritized for follow-up observation. This fast, short-arc orbit classifier for small solar system bodies is built upon the Pangloss code developed by Robert McNaught, further developed by Carl Hergenrother and Tim Spahr, and on Robert Jedicke's 223.f code.

[ascl:1102.024] DiFX2: A more flexible, efficient, robust and powerful software correlator

Software correlation, where a correlation algorithm written in a high-level language such as C++ is run on commodity computer hardware, has become increasingly attractive for small to medium sized and/or bandwidth constrained radio interferometers. In particular, many long baseline arrays (which typically have fewer than 20 elements and are restricted in observing bandwidth by costly recording hardware and media) have utilized software correlators for rapid, cost-effective correlator upgrades to allow compatibility with new, wider bandwidth recording systems and improve correlator flexibility. The DiFX correlator, made publicly available in 2007, has been a popular choice in such upgrades and is now used for production correlation by a number of observatories and research groups worldwide. Here we describe the evolution in the capabilities of the DiFX correlator over the past three years, including a number of new capabilities, substantial performance improvements, and a large amount of supporting infrastructure to ease use of the code. New capabilities include the ability to correlate a large number of phase centers in a single correlation pass, the extraction of phase calibration tones, correlation of disparate but overlapping sub-bands, the production of rapidly sampled filterbank and kurtosis data at minimal cost, and many more. The latest version of the code is at least 15% faster than the original, and in certain situations many times faster. Finally, we also present detailed test results validating the correctness of the new code.

[ascl:1103.001] Difmap: Synthesis Imaging of Visibility Data

Difmap is a program developed for synthesis imaging of visibility data from interferometer arrays of radio telescopes world-wide. Its prime advantages over traditional packages are its emphasis on interactive processing, speed, and the use of Difference mapping techniques.

[ascl:1304.008] Diffusion.f: Diffusion of elements in stars

Diffusion.f is an exportable subroutine to calculate the diffusion of elements in stars. The routine solves exactly the Burgers equations and can include any number of elements as variables. The code has been used successfully by a number of different groups; applications include diffusion in the sun and diffusion in globular cluster stars. There are many other possible applications to main sequence and to evolved stars. The associated README file explains how to use the subroutine.

[ascl:1512.012] DiffuseModel: Modeling the diffuse ultraviolet background

DiffuseModel calculates the scattered radiation from dust scattering in the Milky Way based on stars from the Hipparcos catalog. It uses Monte Carlo methods to implement multiple scattering and assumes a user-supplied grid for the dust distribution. The output is a FITS file with the diffuse light over the Galaxy. It is intended for use in the UV (900 - 3000 Å) but may be modified for use at other wavelengths and in other galaxies.

[ascl:1704.013] Difference-smoothing: Measuring time delay from light curves

The Difference-smoothing MATLAB code measures the time delay from the light curves of images of a gravitationally lensed quasar. It uses a smoothing timescale free parameter, generates more realistic synthetic light curves to estimate the time delay uncertainty, and uses a χ2 plot to assess the reliability of a time delay measurement as well as to identify instances of catastrophic failure of the time delay estimator. A systematic bias in the measurement of time delays for some light curves can be eliminated by applying a correction to each measured time delay.

[ascl:1801.010] DICE/ColDICE: 6D collisionless phase space hydrodynamics using a Lagrangian tessellation

DICE is a C++ template library designed to solve collisionless fluid dynamics in 6D phase space using massively parallel supercomputers via a hybrid OpenMP/MPI parallelization. ColDICE, based on DICE, implements a cosmological and physical Vlasov-Poisson solver for cold systems such as cold dark matter (CDM) dynamics.

[ascl:1607.002] DICE: Disk Initial Conditions Environment

DICE models initial conditions of idealized galaxies to study their secular evolution or their more complex interactions such as mergers or compact groups using N-Body/hydro codes. The code can set up a large number of components modeling distinct parts of the galaxy, and creates 3D distributions of particles using an N-try MCMC algorithm which does not require prior knowledge of the distribution function. The gravitational potential is then computed on a multi-level Cartesian mesh by solving the Poisson equation in Fourier space. Finally, the dynamical equilibrium of each component is computed by integrating the Jeans equations for each particle. Several galaxies can be generated in a row and be placed on Keplerian orbits to model interactions. DICE writes the initial conditions in the Gadget1 or Gadget2 (ascl:0003.001) format and is fully compatible with Ramses (ascl:1011.007).

[ascl:1410.001] DIAMONDS: high-DImensional And multi-MOdal NesteD Sampling

DIAMONDS (high-DImensional And multi-MOdal NesteD Sampling) provides Bayesian parameter estimation and model comparison by means of the nested sampling Monte Carlo (NSMC) algorithm, an efficient and powerful method very suitable for high-dimensional and multi-modal problems; it can be used for any application involving Bayesian parameter estimation and/or model selection in general. Developed in C++11, DIAMONDS is structured in classes for flexibility and configurability. Any new model, likelihood and prior PDFs can be defined and implemented upon a basic template.

[ascl:1805.002] dftools: Distribution function fitting

dftools, written in R, finds the most likely P parameters of a D-dimensional distribution function (DF) generating N objects, where each object is specified by D observables with measurement uncertainties. For instance, if the objects are galaxies, it can fit a mass function (D=1), a mass-size distribution (D=2) or the mass-spin-morphology distribution (D=3). Unlike most common fitting approaches, this method accurately accounts for measurement uncertainties and complex selection functions.

[ascl:1904.017] dfitspy: A dfits/fitsort implementation in Python

dfitspy searches and displays metadata contained in FITS files. Written in Python, it displays the results of a metadata search and is able to grep certain values of keywords inside large samples of files in the terminal. dfitspy can be used directly with the command line interface and can also be imported as a Python module into other Python code or the Python interpreter.

[ascl:1112.015] Dexter: Data Extractor for scanned graphs

The NASA Astrophysics Data System (ADS) now holds 1.3 million scanned pages, containing numerous plots and figures for which the original data sets are lost or inaccessible. The availability of scans of the figures can significantly ease the regeneration of the data sets. For this purpose, the ADS has developed Dexter, a Java applet that supports the user in this process. Dexter's basic functionality is to let the user manually digitize a plot by marking points and defining the coordinate transformation from the logical to the physical coordinate system. Advanced features include automatic identification of axes, tracing lines and finding points matching a template.

[ascl:1402.022] DexM: Semi-numerical simulations for very large scales

DexM (Deus ex Machina) efficiently generates density, halo, and ionization fields on very large scales and with a large dynamic range through seminumeric simulation. These properties are essential for reionization studies, especially those involving rare, massive QSOs, since one must be able to statistically capture the ionization field. DexM can also generate ionization fields directly from the evolved density field to account for the ionizing contribution of small halos. Semi-numerical simulations use more approximate physics than numerical simulations, but independently generate 3D cosmological realizations. DexM is portable and fast, and allows for explorations of wide swaths of astrophysical parameter space and an unprecedented dynamic range.

[ascl:1907.008] Dewarp: Distortion removal and on-sky orientation solution for LBTI detectors

Dewarp constructs pipelines to remove distortion from a detector and find the orientation with true North. It was originally written for the LBTI LMIRcam detector, but is generalizable to any project with reference sources and/or an astrometric field paired with a machine-readable file of astrometric target locations.

[ascl:1304.007] DESPOTIC: Derive the Energetics and SPectra of Optically Thick Interstellar Clouds

DESPOTIC (Derive the Energetics and SPectra of Optically Thick Interstellar Clouds), written in Python, represents optically thick interstellar clouds using a one-zone model and calculates line luminosities, line cooling rates, and in restricted cases line profiles using an escape probability formalism. DESPOTIC calculates clouds' equilibrium gas and dust temperatures and their time-dependent thermal evolution. The code allows rapid and interactive calculation of clouds' characteristic temperatures, identification of their dominant heating and cooling mechanisms, and prediction of their observable spectra across a wide range of interstellar environments.

[ascl:1804.011] DESCQA: Synthetic Sky Catalog Validation Framework

The DESCQA framework provides rigorous validation protocols for assessing the quality of simulated sky catalogs in a straightforward and comprehensive way. DESCQA enables the inspection, validation, and comparison of an inhomogeneous set of synthetic catalogs via the provision of a common interface within an automated framework. An interactive web interface is also available at portal.nersc.gov/project/lsst/descqa.

[ascl:1511.017] DES exposure checker: Dark Energy Survey image quality control crowdsourcer

DES exposure checker renders science-grade images directly to a web browser and allows users to mark problematic features from a set of predefined classes, thus allowing image quality control for the Dark Energy Survey to be crowdsourced through its web application. Users can also generate custom labels to help identify previously unknown problem classes; generated reports are fed back to hardware and software experts to help mitigate and eliminate recognized issues. These problem reports allow rapid correction of artifacts that otherwise may be too subtle or infrequent to be recognized.

[ascl:1904.009] deproject: Deprojection of two-dimensional annular X-ray spectra

Deproject extends Sherpa (ascl:1107.005) to facilitate deprojection of two-dimensional annular X-ray spectra to recover the three-dimensional source properties. For typical thermal models, this includes the radial temperature and density profiles. This basic method is used for X-ray cluster analysis and is the basis for the XSPEC (ascl:9910.005) model project. The deproject module is written in Python and is straightforward to use and understand. The basic physical assumption of deproject is that the extended source emissivity is constant and optically thin within spherical shells whose radii correspond to the annuli used to extract the spectra. Given this assumption, one constructs a model for each annular spectrum that is a linear volume-weighted combination of shell models.

[ascl:1705.003] demc2: Differential evolution Markov chain Monte Carlo parameter estimator

demc2, also abbreviated as DE-MCMC, is a differential evolution Markov chain Monte Carlo parameter estimation library written in R for adaptive MCMC on real parameter spaces.
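
The differential evolution proposal at the heart of DE-MCMC (ter Braak's scheme sketched generically, not demc2's R interface) is:

    import numpy as np

    rng = np.random.default_rng(1)

    def de_proposal(chains, i, eps=1e-6):
        # propose a move for chain i using the scaled difference of two other
        # randomly chosen chains, plus a small jitter term
        n, d = chains.shape
        gamma = 2.38 / np.sqrt(2 * d)   # standard DE-MC scaling factor
        a, b = rng.choice([j for j in range(n) if j != i], size=2, replace=False)
        return chains[i] + gamma * (chains[a] - chains[b]) + eps * rng.standard_normal(d)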

[ascl:1602.012] DELightcurveSimulation: Light curve simulation code

DELightcurveSimulation simulates light curves with any given power spectral density and any probability density function, following the algorithm described in Emmanoulopoulos et al. (2013). The simulated products have exactly the same variability and statistical properties as the observed light curves. The code is a Python implementation of the Mathematica code provided by Emmanoulopoulos et al.
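
The Timmer & Koenig step used inside this algorithm is short to sketch (this shows only that Gaussian component; the Emmanoulopoulos method then iteratively adjusts amplitudes so the simulated points also match the target probability density):

    import numpy as np

    rng = np.random.default_rng(2)

    def timmer_koenig(psd, n):
        # draw a Gaussian time series of length n whose power spectrum follows
        # the supplied one-sided PSD evaluated at the rfft frequencies
        nf = n // 2 + 1
        amp = np.sqrt(0.5 * np.asarray(psd))
        re = amp * rng.standard_normal(nf)
        im = amp * rng.standard_normal(nf)
        im[0] = 0.0
        if n % 2 == 0:
            im[-1] = 0.0
        return np.fft.irfft(re + 1j * im, n)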

[ascl:1011.012] DEFROST: A New Code for Simulating Preheating after Inflation

At the end of inflation, dynamical instability can rapidly deposit the energy of the homogeneous cold inflaton into excitations of other fields. This process, known as preheating, is rather violent, inhomogeneous and non-linear, and has to be studied numerically. This paper presents a new code, written for that purpose, for simulating scalar field dynamics in an expanding universe. Compared to available alternatives, it significantly improves both the speed and the accuracy of calculations, and is fully instrumented for 3D visualization. We reproduce previously published results on preheating in simple chaotic inflation models, and further investigate non-linear dynamics of the inflaton decay. Surprisingly, we find that the fields do not thermalize quite the way one would expect. Instead of directly reaching equilibrium, the evolution appears to be stuck in a rather simple but quite inhomogeneous state. In particular, the one-point distribution function of total energy density appears to be universal among various two-field preheating models, and is exceedingly well described by a lognormal distribution. It is tempting to attribute this state to scalar field turbulence.

[ascl:1405.004] Defringeflat: Fringe pattern removal

The IDL package Defringeflat identifies and removes fringe patterns from images such as spectrograph flat fields. It uses a wavelet transform to calculate the frequency spectrum in a region around each point of a one-dimensional array. The wavelet transform amplitude is reconstructed from (smoothed) parameters to obtain the fringe's wavelet transform, after which an inverse wavelet transform yields the computed fringe pattern, which is then removed from the flat.

[ascl:1805.029] DeepMoon: Convolutional neural network trainer to identify moon craters

DeepMoon trains a convolutional neural net using data derived from a global digital elevation map (DEM) and a catalog of craters to recognize craters on the Moon. The TensorFlow-based pipeline code is divided into three parts. The first generates a set of images of the Moon randomly cropped from the DEM, with corresponding crater positions and radii. The second trains a convnet using this data, and the third validates the convnet's predictions.

[ascl:1603.015] Dedalus: Flexible framework for spectrally solving differential equations

Dedalus solves differential equations using spectral methods. It implements flexible algorithms to solve initial-value, boundary-value, and eigenvalue problems with broad ranges of custom equations and spectral domains. Its primary features include symbolic equation entry, multidimensional parallelization, implicit-explicit timestepping, and flexible analysis with HDF5. The code is written primarily in Python and features an easy-to-use interface. The numerical algorithm produces highly sparse systems for many equations which are efficiently solved using compiled libraries and MPI.

[ascl:1801.006] DecouplingModes: Passive modes amplitudes

DecouplingModes calculates the amplitude of the passive modes, which requires solving the Einstein equations on superhorizon scales sourced by the anisotropic stress from the magnetic fields (prior to neutrino decoupling), and the magnetic and neutrino stress (after decoupling). The code is available as a Mathematica notebook.

[ascl:1501.005] DECA: Decomposition of images of galaxies

DECA performs photometric analysis of images of disk and elliptical galaxies having a regular structure. It is written in Python and combines the capabilities of several widely used packages for astronomical data processing such as IRAF, SExtractor, and the GALFIT code to perform two-dimensional decomposition of galaxy images into several photometric components (bulge+disk). DECA can be applied to large samples of galaxies with different orientations with respect to the line of sight (including edge-on galaxies) and requires minimum human intervention.

[ascl:1510.004] DEBiL: Detached Eclipsing Binary Light curve fitter

DEBiL rapidly fits a large number of light curves to a simple model. It is the central component of a pipeline for systematically identifying and analyzing eclipsing binaries within a large dataset of light curves; the results of DEBiL can be used to flag light curves of interest for follow-up analysis.

[ascl:0008.001] DDSCAT: The discrete dipole approximation for scattering and absorption of light by irregular particles

DDSCAT is a freely available software package which applies the "discrete dipole approximation" (DDA) to calculate scattering and absorption of electromagnetic waves by targets with arbitrary geometries and complex refractive index. The DDA approximates the target by an array of polarizable points. DDSCAT.5a requires that these polarizable points be located on a cubic lattice. DDSCAT allows accurate calculations of electromagnetic scattering from targets with "size parameters" 2 pi a/lambda < 15 provided the refractive index m is not large compared to unity (|m-1| < 1). The DDSCAT package is written in Fortran and is highly portable. The program supports calculations for a variety of target geometries (e.g., ellipsoids, regular tetrahedra, rectangular solids, finite cylinders, hexagonal prisms, etc.). Target materials may be both inhomogeneous and anisotropic. It is straightforward for the user to import arbitrary target geometries into the code, and relatively straightforward to add new target generation capability to the package. DDSCAT automatically calculates total cross sections for absorption and scattering and selected elements of the Mueller scattering intensity matrix for specified orientation of the target relative to the incident wave, and for specified scattering directions. The User Guide explains how to use DDSCAT to carry out EM scattering calculations and describes CPU and memory requirements.

[ascl:1810.020] DDS: Debris Disk Radiative Transfer Simulator

DDS simulates scattered light and thermal reemission in arbitrary optically thin dust distributions with spherical, homogeneous grains where the dust parameters (optical properties, sublimation temperature, grain size) and the SED of the illuminating/heating radiative source can be arbitrarily defined. The code is optimized for studying circumstellar debris disks where large grains (i.e., with large size parameters) are expected to determine the far-infrared through millimeter dust reemission spectral energy distribution. The approach to calculate dust temperatures and dust reemission spectra is only valid in the optically thin regime, and the validity of this constraint is verified for each model during the runtime of the code. The relative abundances of different grains can be arbitrarily chosen, but must be constant outside the dust sublimation region; i.e., the shape of the (arbitrary) radial dust density distribution outside the dust sublimation region is the same for all grain sizes and chemistries.

[ascl:1212.012] ddisk: Debris disk time-evolution

ddisk is an IDL script that calculates the time-evolution of a circumstellar debris disk. It calculates dust abundances over time for a debris-disk that is produced by a planetesimal disk that is grinding away due to collisional erosion.

[ascl:1207.006] dcr: Cosmic Ray Removal

This code provides a method for detecting cosmic rays in single images. The algorithm is based on a simple analysis of the histogram of the image data and does not use any modeling of the picture of the object. It does not require a good signal-to-noise ratio in the image data. Identification of multiple-pixel cosmic-ray hits is realized by running the procedure for detection and replacement iteratively. The method is very effective when applied to the images with spectroscopic data, and is also very fast in comparison with other single-image algorithms found in astronomical data-processing packages. Practical implementation and examples of application are presented in the code paper.

[ascl:1709.006] DCMDN: Deep Convolutional Mixture Density Network

Deep Convolutional Mixture Density Network (DCMDN) estimates probabilistic photometric redshifts directly from multi-band imaging data by combining a version of a deep convolutional network with a mixture density network. The estimates are expressed as Gaussian mixture models representing the probability density functions (PDFs) in the redshift space. In addition to the traditional scores, the continuous ranked probability score (CRPS) and the probability integral transform (PIT) are applied as performance criteria. DCMDN is able to predict redshift PDFs independently of the type of source, e.g. galaxies, quasars or stars, and renders pre-classification of objects and feature extraction unnecessary; the method is extremely general and allows the solving of any kind of probabilistic regression problem based on imaging data, such as estimating metallicity or star formation rate in galaxies.

[ascl:1903.012] DAVE: Discovery And Vetting of K2 Exoplanets

DAVE implements a pipeline to find and vet planets using data from NASA's K2 mission. The pipeline contains several modules tailored to particular aspects of the vetting procedures, using photocenter analysis to rule out background eclipsing binaries and flux time-series analysis to rule out odd–even differences, secondary eclipses, low-S/N events, variability other than a transit, and size of the transiting object.

[ascl:1405.011] DATACUBE: A datacube manipulation package

DATACUBE is a command-line package for manipulating and visualizing data cubes. It was designed for integral field spectroscopy but has been extended to be a generic data cube tool, used in particular for sub-millimeter data cubes from the James Clerk Maxwell Telescope. It is part of the Starlink software collection (ascl:1110.012).

[ascl:1402.027] Darth Fader: Galaxy catalog cleaning method for redshift estimation

Darth Fader is a wavelet-based method for extracting spectral features from very noisy spectra. Spectra for which a reliable redshift cannot be measured are identified and removed from the input data set automatically, resulting in a clean catalogue that gives an extremely low rate of catastrophic failures even when the spectra have a very low S/N. This technique may offer a significant boost in the number of faint galaxies with accurately determined redshifts.

[ascl:1110.002] DarkSUSY: Supersymmetric Dark Matter Calculations

DarkSUSY, written in Fortran, is a publicly-available advanced numerical package for neutralino dark matter calculations. In DarkSUSY one can compute the neutralino density in the Universe today using precision methods which include resonances, pair production thresholds and coannihilations. Masses and mixings of supersymmetric particles can be computed within DarkSUSY or with the help of external programs such as FeynHiggs, ISASUGRA and SUSPECT. Accelerator bounds can be checked to identify viable dark matter candidates. DarkSUSY also computes a large variety of astrophysical signals from neutralino dark matter, such as direct detection in low-background counting experiments and indirect detection through antiprotons, antideuterons, gamma-rays and positrons from the Galactic halo or high-energy neutrinos from the center of the Earth or of the Sun.

[ascl:1706.004] Dark Sage: Semi-analytic model of galaxy evolution

DARK SAGE is a semi-analytic model of galaxy formation that focuses on detailing the structure and evolution of galaxies' discs. The code-base, written in C, is an extension of SAGE (ascl:1601.006) and maintains the modularity of SAGE. DARK SAGE runs on any N-body simulation with trees organized in a supported format and containing a minimum set of basic halo properties.

[ascl:1011.002] DAOSPEC: An Automatic Code for Measuring Equivalent Widths in High-resolution Stellar Spectra

DAOSPEC is a Fortran code for measuring equivalent widths of absorption lines in stellar spectra with minimal human involvement. It works with standard FITS format files and it is designed for use with high resolution (R>15000) and high signal-to-noise-ratio (S/N>30) spectra that have been binned on a linear wavelength scale. First, we review the analysis procedures that are usually employed in the literature. Next, we discuss the principles underlying DAOSPEC and point out similarities and differences with respect to conventional measurement techniques. Then experiments with artificial and real spectra are discussed to illustrate the capabilities and limitations of DAOSPEC, with special attention given to the issues of continuum placement; radial velocities; and the effects of strong lines and line crowding. Finally, quantitative comparisons with other codes and with results from the literature are also presented.

[ascl:1104.011] DAOPHOT: Crowded-field Stellar Photometry Package

The DAOPHOT program exploits the capability of photometrically linear image detectors to perform stellar photometry in crowded fields. Raw CCD images are prepared prior to analysis, and following the obtaining of an initial star list with the FIND program, synthetic aperture photometry is performed on the detected objects with the PHOT routine. A local sky brightness and a magnitude are computed for each star in each of the specified stellar apertures, and for crowded fields, the empirical point-spread function must then be obtained for each data frame. The GROUP routine divides the star list for a given frame into optimum subgroups, and then the NSTAR routine is used to obtain photometry for all the stars in the frame by means of least-squares profile fits.

[ascl:1709.005] DanIDL: IDL solutions for science and astronomy

DanIDL provides IDL functions and routines for many standard astronomy needs, such as searching for matching points between two coordinate lists of two-dimensional points where each list corresponds to a different coordinate space, estimating the full-width half-maximum (FWHM) and ellipticity of the PSF of an image, calculating pixel variances for a set of calibrated image data, and fitting a 3-parameter plane model to image data. The library also supplies astrometry, general image processing, and general scientific applications.

[ascl:1807.023] DAMOCLES: Monte Carlo line radiative transfer code

The Monte Carlo code DAMOCLES models the effects of dust, composed of any combination of species and grain size distributions, on optical and NIR emission lines emitted from the expanding ejecta of a late-time (> 1 yr) supernova. The emissivity and dust distributions follow smooth radial power-law distributions; any arbitrary distribution can be specified by providing the appropriate grid. DAMOCLES treats a variety of clumping structures as specified by a clumped dust mass fraction, volume filling factor, clump size and clump power-law distribution, and the emissivity distribution may also initially be clumped. The code has a large number of variable parameters ranging from 5 dimensions in the simplest models to > 20 in the most complex cases.

[ascl:1412.004] DAMIT: Database of Asteroid Models from Inversion Techniques

DAMIT (Database of Asteroid Models from Inversion Techniques) is a database of three-dimensional models of asteroids computed using inversion techniques; it provides access to reliable and up-to-date physical models of asteroids, i.e., their shapes, rotation periods, and spin axis directions. Models from DAMIT can be used for further detailed studies of individual objects as well as for statistical studies of the whole set. The source codes for lightcurve inversion routines together with brief manuals, sample lightcurves, and the code for the direct problem are available for download.

[ascl:1011.006] DAME: A Web Oriented Infrastructure for Scientific Data Mining & Exploration

DAME (DAta Mining & Exploration) is an innovative, general purpose, Web-based, VObs compliant, distributed data mining infrastructure specialized in the exploration of massive data sets with machine learning methods. Initially fine-tuned to deal with astronomical data only, DAME has evolved into a general purpose platform that has also found applications in other domains of human endeavor.

[ascl:1706.003] DaMaSCUS: Dark Matter Simulation Code for Underground Scatterings

DaMaSCUS calculates the density and velocity distribution of dark matter (DM) at any detector of given depth and latitude to provide dark matter particle trajectories inside the Earth. Provided a strong enough DM-matter interaction, the particles scatter on terrestrial atoms and get decelerated and deflected. The resulting local modifications of the DM velocity distribution and number density can have important consequences for direct detection experiments, especially for light DM, and lead to signatures such as diurnal modulations depending on the experiment's location on Earth. The code involves both the Monte Carlo simulation of particle trajectories and generation of data as well as the data analysis consisting of non-parametric density estimation of the local velocity distribution functions and computation of direct detection event rates.

[ascl:1803.001] DaMaSCUS-CRUST: Dark Matter Simulation Code for Underground Scatterings - Crust Edition

DaMaSCUS-CRUST determines the critical cross section for strongly interacting DM for various direct detection experiments systematically and precisely using Monte Carlo simulations of DM trajectories inside the Earth's crust, atmosphere, or any kind of shielding. Above a critical dark matter-nucleus scattering cross section, any terrestrial direct detection experiment loses sensitivity to dark matter, since the Earth's crust, atmosphere, and potential shielding layers start to block off the dark matter particles. This critical cross section is commonly determined by describing the average energy loss of the dark matter particles analytically; however, this treatment overestimates the stopping power of the Earth's crust, and the obtained bounds should therefore be considered conservative. DaMaSCUS-CRUST is a modified version of DaMaSCUS (ascl:1706.003) that accounts for shielding effects and returns a precise exclusion band.

[ascl:1507.015] DALI: Derivative Approximation for LIkelihoods

DALI (Derivative Approximation for LIkelihoods) is a fast approximation of non-Gaussian likelihoods. It extends the Fisher Matrix in a straightforward way and allows for a wider range of posterior shapes. The code is written in C/C++.
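
For illustration, the Fisher matrix that DALI generalizes can be computed from a model's numerical Jacobian; the sketch below (ours, assuming a Gaussian likelihood with a fixed data covariance) shows the second-order term, which DALI supplements with higher-order derivative tensors to capture non-Gaussian posterior shapes.

    import numpy as np

    def fisher_matrix(model, theta0, cov, h=1e-4):
        # model: callable returning the data vector mu(theta); theta0: fiducial parameters
        theta0 = np.asarray(theta0, dtype=float)
        grads = []
        for i in range(theta0.size):
            dt = np.zeros_like(theta0)
            dt[i] = h
            grads.append((model(theta0 + dt) - model(theta0 - dt)) / (2.0 * h))
        J = np.array(grads)                    # (nparam, ndata) Jacobian
        cinv = np.linalg.inv(np.atleast_2d(cov))
        return J @ cinv @ J.T                  # F_ij = dmu/dtheta_i . C^-1 . dmu/dtheta_j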

[ascl:1804.005] DaCHS: Data Center Helper Suite

DaCHS, the Data Center Helper Suite, is an integrated package for publishing astronomical data sets to the Virtual Observatory. Network-facing, it speaks the major VO protocols (SCS, SIAP, SSAP, TAP, Datalink, etc.). Operator-facing, it processes many input formats, including FITS/WCS, ASCII files, and VOTable, into publication-ready data. DaCHS puts particular emphasis on integrated metadata handling, which facilitates tight integration with the VO's Registry.

[ascl:1612.007] dacapo_calibration: Photometric calibration code

dacapo_calibration implements the DaCapo algorithm used in the Planck/LFI 2015 data release for photometric calibration. The code takes as input a set of TODs and calibrates them using the CMB dipole signal. DaCapo is a variant of the well-known family of destriping algorithms for map-making.

[ascl:1504.018] D3PO: Denoising, Deconvolving, and Decomposing Photon Observations

D3PO (Denoising, Deconvolving, and Decomposing Photon Observations) addresses the inference problem of denoising, deconvolving, and decomposing photon observations. Its primary goal is the simultaneous but individual reconstruction of the diffuse and point-like photon flux given a single photon count image, where the fluxes are superimposed. A hierarchical Bayesian parameter model is used to discriminate between morphologically different signal components, yielding a diffuse and a point-like signal estimate for the photon flux components.

[ascl:1606.003] Cygrid: Cython-powered convolution-based gridding module for Python

The Python module Cygrid grids (resamples) data to any collection of spherical target coordinates, although its typical application involves FITS maps or data cubes. The module supports the FITS world coordinate system (WCS) standard; its underlying algorithm is based on the convolution of the original samples with a 2D Gaussian kernel. A lookup table scheme allows parallelization of the code and is combined with the HEALPix tessellation of the sphere for fast neighbor searches. Cygrid's runtime scales between O(n) and O(n log n), with n being the number of input samples.
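
A sketch of typical usage follows; the class and method names are as we recall them from the Cygrid documentation, the toy header and data are ours, and the exact signatures should be checked against the package itself.

    import numpy as np
    import cygrid

    # Toy target grid: a small SIN-projected map (illustrative header, ours)
    target_header = {
        'NAXIS': 3, 'NAXIS1': 100, 'NAXIS2': 100, 'NAXIS3': 1,
        'CTYPE1': 'RA---SIN', 'CTYPE2': 'DEC--SIN',
        'CRVAL1': 180.0, 'CRVAL2': 30.0, 'CRPIX1': 50.0, 'CRPIX2': 50.0,
        'CDELT1': -0.01, 'CDELT2': 0.01,
    }
    rng = np.random.default_rng(0)
    lons = 180.0 + rng.uniform(-0.4, 0.4, 10000)      # sample coordinates (deg)
    lats = 30.0 + rng.uniform(-0.4, 0.4, 10000)
    signal = rng.normal(size=(10000, 1))              # one spectral channel

    gridder = cygrid.WcsGrid(target_header)
    sigma = 0.005                                     # Gaussian kernel width (deg)
    gridder.set_kernel('gauss1d', (sigma,), 3.0 * sigma, sigma / 2.0)
    gridder.grid(lons, lats, signal)
    cube = gridder.get_datacube()                     # convolution-gridded output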

[submitted] cuvarbase: fast period finding utilities for GPUs (Python)

cuvarbase provides a Python (2.7+) library for performing period finding (Lomb-Scargle, Phase Dispersion Minimization, Conditional Entropy, Box-least squares) on astronomical time-series datasets. Speedups over CPU implementations depend on the algorithm, dataset, and GPU capabilities but are typically ~1-2 orders of magnitude and are especially high for BLS and Lomb-Scargle. The code is unit tested and available via pip or from source on GitHub.

[ascl:1708.018] CUTEX: CUrvature Thresholding EXtractor

CuTEx analyzes images in the infrared bands and extracts sources from complex backgrounds, particularly star-forming regions that offer the challenges of crowding, a highly spatially variable background, and non-PSF profiles such as protostars in their accreting phase. The code is composed of two main algorithms, the first for source detection and the second for flux extraction. The code was originally written in IDL and has been ported to the license-free GDL language. CuTEx can also be applied to other bands or to scientific cases different from its native one.

This software is also available as an on-line tool from the Multi-Mission Interactive Archive web pages dedicated to the Herschel Observatory.

[ascl:1505.016] CUTE: Correlation Utilities and Two-point Estimation

CUTE (Correlation Utilities and Two-point Estimation) extracts any two-point statistic from enormous datasets with hundreds of millions of objects, such as large galaxy surveys. The computational time grows with the square of the number of objects to be correlated; technology provides multiple means to massively parallelize this problem, and CUTE is specifically designed for this kind of calculation. Two implementations are provided: one for execution on shared-memory machines using OpenMP and one that runs on graphical processing units (GPUs) using CUDA.
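
The quadratic cost is easy to see in a brute-force pair count. The NumPy sketch below (ours, not CUTE's code) histograms all unique pair separations of a toy catalog; performing the same operation on hundreds of millions of objects is what CUTE parallelizes.

    import numpy as np

    def pair_counts(pos, edges):
        # All pairwise separations: O(N^2) in time and memory
        d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
        iu = np.triu_indices(len(pos), k=1)          # unique pairs only
        counts, _ = np.histogram(d[iu], bins=edges)
        return counts

    rng = np.random.default_rng(42)
    pos = rng.uniform(0.0, 100.0, size=(1000, 3))    # toy 3D catalog
    dd = pair_counts(pos, np.linspace(0.0, 50.0, 26))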

[ascl:1405.015] CURSA: Catalog and Table Manipulation Applications

The CURSA package manipulates astronomical catalogs and similar tabular datasets. It provides facilities for browsing or examining catalogs; selecting subsets from a catalog; sorting and copying catalogs; pairing two catalogs; converting catalog coordinates between some celestial coordinate systems; plotting finding charts; and performing photometric calibration. It can also extract subsets from a catalog in a format suitable for plotting using other Starlink packages such as PONGO. CURSA can access catalogs held in the popular FITS table format, the Tab-Separated Table (TST) format, or the Small Text List (STL) format. Catalogs in the STL and TST formats are simple ASCII text files. CURSA also includes some facilities for accessing remote on-line catalogs via the Internet. It is part of the Starlink software collection (ascl:1110.012).

[ascl:1311.008] CUPID: Customizable User Pipeline for IRS Data

Written in C, the Customizable User Pipeline for IRS Data (CUPID) allows users to run the Spitzer IRS pipelines to re-create Basic Calibrated Data and extract calibrated spectra from the archived raw files. CUPID provides full access to all the parameters of the BCD, COADD, BKSUB, BKSUBX, and COADDX pipelines, as well as the opportunity for users to provide their own calibration files (e.g., flats or darks). CUPID is available for Mac, Linux, and Solaris operating systems.

[ascl:1311.007] CUPID: Clump Identification and Analysis Package

The CUPID package allows the identification and analysis of clumps of emission within 1, 2, or 3 dimensional data arrays. Whilst targeted primarily at sub-mm cubes, it can be used on any regularly gridded 1, 2, or 3D data. A variety of clump finding algorithms are implemented within CUPID, including the established ClumpFind (ascl:1107.014) and GaussClumps algorithms. In addition, two new algorithms called FellWalker and Reinhold are also provided. CUPID allows easy inter-comparison between the results of different algorithms; the catalogues produced by each algorithm contain a standard set of columns giving clump peak position, clump centroid position, the integrated data value within the clump, clump volume, and the dimensions of the clump. In addition, pixel masks are produced identifying which input pixels contribute to each clump. CUPID is distributed as part of the Starlink (ascl:1110.012) software collection.

[ascl:1109.013] CULSP: Fast Calculation of the Lomb-Scargle Periodogram Using Graphics Processing Units

I introduce a new code for fast calculation of the Lomb-Scargle periodogram that leverages the computing power of graphics processing units (GPUs). After establishing a background to the newly emergent field of GPU computing, I discuss the code design and narrate key parts of its source. Benchmarking calculations indicate no significant differences in accuracy compared to an equivalent CPU-based code. However, the differences in performance are pronounced; running on a low-end GPU, the code can match 8 CPU cores, and on a high-end GPU it is faster by a factor approaching thirty. Applications of the code include analysis of long photometric time series obtained by ongoing satellite missions and upcoming ground-based monitoring facilities, and Monte-Carlo simulation of periodogram statistical properties.
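
For comparison, the same periodogram can be computed on the CPU with Astropy's LombScargle implementation; the toy signal below is ours.

    import numpy as np
    from astropy.timeseries import LombScargle

    rng = np.random.default_rng(1)
    t = np.sort(rng.uniform(0.0, 100.0, 500))               # irregular sampling
    y = np.sin(2.0 * np.pi * t / 7.3) + 0.5 * rng.standard_normal(t.size)
    frequency, power = LombScargle(t, y).autopower()        # periodogram
    best_period = 1.0 / frequency[np.argmax(power)]         # ~7.3 for this signal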

[ascl:1810.015] cuFFS: CUDA-accelerated Fast Faraday Synthesis

cuFFS (CUDA-accelerated Fast Faraday Synthesis) performs Faraday rotation measure synthesis; it is particularly well-suited for performing RM synthesis on large datasets. Compared to a fast single-threaded and vectorized CPU implementation, depending on the structure and format of the data cubes, cuFFS achieves an increase in speed of up to two orders of magnitude. The code assumes that the pixel values are IEEE single precision floating points (BITPIX=-32), and the input cubes must have 3 axes (2 spatial dimensions and 1 frequency axis) with the frequency axis as NAXIS1. A package is included to reformat data with individual Stokes Q and U channel maps to the required format. The code supports both the HDFITS format and the standard FITS format, and is written in C with GPU acceleration achieved using Nvidia's CUDA parallel computing platform.
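
The underlying operation is a discrete Faraday depth transform, F(phi) = (1/N) sum_j (Q_j + iU_j) exp(-2i phi (lambda_j^2 - lambda_0^2)), evaluated per sight line. A plain NumPy sketch of that sum (ours, not the CUDA implementation):

    import numpy as np

    def rm_synthesis(q, u, lam2, phi):
        # q, u: Stokes spectra per channel; lam2: wavelength^2 (m^2); phi: rad m^-2
        p = q + 1j * u
        phase = np.exp(-2j * np.outer(phi, lam2 - lam2.mean()))
        return phase @ p / lam2.size                 # Faraday depth spectrum F(phi)

    lam2 = (3.0e8 / np.linspace(1.13e9, 1.52e9, 300)) ** 2
    phi = np.linspace(-500.0, 500.0, 1001)
    q, u = np.cos(2 * 50.0 * lam2), np.sin(2 * 50.0 * lam2)  # source at phi = 50
    spectrum = rm_synthesis(q, u, lam2, phi)                 # |spectrum| peaks near 50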

[ascl:1111.007] CUBISM: CUbe Builder for IRS Spectra Maps

CUBISM, written in IDL, constructs spectral cubes, maps, and arbitrary aperture 1D spectral extractions from sets of mapping mode spectra taken with Spitzer's IRS spectrograph. CUBISM is optimized for non-sparse maps of extended objects, e.g. the nearby galaxy sample of SINGS, but can be used with data from any spectral mapping AOR (primarily validated for maps which are designed as suggested by the mapping HOWTO).

[ascl:1805.031] CubiCal: Suite for fast radio interferometric calibration

CubiCal implements several accelerated gain solvers which exploit complex optimization for fast radio interferometric gain calibration. The code can be used for both direction-independent and direction-dependent self-calibration. CubiCal is implemented in Python and Cython, and multiprocessing is fully supported.

[ascl:1208.018] CUBEP3M: High performance P3M N-body code

CUBEP3M is a high performance cosmological N-body code which has many utilities and extensions, including a runtime halo finder, a non-Gaussian initial conditions generator, tuneable accuracy, and a system of unique particle identification. CUBEP3M is fast, has a memory footprint up to three times lower than other widely used N-body codes, and has been run on up to 20,000 cores, achieving close to ideal weak scaling even at this problem size. It is well suited to, and has already been used for, a broad range of science applications that require either large samples of non-linear realizations or very large dark matter N-body simulations, including cosmological reionization, baryonic acoustic oscillations, weak lensing, and non-Gaussian statistics.

[ascl:1512.010] CubeIndexer: Indexer for regions of interest in data cubes

CubeIndexer indexes regions of interest (ROIs) in data cubes reducing the necessary storage space. The software can process data cubes containing megabytes of data in fractions of a second without human supervision, thus allowing it to be incorporated into a production line for displaying objects in a virtual observatory. The software forms part of the Chilean Virtual Observatory (ChiVO) and provides the capability of content-based searches on data cubes to the astronomical community.

[ascl:1805.018] CUBE: Information-optimized parallel cosmological N-body simulation code

CUBE, written in Coarray Fortran, is a particle-mesh based parallel cosmological N-body simulation code. Its memory usage can be as low as 6 bytes per particle. Particle-particle (PP) forces, cosmological neutrinos, and a spherical overdensity (SO) halo finder are included.

[ascl:1609.010] CuBANz: Photometric redshift estimator

CuBANz is a photometric redshift estimator code for high redshift galaxies that uses a backpropagation neural network along with clustering of the training set, making it very efficient. The training set is divided into several self-learning clusters with galaxies having similar photometric properties and spectroscopic redshifts within a given span. The clustering algorithm uses the color information (e.g., u-g, g-r) rather than the apparent magnitudes at various photometric bands, as the photometric redshift is more sensitive to the flux differences between different bands than to the actual values. The clustering method enables accurate determination of the redshifts. CuBANz considers uncertainty in the photometric measurements as well as uncertainty in the neural network training. The code is written in C.

[ascl:1608.008] Cuba: Multidimensional numerical integration library

The Cuba library offers four independent routines for multidimensional numerical integration: Vegas, Suave, Divonne, and Cuhre. The four algorithms work by very different methods; all can integrate vector integrands and have very similar Fortran, C/C++, and Mathematica interfaces. Their invocation is very similar, making it easy to cross-check results by substituting one method for another. For further safeguarding, the output is supplemented by a chi-square probability which quantifies the reliability of the error estimate.

[ascl:1601.005] ctools: Cherenkov Telescope Science Analysis Software

ctools provides tools for the scientific analysis of Cherenkov Telescope Array (CTA) data. Analysis of data from existing Imaging Air Cherenkov Telescopes (such as H.E.S.S., MAGIC, or VERITAS) is also supported, provided that the data and response functions are available in the format defined for CTA. ctools comprises a set of ftools-like binary executables with a command-line interface allowing for interactive step-wise data analysis. A Python module allows control of all executables, and the creation of shell or Python scripts and pipelines is supported. ctools provides cscripts, which are Python scripts complementing the binary executables. Extension of the ctools package with user-defined binary executables or Python scripts is supported. ctools is based on GammaLib (ascl:1110.007).

[ascl:1307.015] CTI Correction Code

Charge Transfer Inefficiency (CTI) due to radiation damage above the Earth's atmosphere creates spurious trailing in images from Charge-Coupled Device (CCD) imaging detectors. Radiation damage also creates unrelated warm pixels, which can be used to measure CTI. This code provides pixel-based correction for CTI and has proven effective on Hubble Space Telescope Advanced Camera for Surveys raw images, successfully reducing the CTI trails by a factor of ~30 everywhere in the CCD and at all flux levels. The core is written in Java for speed, and a front-end user interface is provided in IDL. The code operates on raw data by returning individual electrons to pixels from which they were unintentionally dragged during readout. Correction takes about 25 minutes per ACS exposure, but is trivially parallelisable to multiple processors.

[ascl:0104.002] CSENV: A code for the chemistry of CircumStellar ENVelopes

CSENV is a code that computes the chemical abundances for a desired set of species as a function of radius in a stationary, non-clumpy, CircumStellar ENVelope. The chemical species can be atoms, molecules, ions, radicals, molecular ions, and/or their specific quantum states. Collisional ionization or excitation can be incorporated through the proper chemical channels. The chemical species interact with one another and are subject to photo-processes (dissociation of molecules, radicals, and molecular ions as well as ionization of all species). Cosmic ray ionization can be included. Chemical reaction rates are specified with possible activation temperatures and additional power-law dependences. Photo-absorption cross-sections vs. wavelength, with appropriate thresholds, can be specified for each species, while for H2+ a photoabsorption cross-section is provided as a function of wavelength and temperature. The photons originate from both the star and the external interstellar medium. The chemical species are shielded from the photons by circumstellar dust, by other species, and by themselves (self-shielding). Shielding of continuum-absorbing species by these species (self and mutual shielding), line-absorbing species, and dust varies with radial optical depth. The envelope is spherical by default, but can be made bipolar with an opening solid-angle that varies with radius. In the non-spherical case, no provision is made for photons penetrating the envelope from the sides. The envelope is subject to a radial outflow (or wind) with constant velocity by default, but the wind velocity can be made to vary with radius. The temperature of the envelope is specified (and thus not computed self-consistently).

[ascl:1308.011] CRUSH: Comprehensive Reduction Utility for SHARC-2 (and more...)

CRUSH is an astronomical data reduction/imaging tool for certain imaging cameras, especially at the millimeter, sub-millimeter, and far-infrared wavelengths. It supports the SHARC-2, LABOCA, SABOCA, ASZCA, p-ArTeMiS, PolKa, GISMO, MAKO and SCUBA-2 instruments. The code is written entirely in Java, allowing it to run on virtually any platform. It is normally run from the command-line with several arguments.

[ascl:1202.007] CRUNCH3D: Three-dimensional compressible MHD code

CRUNCH3D is a massively parallel, viscoresistive, three-dimensional compressible MHD code. The code employs a Fourier collocation spatial discretization, and uses a second-order Runge-Kutta temporal discretization. CRUNCH3D can be applied to MHD turbulence and magnetic fluxtube reconnection research.

[ascl:1412.013] CRPropa: Numerical tool for the propagation of UHE cosmic rays, gamma-rays and neutrinos

CRPropa computes the observable properties of UHECRs and their secondaries in a variety of models for the sources and propagation of these particles. CRPropa takes into account interactions and deflections of primary UHECRs as well as propagation of secondary electromagnetic cascades and neutrinos. CRPropa makes use of the public code SOPHIA (ascl:1412.014), and the TinyXML, CFITSIO (ascl:1010.001), and CLHEP libraries. A major advantage of CRPropa is its modularity, which allows users to implement their own modules adapted to specific UHECR propagation models.

[ascl:1110.020] CROSS_CMBFAST: ISW-correlation Code

This code is an extension of CMBFAST4.5.1 that computes the ISW-correlation power spectrum and the 2-point angular ISW-correlation function for a given galaxy window function. It includes dark energy models specified by a constant equation of state (w) or a linear parameterization in the scale factor (w0, wa) and a constant sound speed (c2de). The ISW computation is limited to flat geometry. Unlike in the original CMBFAST4.5 version, dark energy perturbations are implemented for a general dark energy fluid specified by w(z) and c2de in synchronous gauge. For time-varying dark energy models it is suggested not to cross the w=-1 line; as Dr. Venkman says, "never cross the streams": bad things can happen.

[ascl:1708.003] CRISPRED: CRISP imaging spectropolarimeter data reduction pipeline

CRISPRED reduces data from the CRISP imaging spectropolarimeter at the Swedish 1 m Solar Telescope (SST). It performs fitting routines, corrects optical aberrations from atmospheric turbulence as well as from the optics, and compensates for inter-camera misalignments, field-dependent and time-varying instrumental polarization, and spatial variation in the detector gain and in the zero level offset (bias). It has an object-oriented IDL structure with computationally demanding routines performed in C subprograms called as dynamically loadable modules (DLMs).

[ascl:1612.009] CRETE: Comet RadiativE Transfer and Excitation

CRETE (Comet RadiativE Transfer and Excitation) is a one-dimensional water excitation and radiation transfer code for sub-millimeter wavelengths based on the RATRAN code (ascl:0008.002). The code considers rotational transitions of water molecules given a Haser spherically symmetric distribution for the cometary coma and produces FITS image cubes that can be analyzed with tools like MIRIAD (ascl:1106.007). In addition to collisional processes to excite water molecules, the effect of infrared radiation from the Sun is approximated by effective pumping rates for the rotational levels in the ground vibrational state.

[ascl:1308.009] CReSyPS: Stellar population synthesis code

CReSyPS (Code Rennais de Synthèse de Populations Stellaires) is a stellar population synthesis code that determines the amount of core overshooting for main sequence stars in the Magellanic Clouds.

[ascl:1111.002] CRBLASTER: A Parallel-Processing Computational Framework for Embarrassingly-Parallel Image-Analysis Algorithms

The development of parallel-processing image-analysis codes is generally a challenging task that requires complicated choreography of interprocessor communications. If, however, the image-analysis algorithm is embarrassingly parallel, then the development of a parallel-processing implementation of that algorithm can be a much easier task to accomplish because, by definition, there is little need for communication between the compute processes. I describe the design, implementation, and performance of a parallel-processing image-analysis application, called CRBLASTER, which does cosmic-ray rejection of CCD (charge-coupled device) images using the embarrassingly parallel L.A.COSMIC algorithm. CRBLASTER is written in C using the high-performance computing industry standard Message Passing Interface (MPI) library. The code has been designed to be used by research scientists who are familiar with C as a parallel-processing computational framework that enables the easy development of parallel-processing image-analysis programs based on embarrassingly parallel algorithms. The CRBLASTER source code is freely available at the official application website at the National Optical Astronomy Observatory. Removing cosmic rays from a single 800x800 pixel Hubble Space Telescope WFPC2 image takes 44 seconds with the IRAF script lacos_im.cl running on a single core of an Apple Mac Pro computer with two 2.8-GHz quad-core Intel Xeon processors. CRBLASTER is 7.4 times faster when processing the same image on a single core of the same machine. Processing the same image with CRBLASTER simultaneously on all 8 cores of the same machine takes 0.875 seconds, a speedup factor of 50.3 relative to the IRAF script. A detailed analysis is presented of the performance of CRBLASTER using between 1 and 57 processors on a low-power Tilera 700-MHz 64-core TILE64 processor.

[ascl:1101.008] CRASH: A Block-Adaptive-Mesh Code for Radiative Shock Hydrodynamics

We describe the CRASH (Center for Radiative Shock Hydrodynamics) code, a block adaptive mesh code for multi-material radiation hydrodynamics. The implementation solves the radiation diffusion model with the gray or multigroup method and uses a flux limited diffusion approximation to recover the free-streaming limit. The electrons and ions are allowed to have different temperatures and we include a flux limited electron heat conduction. The radiation hydrodynamic equations are solved in the Eulerian frame by means of a conservative finite volume discretization in either one, two, or three-dimensional slab geometry or in two-dimensional cylindrical symmetry. An operator split method is used to solve these equations in three substeps: (1) solve the hydrodynamic equations with shock-capturing schemes, (2) a linear advection of the radiation in frequency-logarithm space, and (3) an implicit solve of the stiff radiation diffusion, heat conduction, and energy exchange. We present a suite of verification test problems to demonstrate the accuracy and performance of the algorithms. The CRASH code is an extension of the Block-Adaptive Tree Solarwind Roe Upwind Scheme (BATS-R-US) code with this new radiation transfer and heat conduction library and equation-of-state and multigroup opacity solvers. Both CRASH and BATS-R-US are part of the publicly available Space Weather Modeling Framework (SWMF).

[submitted] CR-SISTEM: Symplectic integrator for lunar core-mantle and orbital dynamics

This integrator is based on the algorithm of Touma and Wisdom (2001, http://ui.adsabs.harvard.edu/abs/2001AJ....122.1030T). The triaxial Moon has a triaxial liquid core and is perturbed by the Sun and Earth's oblateness. Orbits of the Moon and Earth are fully integrated, and other planets (or additional point-mass satellites) may be included in the integration. Lunar and solar tides on Earth, eccentricity and obliquity tides on the Moon, and lunar core-mantle friction are all included. The tides on Earth and the Moon are treated in the same way as in Cuk et al. (2016, http://ui.adsabs.harvard.edu/abs/2016Natur.539..402C), and many details of their closely related code can be found in the online supplement of that paper. In the posted version, the lunar core-mantle friction torque is directly proportional to the core-mantle differential rotation, with a fixed damping timescale of 10,000 present-day sidereal months (120 yrs), after Pavlov et al. (2016, https://ui.adsabs.harvard.edu/abs/2016CeMDA.126...61P).

[ascl:1102.012] CPROPS: Bias-free Measurement of Giant Molecular Cloud Properties

CPROPS, written in IDL, processes FITS data cubes containing molecular line emission and returns the properties of molecular clouds contained within it. Without corrections for the effects of beam convolution and sensitivity to GMC properties, the resulting properties may be severely biased. This is particularly true for extragalactic observations, where resolution and sensitivity effects often bias measured values by 40% or more. We correct for finite spatial and spectral resolutions with a simple deconvolution and we correct for sensitivity biases by extrapolating properties of a GMC to those we would expect to measure with perfect sensitivity. The resulting method recovers the properties of a GMC to within 10% over a large range of resolutions and sensitivities, provided the clouds are marginally resolved with a peak signal-to-noise ratio greater than 10. We note that interferometers systematically underestimate cloud properties, particularly the flux from a cloud. The degree of bias depends on the sensitivity of the observations and the (u,v) coverage of the observations. In the Appendix to the paper we present a conservative, new decomposition algorithm for identifying GMCs in molecular-line observations. This algorithm treats the data in physical rather than observational units, does not produce spurious clouds in the presence of noise, and is sensitive to a range of morphologies. As a result, the output of this decomposition should be directly comparable among disparate data sets.

The CPROPS package contains within it a distribution of the CLUMPFIND code written by Jonathan Williams and described in Williams, de Geus, and Blitz (1994). CLUMPFIND is also available as a stand-alone package. If you make use of the CLUMPFIND functionality in the CPROPS package for a publication, please cite the original article.

[ascl:1710.009] CppTransport: Two- and three-point function transport framework for inflationary cosmology

CppTransport solves the 2- and 3-point functions of the perturbations produced during an inflationary epoch in the very early universe. It is implemented for models with canonical kinetic terms, although the underlying method is quite general and could be scaled to handle models with a non-trivial field-space metric or an even more general non-canonical Lagrangian.

[ascl:1402.010] CPL: Common Pipeline Library

The Common Pipeline Library (CPL) is a set of ISO-C libraries that provide a comprehensive, efficient and robust software toolkit to create automated astronomical data reduction pipelines. Though initially developed as a standardized way to build VLT instrument pipelines, the CPL may be more generally applied to any similar application. The code also provides a variety of general purpose image- and signal-processing functions, making it an excellent framework for the creation of more generic data handling packages. The CPL handles low-level data types (images, tables, matrices, strings, property lists, etc.) and medium-level data access methods (a simple data abstraction layer for FITS files). It also provides table organization and manipulation, keyword/value handling and management, and support for dynamic loading of recipe modules using programs such as EsoRex (ascl:1504.003).

[ascl:1808.003] CPF: Corral Pipeline Framework

Corral generates astronomical pipelines. Data processing pipelines represent an important slice of the astronomical software library; they comprise chains of processes that transform raw data into valuable information via data reduction and analysis. Written in Python, Corral features a Model-View-Controller design pattern on top of an SQL relational database capable of handling custom data models, processing stages, and communication alerts. It also provides automatic quality and structural metrics based on unit testing. The Model-View-Controller pattern separates the user logic from the data models while delivering multiprocessing and distributed computing capabilities.

[ascl:1904.028] covdisc: Disconnected covariance of 2-point functions in large-scale structure of the Universe

covdisc computes the disconnected part of the covariance matrix of 2-point functions in large-scale structure studies, accounting for the survey window effect. The method works for both the power spectrum and the correlation function, and applies to the covariances for various probes, including the multipoles and the wedges of 3D clustering, the angular and the projected statistics of clustering and lensing, as well as their cross-covariances.

[ascl:1512.013] CounterPoint: Zeeman-split absorption lines

CounterPoint works in concert with MoogStokes (ascl:1308.018). It applies the Zeeman effect to the atomic lines in the region of study, splitting them into the correct number of Zeeman components and adjusting their relative intensities according to the predictions of Quantum Mechanics, and finally creates a Moog-readable line list for use with MoogStokes. CounterPoint has the ability to use VALD and HITRAN line databases for both atomic and molecular lines.

[ascl:1307.010] cosmoxi2d: Two-point galaxy correlation function calculation

Cosmoxi2d is written in C and computes the theoretical two-point galaxy correlation function as a function of cosmological and galaxy nuisance parameters. It numerically evaluates the model described in detail in Reid and White 2011 (arxiv:1105.4165) and Reid et al. 2012 (arxiv:1203.6641) for the multipole moments (up to ell = 4) of the observed redshift space correlation function of biased tracers as a function of cosmological parameters (through an input linear matter power spectrum, growth rate f, and Alcock-Paczynski geometric factors alphaperp and alphapar) as well as nuisance parameters describing the tracers (bias and small-scale additive velocity dispersion, isotropicdisp1d).

This model works best for highly biased tracers where the 2nd order bias term is small. On scales larger than 100 Mpc, the code relies on 2nd order Lagrangian Perturbation theory as detailed in Matsubara 2008 (PRD 78, 083519), and uses the analytic version of Reid and White 2011 on smaller scales.

[ascl:1504.010] CosmoTransitions: Cosmological Phase Transitions

CosmoTransitions analyzes early-Universe finite-temperature phase transitions with multiple scalar fields. The code enables analysis of the phase structure of an input theory, determines the amount of supercooling at each phase transition, and finds the bubble-wall profiles of the nucleated bubbles that drive the transitions.

[ascl:1311.009] CosmoTherm: Thermalization code

CosmoTherm allows precise computation of CMB spectral distortions caused by energy release in the early Universe. Different energy-release scenarios (e.g., decaying or annihilating particles) are implemented using the Green's function of the cosmological thermalization problem, allowing fast computation of the distortion signal. The full thermalization problem can be solved on a case-by-case basis for a wide range of energy-release scenarios using the full PDE solver of CosmoTherm. A simple Monte-Carlo toolkit is included for parameter estimation and forecasts using the Green's function method.

[ascl:1701.004] CosmoSlik: Cosmology sampler of likelihoods

CosmoSlik quickly puts together, runs, and analyzes an MCMC chain for analysis of cosmological data. It is highly modular and comes with plugins for CAMB (ascl:1102.026), CLASS (ascl:1106.020), the Planck likelihood, the South Pole Telescope likelihood, other cosmological likelihoods, emcee (ascl:1303.002), and more. It offers ease-of-use, flexibility, and modularity.

[ascl:1409.012] CosmoSIS: Cosmological parameter estimation

CosmoSIS is a cosmological parameter estimation code. It structures cosmological parameter estimation to ease re-usability, debugging, verifiability, and code sharing in the form of calculation modules. Written in Python, CosmoSIS consolidates and connects existing code for predicting cosmic observables and maps out experimental likelihoods with a range of different techniques.

[ascl:1705.001] COSMOS: Carnegie Observatories System for MultiObject Spectroscopy

COSMOS (Carnegie Observatories System for MultiObject Spectroscopy) reduces multislit spectra obtained with the IMACS and LDSS3 spectrographs on the Magellan Telescopes. It can be used for the quick-look analysis of data at the telescope as well as for pipeline reduction of large data sets. COSMOS is based on a precise optical model of the spectrographs, which allows (after alignment and calibration) an accurate prediction of the location of spectral features. This eliminates the line search procedure which is fundamental to many spectral reduction programs, and allows a robust data pipeline to be run in an almost fully automatic mode, allowing large amounts of data to be reduced with minimal intervention.

[ascl:1304.017] CosmoRec: Cosmological Recombination code

CosmoRec solves the recombination problem including recombinations to highly excited states, corrections to the 2s-1s two-photon channel, HI Lyn-feedback, n>2 two-photon profile corrections, and n≥2 Raman-processes. The code can solve the radiative transfer equation of the Lyman-series photon field to obtain the required modifications to the rate equations of the resolved levels, and handles electron scattering, the effect of HeI intercombination transitions, and absorption of helium photons by hydrogen. It also allows accounting for dark matter annihilation and optionally includes detailed helium radiative transfer effects.

[ascl:1212.006] CosmoPMC: Cosmology sampling with Population Monte Carlo

CosmoPMC is a Monte Carlo sampling code for exploring the likelihood of various cosmological probes. Its sampling engine, Population Monte Carlo (PMC), is implemented in the pmclib package; PMC is an adaptive importance-sampling technique that iteratively improves the proposal distribution to approximate the posterior. The code has been introduced, tested, and applied to various cosmological data sets.
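
A minimal sketch of the Population Monte Carlo idea (ours, not pmclib's implementation): draw from a Gaussian proposal, weight by posterior over proposal, and moment-match the proposal to the weighted sample.

    import numpy as np
    from scipy.stats import multivariate_normal

    def pmc(log_post, ndim, n=2000, iters=5, seed=0):
        rng = np.random.default_rng(seed)
        mu, cov = np.zeros(ndim), 4.0 * np.eye(ndim)       # broad initial proposal
        for _ in range(iters):
            x = rng.multivariate_normal(mu, cov, size=n)   # sample the proposal
            logw = np.apply_along_axis(log_post, 1, x) \
                   - multivariate_normal.logpdf(x, mu, cov)
            w = np.exp(logw - logw.max())
            w /= w.sum()                                   # importance weights
            mu = w @ x                                     # refit proposal mean ...
            cov = (x - mu).T @ ((x - mu) * w[:, None]) + 1e-9 * np.eye(ndim)
        return x, w                                        # weighted posterior sample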

[ascl:1408.018] CosmoPhotoz: Photometric redshift estimation using generalized linear models

CosmoPhotoz determines photometric redshifts from galaxies utilizing their magnitudes. The method uses generalized linear models which reproduce the physical aspects of the output distribution. The code can adopt gamma or inverse Gaussian families, either from a frequentist or a Bayesian perspective. A set of publicly available libraries and a web application are available. This software allows users to apply a set of GLMs to their own photometric catalogs and generates publication-quality plots with no involvement from the user. The code additionally provides a Shiny application providing a simple user interface.

[ascl:1110.019] CosmoNest: Cosmological Nested Sampling

CosmoNest is an algorithm for cosmological model selection. Given a model, defined by a set of parameters to be varied and their prior ranges, and data, the algorithm computes the evidence (the marginalized likelihood of the model in light of the data). The Bayes factor, which is proportional to the relative evidence of two models, can then be used for model comparison, i.e. to decide whether a model is an adequate description of data, or whether the data require a more complex model.

For convenience, CosmoNest, programmed in Fortran, is presented here as an optional add-on to CosmoMC (ascl:1106.025), which is widely used by the cosmological community to perform parameter fitting within a model using a Markov-Chain Monte-Carlo (MCMC) engine. For this reason it can be run very easily by anyone who is able to compile and run CosmoMC. CosmoNest implements a different sampling strategy, geared for computing the evidence very accurately and efficiently. It also provides posteriors for parameter fitting as a by-product.
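
A toy nested-sampling loop (ours, not CosmoNest's implementation) illustrates how the evidence is accumulated; production samplers replace the rejection step below with efficient constrained sampling and add a proper termination criterion.

    import numpy as np

    def nested_sampling(loglike, prior_draw, nlive=200, niter=2000, seed=0):
        rng = np.random.default_rng(seed)
        live = np.array([prior_draw(rng) for _ in range(nlive)])
        logl = np.array([loglike(p) for p in live])
        logz = -np.inf
        log_shell = np.log1p(-np.exp(-1.0 / nlive))     # ln of prior-volume shell width
        for k in range(niter):
            i = np.argmin(logl)                         # worst live point sets contour L*
            logz = np.logaddexp(logz, log_shell - k / nlive + logl[i])
            while True:                                 # replace it with a draw at L > L*
                cand = prior_draw(rng)
                cand_logl = loglike(cand)
                if cand_logl > logl[i]:
                    break
            live[i], logl[i] = cand, cand_logl
        return logz                                     # log-evidence estimate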

[ascl:1106.025] CosmoMC: Cosmological MonteCarlo

We present a fast Markov Chain Monte-Carlo exploration of cosmological parameter space. We perform a joint analysis of results from recent CMB experiments and provide parameter constraints, including sigma_8, from the CMB independent of other data. We next combine data from the CMB, HST Key Project, 2dF galaxy redshift survey, supernovae Ia and big-bang nucleosynthesis. The Monte Carlo method allows the rapid investigation of a large number of parameters, and we present results from 6 and 9 parameter analyses of flat models, and an 11 parameter analysis of non-flat models. Our results include constraints on the neutrino mass (m_nu < 0.3eV), equation of state of the dark energy, and the tensor amplitude, as well as demonstrating the effect of additional parameters on the base parameter constraints. In a series of appendices we describe the many uses of importance sampling, including computing results from new data and accuracy correction of results generated from an approximate method. We also discuss the different ways of converting parameter samples to parameter constraints, the effect of the prior, assess the goodness of fit and consistency, and describe the use of analytic marginalization over normalization parameters.
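
The MCMC engine at the heart of this approach can be illustrated with a generic random-walk Metropolis sketch (ours, in Python; CosmoMC itself is Fortran and uses considerably more sophisticated proposals and convergence diagnostics).

    import numpy as np

    def metropolis(log_post, x0, step, nsteps, seed=0):
        rng = np.random.default_rng(seed)
        x = np.asarray(x0, dtype=float)
        lp = log_post(x)
        chain = np.empty((nsteps, x.size))
        for i in range(nsteps):
            prop = x + step * rng.standard_normal(x.size)   # symmetric proposal
            lp_prop = log_post(prop)
            if np.log(rng.random()) < lp_prop - lp:         # accept w.p. min(1, ratio)
                x, lp = prop, lp_prop
            chain[i] = x
        return chain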

[ascl:1110.024] CosmoMC SNLS: CosmoMC Plug-in to Analyze SNLS3 SN Data

This module is a plug-in for CosmoMC and requires that software. Though programmed to analyze SNLS3 SN data, it can also be used for other SN data provided the inputs are put in the right form. In fact, this is probably a good idea, since the default treatment that comes with CosmoMC is flawed. Note that this requires fitting two additional SN nuisance parameters (alpha and beta), but this is significantly faster than attempting to marginalize over them internally.

[ascl:1303.003] CosmoHammer: Cosmological parameter estimation with the MCMC Hammer

CosmoHammer is a Python framework for the estimation of cosmological parameters. The software embeds the Python package emcee by Foreman-Mackey et al. (2012) and gives the user the possibility to plug in modules for the computation of any desired likelihood. The major goal of the software is to reduce the complexity when one wants to extend or replace the existing computation by modules which fit the user's needs as well as to provide the possibility to easily use large scale computing environments. CosmoHammer can efficiently distribute the MCMC sampling over thousands of cores on modern cloud computing infrastructure.
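
Since CosmoHammer wraps emcee, the underlying sampling pattern looks like the following toy example (the Gaussian posterior is ours; the module plumbing CosmoHammer adds around this is omitted).

    import numpy as np
    import emcee

    def log_prob(theta):
        return -0.5 * np.sum(theta ** 2)       # toy posterior: standard Gaussian

    ndim, nwalkers = 3, 32
    p0 = 1e-2 * np.random.default_rng(0).standard_normal((nwalkers, ndim))
    sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob)
    sampler.run_mcmc(p0, 2000)
    samples = sampler.get_chain(discard=500, flat=True)    # flattened draws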

[ascl:1511.019] CosmoBolognaLib: Open source C++ libraries for cosmological calculations

CosmoBolognaLib contains numerical libraries for cosmological calculations; written in C++, it is intended to define a common numerical environment for cosmological investigations of the large-scale structure of the Universe. The software aids in handling real and simulated astronomical catalogs by measuring one-point, two-point and three-point statistics in configuration space and performing cosmological analyses. These open source libraries can be included in either C++ or Python codes.

[ascl:1505.013] cosmoabc: Likelihood-free inference for cosmology

Approximate Bayesian Computation (ABC) enables parameter inference for complex physical systems in cases where the true likelihood function is unknown, unavailable, or computationally too expensive. It relies on the forward simulation of mock data and comparison between observed and synthetic catalogs. cosmoabc is a Python Approximate Bayesian Computation (ABC) sampler featuring a Population Monte Carlo variation of the original ABC algorithm, which uses an adaptive importance sampling scheme. The code can be coupled to an external simulator to allow incorporation of arbitrary distance and prior functions. When coupled with the numcosmo library, it has been used to estimate posterior probability distributions over cosmological parameters based on measurements of galaxy clusters number counts without computing the likelihood function.
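
The basic ABC rejection step underneath can be sketched as follows (ours; cosmoabc's Population Monte Carlo layer, adaptive distance threshold, and importance weights are omitted).

    import numpy as np

    def abc_rejection(prior_draw, simulate, distance, obs, eps, n_accept, seed=0):
        rng = np.random.default_rng(seed)
        kept = []
        while len(kept) < n_accept:
            theta = prior_draw(rng)              # draw parameters from the prior
            mock = simulate(theta, rng)          # forward-simulate a mock data set
            if distance(mock, obs) < eps:        # keep draws whose mocks match the data
                kept.append(theta)
        return np.array(kept)

    # Toy usage: infer the mean of a Gaussian from its sample mean
    post = abc_rejection(
        prior_draw=lambda rng: rng.uniform(-5.0, 5.0),
        simulate=lambda th, rng: np.mean(th + rng.standard_normal(100)),
        distance=lambda mock, obs: abs(mock - obs),
        obs=1.7, eps=0.05, n_accept=500,
    )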

[ascl:9910.004] COSMICS: Cosmological initial conditions and microwave anisotropy codes

COSMICS is a package of Fortran programs useful for computing transfer functions and microwave background anisotropy for cosmological models, and for generating Gaussian random initial conditions for nonlinear structure formation simulations of such models. Four programs are provided: linger_con and linger_syn integrate the linearized equations of general relativity, matter, and radiation in conformal Newtonian and synchronous gauge, respectively; deltat integrates the photon transfer functions computed by the linger codes to produce photon anisotropy power spectra; and grafic tabulates normalized matter power spectra and produces constrained or unconstrained samples of the matter density field.

[ascl:1601.008] CosmicPy: Interactive cosmology computations

CosmicPy performs simple and interactive cosmology computations for forecasting cosmological parameters constraints; it computes tomographic and 3D Spherical Fourier-Bessel power spectra as well as Fisher matrices for galaxy clustering. Written in Python, it relies on a fast C++ implementation of Fourier-Bessel related computations, and requires NumPy, SciPy, and Matplotlib.

[ascl:1304.006] CosmicEmuLog: Cosmological Power Spectra Emulator

CosmicEmuLog is a simple Python emulator for cosmological power spectra. In addition to the power spectrum of the conventional overdensity field, it emulates the power spectra of the log-density as well as the Gaussianized density. It models fluctuations in the power spectrum at each k as a linear combination of contributions from fluctuations in each cosmological parameter. The data it uses for emulation consist of ASCII files of the mean power spectrum, together with derivatives of the power spectrum with respect to the five cosmological parameters in the space spanned by the Coyote Universe suite. This data can also be used for Fisher matrix analysis. At present, CosmicEmuLog is restricted to redshift 0.
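
That linear-combination model is a first-order Taylor expansion of the power spectrum around the fiducial cosmology; a short sketch (ours, with illustrative array shapes):

    import numpy as np

    def emulate_pk(p_mean, dp_dtheta, theta, theta0):
        # p_mean: (nk,) mean spectrum; dp_dtheta: (nparam, nk) derivative table
        # P(k; theta) ~ P_mean(k) + sum_i dP/dtheta_i(k) * (theta_i - theta0_i)
        delta = np.asarray(theta) - np.asarray(theta0)
        return np.asarray(p_mean) + delta @ np.asarray(dp_dtheta)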

[ascl:1010.030] CosmicEmu: Cosmic Emulator for the Dark Matter Power Spectrum

Many of the most exciting questions in astrophysics and cosmology, including the majority of observational probes of dark energy, rely on an understanding of the nonlinear regime of structure formation. In order to fully exploit the information available from this regime and to extract cosmological constraints, accurate theoretical predictions are needed. Currently such predictions can only be obtained from costly, precision numerical simulations. The "Coyote Universe" simulation suite comprises nearly 1,000 N-body simulations at different force and mass resolutions, spanning 38 wCDM cosmologies. This large simulation suite enabled the construction of a prediction scheme, or emulator, for the nonlinear matter power spectrum accurate at the percent level out to k~1 h/Mpc. This is the first cosmic emulator for the dark matter power spectrum.

[ascl:1010.040] Cosmic String Simulations

Complicated cosmic string loops will fragment until they reach simple, non-intersecting ("stable") configurations. Through extensive numerical study, these attractor loop shapes are characterized, including their length, velocity, kink, and cusp distributions. An initial loop containing M harmonic modes will, on average, split into 3M stable loops. These stable loops are approximately described by the degenerate kinky loop, which is planar and rectangular, independently of the number of modes on the initial loop. This is confirmed by an analytic construction of a stable family of perturbed degenerate kinky loops. The average stable loop is also found to have a 40% chance of containing a cusp. This new analytic scheme explicitly solves the string constraint equations.

[ascl:1712.008] CosApps: Simulate gravitational lensing through ray tracing and shear calculation

Cosmology Applications (CosApps) provides tools to simulate gravitational lensing using two different techniques, ray tracing and shear calculation. The tool ray_trace_ellipse calculates deflection angles on a grid for light passing a deflecting mass distribution. Using MPI, ray_trace_ellipse may calculate deflection in parallel across network-connected computers, such as a cluster. The program physcalc calculates the gravitational lensing shear using the relationship of convergence and shear, described by a set of coupled partial differential equations.

[ascl:1202.006] CORSIKA: An Air Shower Simulation Program

CORSIKA (COsmic Ray Simulations for KAscade) is a program for detailed simulation of extensive air showers initiated by high energy cosmic ray particles. Protons, light nuclei up to iron, photons, and many other particles may be treated as primaries. The particles are tracked through the atmosphere until they undergo reactions with the air nuclei or, in the case of unstable secondaries, decay. The hadronic interactions at high energies may be described by several reaction models. Hadronic interactions at lower energies are also modeled, and in particle decays all decay branches down to the 1% level are taken into account. Options for the generation of Cherenkov radiation and neutrinos exist. CORSIKA may be used up to and beyond the highest energies of 100 EeV.

[ascl:1703.003] Corrfunc: Blazing fast correlation functions on the CPU

Corrfunc is a suite of high-performance clustering routines. The code can compute a variety of spatial correlation functions on Cartesian geometry as well as Landy-Szalay calculations for spatial and angular correlation functions on a spherical geometry, and is useful, for example, for exploring the galaxy-halo connection. The code is written in C and can be used on the command line, through the supplied Python extensions, or via the C API.
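
The Landy-Szalay estimator mentioned above combines normalized data-data, data-random, and random-random pair counts; a minimal sketch (ours, independent of Corrfunc's API):

    import numpy as np

    def landy_szalay(dd, dr, rr, nd, nr):
        # dd, dr, rr: raw pair counts per separation bin;
        # nd, nr: numbers of data and random points
        ddn = dd / (nd * (nd - 1) / 2.0)      # normalized DD
        drn = dr / (nd * nr)                  # normalized DR
        rrn = rr / (nr * (nr - 1) / 2.0)      # normalized RR
        return (ddn - 2.0 * drn + rrn) / rrn  # xi(r) = (DD - 2DR + RR) / RR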

[ascl:1211.004] CORRFIT: Cross-Correlation Routines

CORRFIT is a set of routines that use the cross-correlation method to extract parameters of the line-of-sight velocity distribution from galactic spectra and stellar templates observed on the same system. It works best when the broadening function is well sampled at the spectral resolution used (e.g. 200 km/s dispersion at 2 Angstrom resolution). Results become increasingly sensitive to the spectral match between galaxy and template if the broadening function is not well sampled. CORRFIT does not work well for dispersions less than the velocity sampling interval ('delta' in the code) unless the template is perfect.

[ascl:1711.005] correlcalc: Two-point correlation function from redshift surveys

correlcalc calculates the two-point correlation function (2pCF) of galaxies/quasars using redshift surveys. It can be used for any assumed geometry or cosmology model. Using BallTree algorithms to reduce the computational effort for large datasets, it is a parallelised code suitable for running on clusters as well as personal computers. It takes redshift (z), Right Ascension (RA), and Declination (DEC) data of galaxies and random catalogs as inputs in the form of ASCII or FITS files. If a random catalog is not provided, it generates one of the desired size based on the input redshift distribution and a mangle polygon file (in .ply format) describing the survey geometry. It also calculates different realisations of the (3D) anisotropic 2pCF. Optionally, it makes HEALPix maps of the survey for visualization.

[ascl:1702.002] corner.py: Corner plots

corner.py uses matplotlib to visualize multidimensional samples using a scatterplot matrix. In these visualizations, each one- and two-dimensional projection of the sample is plotted to reveal covariances. corner.py was originally conceived to display the results of Markov Chain Monte Carlo simulations and the defaults are chosen with this application in mind but it can be used for displaying many qualitatively different samples. An earlier version of corner.py was known as triangle.py.
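
Typical usage is a single call; the correlated toy samples below stand in for an MCMC chain.

    import numpy as np
    import corner

    rng = np.random.default_rng(0)
    cov = np.array([[1.0, 0.7], [0.7, 2.0]])
    samples = rng.multivariate_normal([0.0, 0.0], cov, size=50000)
    fig = corner.corner(samples, labels=["x", "y"],
                        quantiles=[0.16, 0.5, 0.84], show_titles=True)
    fig.savefig("corner.png")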

[ascl:1406.003] CoREAS: CORSIKA-based Radio Emission from Air Showers simulator

CoREAS is a Monte Carlo code for the simulation of radio emission from extensive air showers; it is an update of and successor code to REAS3 (ascl:1107.009). It implements the endpoint formalism for the calculation of electromagnetic radiation directly in CORSIKA (ascl:1202.006). As such, it is parameter-free, makes no assumptions on the emission mechanism for the radio signals, and takes into account the complete complexity of the electron and positron distributions as simulated by CORSIKA.

[ascl:1603.002] CORBITS: Efficient Geometric Probabilities of Multi-Transiting Exoplanetary Systems

CORBITS (Computed Occurrence of Revolving Bodies for the Investigation of Transiting Systems) computes the probability that any particular group of exoplanets can be observed to transit from a collection of conjectured exoplanets orbiting a star. The efficient, semi-analytical code computes the areas bounded by circular curves on the surface of a sphere by applying elementary differential geometry. CORBITS is faster than previous algorithms, based on comparisons with Monte Carlo simulations, and tests show that it is extremely accurate even for highly eccentric planets.

[ascl:1112.012] CORA: Emission Line Fitting with Maximum Likelihood

CORA analyzes emission line spectra with low count numbers and fits them to a line using the maximum likelihood technique. CORA uses a rigorous application of Poisson statistics. From the assumption of Poissonian noise, the software derives the probability for a model of the emission line spectrum to represent the measured spectrum. The likelihood function is used as a criterion for optimizing the parameters of the theoretical spectrum and a fixed point equation is derived allowing an efficient way to obtain line fluxes. CORA has been applied to an X-ray spectrum with the Low Energy Transmission Grating Spectrometer (LETGS) on board the Chandra observatory.
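
The Poisson maximum-likelihood fit CORA performs can be sketched for a single line on a flat background; this minimal version (ours) uses a general-purpose optimizer rather than CORA's fixed-point equation.

    import numpy as np
    from scipy.optimize import minimize

    def fit_line(counts, x, center, sigma):
        profile = np.exp(-0.5 * ((x - center) / sigma) ** 2)  # fixed line shape
        def neg_loglike(p):
            amp, bkg = p
            m = np.clip(bkg + amp * profile, 1e-12, None)     # model must stay positive
            return -(counts * np.log(m) - m).sum()            # Poisson log-likelihood
        return minimize(neg_loglike, x0=[counts.max(), counts.mean()],
                        method="Nelder-Mead")

    rng = np.random.default_rng(3)
    x = np.linspace(-5.0, 5.0, 64)
    counts = rng.poisson(0.5 + 6.0 * np.exp(-0.5 * x ** 2))   # low-count spectrum
    result = fit_line(counts, x, center=0.0, sigma=1.0)       # result.x ~ [6.0, 0.5]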

[ascl:1304.022] Copter: Cosmological perturbation theory

Copter is a software package for doing calculations in cosmological perturbation theory. Specifically, Copter includes code for computing statistical observables in the large-scale structure of matter using various forms of perturbation theory, including linear theory, standard perturbation theory, renormalized perturbation theory, and many others. Copter is written in C++ and makes use of the Boost C++ library headers.

[ascl:1210.013] ConvPhot: A profile-matching algorithm for precision photometry

ConvPhot measures colors between two images having different resolutions. ConvPhot is designed to work especially for faint galaxies, accurately measuring colors in relatively crowded fields. It makes full use of the spatial and morphological information contained in the highest quality images to analyze multiwavelength data with inhomogeneous image quality.

[ascl:1401.006] convolve_image.pro: Common-Resolution Convolution Kernels for Space- and Ground-Based Telescopes

The IDL package convolve_image.pro transforms images between different instrumental point spread functions (PSFs). It can load an image file and corresponding kernel and return the convolved image, thus preserving the colors of the astronomical sources. Convolution kernels are available for images from Spitzer (IRAC, MIPS), Herschel (PACS, SPIRE), GALEX (FUV, NUV), WISE (W1-W4), optical PSFs (multi-Gaussian and Moffat functions), and Gaussian PSFs; they allow the study of the Spectral Energy Distribution (SED) of extended objects and preserve the characteristic SED in each pixel.
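
The PSF-matching operation itself is an image convolution. A minimal Python sketch using Astropy's convolution tools (ours; convolve_image.pro is IDL, and the Gaussian kernel below stands in for a pre-computed matching kernel loaded from FITS):

    import numpy as np
    from astropy.convolution import Gaussian2DKernel, convolve_fft

    image = np.zeros((128, 128))
    image[64, 64] = 1.0                          # toy point source
    kernel = Gaussian2DKernel(x_stddev=3.0)      # stand-in for a PSF-matching kernel
    matched = convolve_fft(image, kernel, normalize_kernel=True)  # flux-preserving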

[ascl:1609.023] contbin: Contour binning and accumulative smoothing

Contbin bins X-ray data using contours on an adaptively smoothed map. The generated bins closely follow the surface brightness, and are ideal where the surface brightness distribution is not smooth, or the spectral properties are expected to follow surface brightness. Color maps can be used instead of surface brightness maps.

[ascl:9905.001] CONSKY: A Sky CCD Integration Simulation

This program addresses the question of what resources are needed to produce a continuous data record of the entire sky down to a given limiting visual magnitude. Toward this end, the program simulates a small camera/telescope or group of small camera/telescopes collecting light from a large portion of the sky. From a given stellar density derived from a Bahcall-Soneira Galaxy model, the program first converts star densities at visual magnitudes between 5 and 20 to the number of sky pixels needed to monitor each star simultaneously. From pixels, the program converts input CCD parameters to needed telescope attributes, needed data storage space, and the length of time needed to accumulate data of photometric quality for stars of each limiting visual magnitude over the whole sky. The program steps through photometric integrations one second at a time and includes the contributions from a bright background, read noise, dark current, and atmospheric absorption.
