[ascl:1304.022]
Copter: Cosmological perturbation theory

Copter is a software package for doing calculations in cosmological perturbation theory. Specifically, Copter includes code for computing statistical observables in the large-scale structure of matter using various forms of perturbation theory, including linear theory, standard perturbation theory, renormalized perturbation theory, and many others. Copter is written in C++ and makes use of the Boost C++ library headers.

[ascl:1112.012]
CORA: Emission Line Fitting with Maximum Likelihood

CORA analyzes emission line spectra with low count numbers and fits them to a line using the maximum likelihood technique, based on a rigorous application of Poisson statistics. From the assumption of Poissonian noise, the software derives the probability that a model of the emission line spectrum represents the measured spectrum. The likelihood function is used as the criterion for optimizing the parameters of the theoretical spectrum, and a fixed-point equation is derived that allows line fluxes to be obtained efficiently. CORA has been applied to an X-ray spectrum taken with the Low Energy Transmission Grating Spectrometer (LETGS) on board the Chandra observatory.

[ascl:1603.002]
CORBITS: Efficient Geometric Probabilities of Multi-Transiting Exoplanetary Systems

CORBITS (Computed Occurrence of Revolving Bodies for the Investigation of Transiting Systems) computes the probability that any particular group of exoplanets can be observed to transit from a collection of conjectured exoplanets orbiting a star. The efficient, semi-analytical code computes the areas bounded by circular curves on the surface of a sphere by applying elementary differential geometry. Comparisons with Monte Carlo simulations show that CORBITS is faster than previous algorithms, and tests show that it is extremely accurate even for highly eccentric planets.
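
For orientation, the sketch below evaluates the single-planet geometric transit probability that CORBITS generalizes to groups of planets; it is a back-of-envelope illustration using the standard textbook formula, not CORBITS code, and all numbers are assumptions:

    # Orientation-averaged geometric transit probability for one planet;
    # CORBITS computes the joint version of this for groups of planets.
    import numpy as np

    R_SUN_AU = 0.00465  # solar radius in AU

    def transit_probability(a_au, e=0.0, r_star_rsun=1.0):
        # Probability that a planet with semi-major axis a_au (AU) and
        # eccentricity e transits a star of radius r_star_rsun (R_sun).
        return (r_star_rsun * R_SUN_AU / a_au) / (1.0 - e**2)

    print(transit_probability(1.0))          # Earth-like orbit: ~0.5%
    print(transit_probability(0.05, e=0.3))  # close-in eccentric planet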

[ascl:1406.003]
CoREAS: CORSIKA-based Radio Emission from Air Showers simulator

CoREAS is a Monte Carlo code for the simulation of radio emission from extensive air showers. It implements the endpoint formalism for the calculation of electromagnetic radiation directly in CORSIKA (ascl:1202.006). As such, it is parameter-free, makes no assumptions about the emission mechanism for the radio signals, and takes into account the complete complexity of the electron and positron distributions as simulated by CORSIKA.

[ascl:1702.002]
corner.py: Corner plots

*corner.py* uses matplotlib to visualize multidimensional samples using a scatterplot matrix. In these visualizations, each one- and two-dimensional projection of the sample is plotted to reveal covariances. *corner.py* was originally conceived to display the results of Markov Chain Monte Carlo simulations, and the defaults are chosen with this application in mind, but it can be used for displaying many qualitatively different samples. An earlier version of *corner.py* was known as triangle.py.
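
A minimal usage sketch with the package's documented entry point, corner.corner(); the Gaussian samples stand in for a real MCMC chain:

    import numpy as np
    import corner

    samples = np.random.randn(5000, 3)    # stand-in for an MCMC chain
    figure = corner.corner(
        samples,
        labels=["$a$", "$b$", "$c$"],
        show_titles=True,                 # print 1D summary titles
    )
    figure.savefig("corner.png")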

[ascl:1711.005]
correlcalc: Two-point correlation function from redshift surveys

correlcalc calculates the two-point correlation function (2pCF) of galaxies/quasars from redshift surveys and can be used with any assumed geometry or cosmological model. Using BallTree algorithms to reduce the computational effort for large datasets, it is a parallelised code suitable for running on clusters as well as personal computers. It takes redshift (z), Right Ascension (RA), and Declination (DEC) data of galaxy and random catalogs as inputs in the form of ASCII or FITS files. If a random catalog is not provided, it generates one of the desired size based on the input redshift distribution and a mangle polygon file (in .ply format) describing the survey geometry. It also calculates different realisations of the (3D) anisotropic 2pCF. Optionally, it produces HEALPix maps of the survey for visualization.
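
The BallTree pair-counting idea can be sketched with scikit-learn; this is an illustration of the approach on mock Cartesian positions using the simple Peebles-Hauser estimator, not correlcalc's own code:

    import numpy as np
    from sklearn.neighbors import BallTree

    rng = np.random.default_rng(42)
    data = rng.uniform(0, 100, size=(2000, 3))   # mock galaxy positions
    rand = rng.uniform(0, 100, size=(2000, 3))   # random catalog

    r_edges = np.linspace(5, 50, 10)             # separation bin edges
    # cumulative pair counts at each radius, differenced into bins
    dd = np.diff(BallTree(data).two_point_correlation(data, r_edges))
    rr = np.diff(BallTree(rand).two_point_correlation(rand, r_edges))

    xi = dd / rr - 1.0   # Peebles-Hauser estimator (equal-size catalogs)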

[ascl:1211.004]
CORRFIT: Cross-Correlation Routines

CORRFIT is a set of routines that use the cross-correlation method to extract parameters of the line-of-sight velocity distribution from galactic spectra and stellar templates observed on the same system. It works best when the broadening function is well sampled at the spectral resolution used (e.g. 200 km/s dispersion at 2 Angstrom resolution). Results become increasingly sensitive to the spectral match between galaxy and template if the broadening function is not well sampled. CORRFIT does not work well for dispersions less than the velocity sampling interval ('delta' in the code) unless the template is perfect.

[ascl:1703.003]
Corrfunc: Blazing fast correlation functions on the CPU

Corrfunc is a suite of high-performance clustering routines. The code can compute a variety of spatial correlation functions in Cartesian geometry as well as Landy-Szalay calculations for spatial and angular correlation functions on a spherical geometry, and is useful for, for example, exploring the galaxy-halo connection. The code is written in C and can be used from the command line, through the supplied Python extensions, or via the C API.
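
A short pair-counting call through the Python extensions; the pattern below follows recent Corrfunc releases (the array-valued bins and boxsize keyword are assumptions to check against your installed version):

    import numpy as np
    from Corrfunc.theory import DD

    boxsize, nthreads = 250.0, 4
    rng = np.random.default_rng(1)
    x, y, z = rng.uniform(0, boxsize, (3, 10000)).astype(np.float64)

    rbins = np.linspace(0.1, 20.0, 15)    # separation bin edges (Mpc/h)
    autocorr = 1                          # auto-correlation of one catalog
    results = DD(autocorr, nthreads, rbins, x, y, z,
                 boxsize=boxsize, periodic=True)
    print(results['npairs'])              # raw pair counts per bin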

[ascl:1202.006]
CORSIKA: An Air Shower Simulation Program

CORSIKA (COsmic Ray Simulations for KAscade) is a program for detailed simulation of extensive air showers initiated by high energy cosmic ray particles. Protons, light nuclei up to iron, photons, and many other particles may be treated as primaries. The particles are tracked through the atmosphere until they undergo reactions with the air nuclei or, in the case of unstable secondaries, decay. The hadronic interactions at high energies may be described by several reaction models; hadronic interactions at lower energies are likewise modeled by selectable interaction routines, and in particle decays all decay branches down to the 1% level are taken into account. Options for the generation of Cherenkov radiation and neutrinos exist. CORSIKA may be used up to and beyond the highest energies of 100 EeV.

[ascl:1712.008]
CosApps: Simulate gravitational lensing through ray tracing and shear calculation

Cosmology Applications (CosApps) provides tools to simulate gravitational lensing using two different techniques, ray tracing and shear calculation. The tool ray_trace_ellipse calculates deflection angles on a grid for light passing a deflecting mass distribution. Using MPI, ray_trace_ellipse may calculate deflections in parallel across network-connected computers, such as a cluster. The program physcalc calculates the gravitational lensing shear using the relationship of convergence and shear, described by a set of coupled partial differential equations.

[ascl:1010.040]
Cosmic String Simulations

Complicated cosmic string loops will fragment until they reach simple, non-intersecting ("stable") configurations. Through extensive numerical study, these attractor loop shapes are characterized, including their length, velocity, kink, and cusp distributions. An initial loop containing $M$ harmonic modes will, on average, split into 3$M$ stable loops. These stable loops are approximately described by the degenerate kinky loop, which is planar and rectangular, independently of the number of modes on the initial loop. This is confirmed by an analytic construction of a stable family of perturbed degenerate kinky loops. The average stable loop is also found to have a 40% chance of containing a cusp. This new analytic scheme explicitly solves the string constraint equations.

[ascl:1010.030]
CosmicEmu: Cosmic Emulator for the Dark Matter Power Spectrum

Lawrence, Earl; Heitmann, Katrin; White, Martin; Higdon, David; Wagner, Christian; Habib, Salman; Williams, Brian

Many of the most exciting questions in astrophysics and cosmology, including the majority of observational probes of dark energy, rely on an understanding of the nonlinear regime of structure formation. In order to fully exploit the information available from this regime and to extract cosmological constraints, accurate theoretical predictions are needed. Currently such predictions can only be obtained from costly, precision numerical simulations. The "Coyote Universe" simulation suite comprises nearly 1,000 N-body simulations at different force and mass resolutions, spanning 38 wCDM cosmologies. This large simulation suite enabled the construction of a prediction scheme, or emulator, for the nonlinear matter power spectrum accurate at the percent level out to k ~ 1 h/Mpc. This is the first cosmic emulator for the dark matter power spectrum.

[ascl:1304.006]
CosmicEmuLog: Cosmological Power Spectra Emulator

CosmicEmuLog is a simple Python emulator for cosmological power spectra. In addition to the power spectrum of the conventional overdensity field, it emulates the power spectra of the log-density as well as the Gaussianized density. It models fluctuations in the power spectrum at each k as a linear combination of contributions from fluctuations in each cosmological parameter. The data it uses for emulation consist of ASCII files of the mean power spectrum, together with derivatives of the power spectrum with respect to the five cosmological parameters in the space spanned by the Coyote Universe suite. This data can also be used for Fisher matrix analysis. At present, CosmicEmuLog is restricted to redshift 0.
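
The linear-combination scheme described above is effectively a first-order Taylor expansion of the power spectrum in the cosmological parameters; a minimal sketch with placeholder arrays (not the actual Coyote Universe data):

    import numpy as np

    k = np.logspace(-2, 0, 50)        # wavenumbers
    p_mean = k**-1.5                  # placeholder mean P(k)
    # placeholder dP/dtheta_i curves for the five parameters
    dp_dtheta = np.vstack([p_mean * c for c in (0.1, 0.3, -0.2, 0.05, 0.15)])

    # offsets of the target cosmology from the fiducial one
    delta_theta = np.array([0.01, -0.02, 0.005, 0.0, 0.01])
    p_emulated = p_mean + delta_theta @ dp_dtheta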

[ascl:1601.008]
CosmicPy: Interactive cosmology computations

CosmicPy performs simple and interactive cosmology computations for forecasting cosmological parameter constraints; it computes tomographic and 3D Spherical Fourier-Bessel power spectra as well as Fisher matrices for galaxy clustering. Written in Python, it relies on a fast C++ implementation of Fourier-Bessel related computations, and requires NumPy, SciPy, and Matplotlib.

[ascl:9910.004]
COSMICS: Cosmological initial conditions and microwave anisotropy codes

COSMICS is a package of Fortran programs useful for computing transfer functions and microwave background anisotropy for cosmological models, and for generating Gaussian random initial conditions for nonlinear structure formation simulations of such models. Four programs are provided: linger_con and linger_syn integrate the linearized equations of general relativity, matter, and radiation in conformal Newtonian and synchronous gauge, respectively; deltat integrates the photon transfer functions computed by the linger codes to produce photon anisotropy power spectra; and grafic tabulates normalized matter power spectra and produces constrained or unconstrained samples of the matter density field.

[ascl:1505.013]
cosmoabc: Likelihood-free inference for cosmology

Ishida, Emille E. O.; Vitenti, Sandro D. P.; Penna-Lima, Mariana; Trindade, Arlindo M.; Cisewski, Jessi; de Souza, Rafael; Cameron, Ewan; Busti, Vinicius C.

Approximate Bayesian Computation (ABC) enables parameter inference for complex physical systems in cases where the true likelihood function is unknown, unavailable, or computationally too expensive. It relies on the forward simulation of mock data and comparison between observed and synthetic catalogs. cosmoabc is a Python ABC sampler featuring a Population Monte Carlo variation of the original ABC algorithm, which uses an adaptive importance sampling scheme. The code can be coupled to an external simulator to allow incorporation of arbitrary distance and prior functions. When coupled with the numcosmo library, it has been used to estimate posterior probability distributions over cosmological parameters based on measurements of galaxy cluster number counts without computing the likelihood function.
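
The underlying idea is easiest to see in plain rejection ABC, of which cosmoabc's Population Monte Carlo sampler is a far more efficient refinement; the model, distance, and tolerance below are toy assumptions:

    import numpy as np

    rng = np.random.default_rng(0)
    observed = rng.poisson(4.0, size=100)   # mock "observed" counts

    def simulate(theta):                    # forward simulation of mock data
        return rng.poisson(theta, size=100)

    def distance(a, b):                     # catalog comparison metric
        return abs(a.mean() - b.mean())

    posterior = []
    while len(posterior) < 500:
        theta = rng.uniform(0.1, 10.0)      # draw from the prior
        if distance(observed, simulate(theta)) < 0.2:   # keep if close
            posterior.append(theta)
    print(np.mean(posterior), np.std(posterior))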

[ascl:1511.019]
CosmoBolognaLib: Open source C++ libraries for cosmological calculations

CosmoBolognaLib contains numerical libraries for cosmological calculations; written in C++, it is intended to define a common numerical environment for cosmological investigations of the large-scale structure of the Universe. The software aids in handling real and simulated astronomical catalogs by measuring one-point, two-point and three-point statistics in configuration space and performing cosmological analyses. These open source libraries can be included in either C++ or Python codes.

[ascl:1303.003]
CosmoHammer: Cosmological parameter estimation with the MCMC Hammer

CosmoHammer is a Python framework for the estimation of cosmological parameters. The software embeds the Python package emcee by Foreman-Mackey et al. (2012) and lets the user plug in modules for the computation of any desired likelihood. The major goals of the software are to reduce the complexity of extending or replacing the existing computation with modules that fit the user's needs and to make it easy to use large-scale computing environments. CosmoHammer can efficiently distribute the MCMC sampling over thousands of cores on modern cloud computing infrastructure.

[ascl:1110.024]
CosmoMC SNLS: CosmoMC Plug-in to Analyze SNLS3 SN Data

This module is a plug-in for CosmoMC and requires that software. Though programmed to analyze SNLS3 SN data, it can also be used for other SN data provided the inputs are put in the right form. In fact, this is probably a good idea, since the default treatment that comes with CosmoMC is flawed. Note that this requires fitting two additional SN nuisance parameters (alpha and beta), but this is significantly faster than attempting to marginalize over them internally.

[ascl:1106.025]
CosmoMC: Cosmological MonteCarlo

We present a fast Markov Chain Monte Carlo exploration of cosmological parameter space. We perform a joint analysis of results from recent CMB experiments and provide parameter constraints, including sigma_8, from the CMB independent of other data. We next combine data from the CMB, HST Key Project, 2dF galaxy redshift survey, supernovae Ia and big-bang nucleosynthesis. The Monte Carlo method allows the rapid investigation of a large number of parameters, and we present results from 6 and 9 parameter analyses of flat models, and an 11 parameter analysis of non-flat models. Our results include constraints on the neutrino mass (m_nu < 0.3 eV), the equation of state of the dark energy, and the tensor amplitude, as well as demonstrating the effect of additional parameters on the base parameter constraints. In a series of appendices we describe the many uses of importance sampling, including computing results from new data and accuracy correction of results generated from an approximate method. We also discuss different ways of converting parameter samples to parameter constraints and the effect of the prior, assess goodness of fit and consistency, and describe the use of analytic marginalization over normalization parameters.

[ascl:1110.019]
CosmoNest: Cosmological Nested Sampling

CosmoNest is an algorithm for cosmological model selection. Given a model, defined by a set of parameters to be varied and their prior ranges, and data, the algorithm computes the evidence (the marginalized likelihood of the model in light of the data). The Bayes factor, which is proportional to the relative evidence of two models, can then be used for model comparison, i.e. to decide whether a model is an adequate description of data, or whether the data require a more complex model.

For convenience, CosmoNest, programmed in Fortran, is presented here as an optional add-on to CosmoMC, which is widely used by the cosmological community to perform parameter fitting within a model using a Markov-Chain Monte-Carlo (MCMC) engine. For this reason it can be run very easily by anyone who is able to compile and run CosmoMC. CosmoNest implements a different sampling strategy, geared for computing the evidence very accurately and efficiently. It also provides posteriors for parameter fitting as a by-product.

[ascl:1408.018]
CosmoPhotoz: Photometric redshift estimation using generalized linear models

de Souza, Rafael S.; Elliott, Jonathan; Krone-Martins, Alberto; Ishida, Emille E. O.; Hilbe, Joseph; Cameron, Ewan

CosmoPhotoz determines photometric redshifts from galaxies using their magnitudes. The method uses generalized linear models (GLMs), which reproduce the physical aspects of the output distribution. The code can adopt gamma or inverse Gaussian families, either from a frequentist or a Bayesian perspective. A set of publicly available libraries and a web application are provided. This software allows users to apply a set of GLMs to their own photometric catalogs and generates publication-quality plots without user intervention. The code additionally includes a Shiny application that provides a simple user interface.

[ascl:1212.006]
CosmoPMC: Cosmology sampling with Population Monte Carlo

Kilbinger, Martin; Benabed, Karim; Cappé, Olivier; Coupon, Jean; Cardoso, Jean-François; Fort, Gersende; McCracken, Henry Joy; Prunet, Simon; Robert, Christian P.; Wraith, Darren

CosmoPMC is a Monte Carlo sampling code to explore the likelihood of various cosmological probes. The sampling engine is implemented in the package pmclib and uses Population Monte Carlo (PMC), a novel technique to sample from the posterior: an adaptive importance sampling method that iteratively improves the proposal to approximate the posterior. The code has been introduced, tested, and applied to various cosmology data sets.

[ascl:1304.017]
CosmoRec: Cosmological Recombination code

CosmoRec solves the recombination problem including recombinations to highly excited states, corrections to the 2s-1s two-photon channel, HI Lyn-feedback, n>2 two-photon profile corrections, and n≥2 Raman-processes. The code can solve the radiative transfer equation of the Lyman-series photon field to obtain the required modifications to the rate equations of the resolved levels, and handles electron scattering, the effect of HeI intercombination transitions, and absorption of helium photons by hydrogen. It also allows accounting for dark matter annihilation and optionally includes detailed helium radiative transfer effects.

[ascl:1705.001]
COSMOS: Carnegie Observatories System for MultiObject Spectroscopy

COSMOS (Carnegie Observatories System for MultiObject Spectroscopy) reduces multislit spectra obtained with the IMACS and LDSS3 spectrographs on the Magellan Telescopes. It can be used for the quick-look analysis of data at the telescope as well as for pipeline reduction of large data sets. COSMOS is based on a precise optical model of the spectrographs, which allows (after alignment and calibration) an accurate prediction of the location of spectral features. This eliminates the line search procedure that is fundamental to many spectral reduction programs and allows a robust data pipeline to be run in an almost fully automatic mode, allowing large amounts of data to be reduced with minimal intervention.

[ascl:1409.012]
CosmoSIS: Cosmological parameter estimation

Zuntz, Joe; Paterno, Marc; Jennings, Elise; Rudd, Douglas; Manzotti, Alessandro; Dodelson, Scott; Bridle, Sarah; Sehrish, Saba; Kowalkowski, James

CosmoSIS is a cosmological parameter estimation code. It structures cosmological parameter estimation to ease re-usability, debugging, verifiability, and code sharing in the form of calculation modules. Written in Python, CosmoSIS consolidates and connects existing code for predicting cosmic observables and maps out experimental likelihoods with a range of different techniques.

[ascl:1701.004]
CosmoSlik: Cosmology sampler of likelihoods

CosmoSlik quickly puts together, runs, and analyzes an MCMC chain for analysis of cosmological data. It is highly modular and comes with plugins for CAMB (ascl:1102.026), CLASS (ascl:1106.020), the Planck likelihood, the South Pole Telescope likelihood, other cosmological likelihoods, emcee (ascl:1303.002), and more. It offers ease-of-use, flexibility, and modularity.

[ascl:1311.009]
CosmoTherm: Thermalization code

CosmoTherm allows precise computation of CMB spectral distortions caused by energy release in the early Universe. Different energy-release scenarios (e.g., decaying or annihilating particles) are implemented using the Green's function of the cosmological thermalization problem, allowing fast computation of the distortion signal. The full thermalization problem can be solved on a case-by-case basis for a wide range of energy-release scenarios using the full PDE solver of CosmoTherm. A simple Monte-Carlo toolkit is included for parameter estimation and forecasts using the Green's function method.

[ascl:1504.010]
CosmoTransitions: Cosmological Phase Transitions

CosmoTransitions analyzes early-Universe finite-temperature phase transitions with multiple scalar fields. The code enables analysis of the phase structure of an input theory, determines the amount of supercooling at each phase transition, and finds the bubble-wall profiles of the nucleated bubbles that drive the transitions.

[ascl:1307.010]
cosmoxi2d: Two-point galaxy correlation function calculation

Cosmoxi2d is written in C and computes the theoretical two-point galaxy correlation function as a function of cosmological and galaxy nuisance parameters. It numerically evaluates the model described in detail in Reid and White 2011 (arxiv:1105.4165) and Reid et al. 2012 (arxiv:1203.6641) for the multipole moments (up to ell = 4) of the observed redshift space correlation function of biased tracers as a function of cosmological parameters (through an input linear matter power spectrum, growth rate f, and Alcock-Paczynski geometric factors alphaperp and alphapar) as well as nuisance parameters describing the tracers (bias and a small-scale additive velocity dispersion, isotropicdisp1d).

This model works best for highly biased tracers where the 2nd order bias term is small. On scales larger than 100 Mpc, the code relies on 2nd order Lagrangian Perturbation theory as detailed in Matsubara 2008 (PRD 78, 083519), and uses the analytic version of Reid and White 2011 on smaller scales.
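
For reference, the multipole moments mentioned above are Legendre projections of xi(s, mu); a generic numerical sketch with a toy input (not cosmoxi2d's internal C implementation):

    import numpy as np
    from scipy.special import eval_legendre

    def multipole(xi_s_mu, mu, ell):
        # xi_ell(s) = (2 ell + 1)/2 * integral of xi(s, mu) L_ell(mu) dmu
        weights = eval_legendre(ell, mu)
        return (2 * ell + 1) / 2.0 * np.trapz(xi_s_mu * weights, mu, axis=1)

    mu = np.linspace(-1, 1, 201)
    s = np.linspace(20, 150, 66)
    xi = np.exp(-s[:, None] / 40.0) * (1 + 0.5 * mu[None, :]**2)  # toy xi
    xi0, xi2, xi4 = (multipole(xi, mu, ell) for ell in (0, 2, 4))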

[ascl:1512.013]
CounterPoint: Zeeman-split absorption lines

CounterPoint works in concert with MoogStokes (ascl:1308.018). It applies the Zeeman effect to the atomic lines in the region of study, splitting them into the correct number of Zeeman components and adjusting their relative intensities according to the predictions of quantum mechanics, and finally creates a Moog-readable line list for use with MoogStokes. CounterPoint can use the VALD and HITRAN line databases for both atomic and molecular lines.

[ascl:1402.010]
CPL: Common Pipeline Library

The Common Pipeline Library (CPL) is a set of ISO-C libraries that provide a comprehensive, efficient and robust software toolkit to create automated astronomical data reduction pipelines. Though initially developed as a standardized way to build VLT instrument pipelines, the CPL may be more generally applied to any similar application. The code also provides a variety of general purpose image- and signal-processing functions, making it an excellent framework for the creation of more generic data handling packages. The CPL handles low-level data types (images, tables, matrices, strings, property lists, etc.) and medium-level data access methods (a simple data abstraction layer for FITS files). It also provides table organization and manipulation, keyword/value handling and management, and support for dynamic loading of recipe modules using programs such as EsoRex (ascl:1504.003).

[ascl:1710.009]
CppTransport: Two- and three-point function transport framework for inflationary cosmology

CppTransport solves the 2- and 3-point functions of the perturbations produced during an inflationary epoch in the very early universe. It is implemented for models with canonical kinetic terms, although the underlying method is quite general and could be extended to handle models with a non-trivial field-space metric or an even more general non-canonical Lagrangian.

[ascl:1102.012]
CPROPS: Bias-free Measurement of Giant Molecular Cloud Properties

CPROPS, written in IDL, processes FITS data cubes containing molecular line emission and returns the properties of molecular clouds contained within it. Without corrections for the effects of beam convolution and sensitivity to GMC properties, the resulting properties may be severely biased. This is particularly true for extragalactic observations, where resolution and sensitivity effects often bias measured values by 40% or more. We correct for finite spatial and spectral resolutions with a simple deconvolution and we correct for sensitivity biases by extrapolating properties of a GMC to those we would expect to measure with perfect sensitivity. The resulting method recovers the properties of a GMC to within 10% over a large range of resolutions and sensitivities, provided the clouds are marginally resolved with a peak signal-to-noise ratio greater than 10. We note that interferometers systematically underestimate cloud properties, particularly the flux from a cloud. The degree of bias depends on the sensitivity of the observations and the (u,v) coverage of the observations. In the Appendix to the paper we present a conservative, new decomposition algorithm for identifying GMCs in molecular-line observations. This algorithm treats the data in physical rather than observational units, does not produce spurious clouds in the presence of noise, and is sensitive to a range of morphologies. As a result, the output of this decomposition should be directly comparable among disparate data sets.

The CPROPS package contains within it a distribution of the CLUMPFIND code written by Jonathan Williams and described in Williams, de Geus, and Blitz (1994); CLUMPFIND is also available as a stand-alone package. If you make use of the CLUMPFIND functionality in the CPROPS package for a publication, please cite the original article.

[ascl:1101.008]
CRASH: A Block-Adaptive-Mesh Code for Radiative Shock Hydrodynamics

van der Holst, B.; Toth, G.; Sokolov, I. V.; Powell, K. G.; Holloway, J. P.; Myra, E. S.; Stout, Q.; Adams, M. L.; Morel, J. E.; Drake, R. P.

We describe the CRASH (Center for Radiative Shock Hydrodynamics) code, a block adaptive mesh code for multi-material radiation hydrodynamics. The implementation solves the radiation diffusion model with the gray or multigroup method and uses a flux limited diffusion approximation to recover the free-streaming limit. The electrons and ions are allowed to have different temperatures and we include a flux limited electron heat conduction. The radiation hydrodynamic equations are solved in the Eulerian frame by means of a conservative finite volume discretization in either one, two, or three-dimensional slab geometry or in two-dimensional cylindrical symmetry. An operator split method is used to solve these equations in three substeps: (1) solve the hydrodynamic equations with shock-capturing schemes, (2) a linear advection of the radiation in frequency-logarithm space, and (3) an implicit solve of the stiff radiation diffusion, heat conduction, and energy exchange. We present a suite of verification test problems to demonstrate the accuracy and performance of the algorithms. The CRASH code is an extension of the Block-Adaptive Tree Solarwind Roe Upwind Scheme (BATS-R-US) code with this new radiation transfer and heat conduction library and equation-of-state and multigroup opacity solvers. Both CRASH and BATS-R-US are part of the publicly available Space Weather Modeling Framework (SWMF).

[ascl:1111.002]
CRBLASTER: A Parallel-Processing Computational Framework for Embarrassingly-Parallel Image-Analysis Algorithms

The development of parallel-processing image-analysis codes is generally a challenging task that requires complicated choreography of interprocessor communications. If, however, the image-analysis algorithm is embarrassingly parallel, then the development of a parallel-processing implementation of that algorithm can be a much easier task to accomplish because, by definition, there is little need for communication between the compute processes. I describe the design, implementation, and performance of a parallel-processing image-analysis application, called CRBLASTER, which does cosmic-ray rejection of CCD (charge-coupled device) images using the embarrassingly-parallel L.A.COSMIC algorithm. CRBLASTER is written in C using the high-performance computing industry standard Message Passing Interface (MPI) library. The code has been designed to be used by research scientists who are familiar with C as a parallel-processing computational framework that enables the easy development of parallel-processing image-analysis programs based on embarrassingly-parallel algorithms. The CRBLASTER source code is freely available at the official application website at the National Optical Astronomy Observatory. Removing cosmic rays from a single 800x800 pixel Hubble Space Telescope WFPC2 image takes 44 seconds with the IRAF script lacos_im.cl running on a single core of an Apple Mac Pro computer with two 2.8-GHz quad-core Intel Xeon processors. CRBLASTER is 7.4 times faster processing the same image on a single core of the same machine. Processing the same image with CRBLASTER simultaneously on all 8 cores of the same machine takes 0.875 seconds, a speedup factor of 50.3 over the IRAF script. A detailed analysis is presented of the performance of CRBLASTER using between 1 and 57 processors on a low-power Tilera 700-MHz 64-core TILE64 processor.
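
The embarrassingly-parallel pattern described above can be sketched with mpi4py: split the image into strips, clean each strip independently, and gather the results (illustrative only; CRBLASTER itself is C/MPI, and real L.A.COSMIC processing needs overlapping borders between strips):

    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    if rank == 0:
        image = np.random.rand(800, 800)               # stand-in image
        strips = np.array_split(image, size, axis=0)   # one strip per rank
    else:
        strips = None

    strip = comm.scatter(strips, root=0)
    cleaned = np.clip(strip, 0, 0.99)   # stand-in for cosmic-ray rejection
    result = comm.gather(cleaned, root=0)

    if rank == 0:
        image_clean = np.vstack(result)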

[ascl:1308.009]
CReSyPS: Stellar population synthesis code

CReSyPS (Code Rennais de Synthèse de Populations Stellaires) is a stellar population synthesis code that determines the amount of core overshooting for Magellanic Cloud main-sequence stars.

[ascl:1612.009]
CRETE: Comet RadiativE Transfer and Excitation

CRETE (Comet RadiativE Transfer and Excitation) is a one-dimensional water excitation and radiation transfer code for sub-millimeter wavelengths based on the RATRAN code (ascl:0008.002). The code considers rotational transitions of water molecules given a Haser spherically symmetric distribution for the cometary coma and produces FITS image cubes that can be analyzed with tools like MIRIAD (ascl:1106.007). In addition to collisional processes to excite water molecules, the effect of infrared radiation from the Sun is approximated by effective pumping rates for the rotational levels in the ground vibrational state.

[ascl:1708.003]
CRISPRED: CRISP imaging spectropolarimeter data reduction pipeline

CRISPRED reduces data from the CRISP imaging spectropolarimeter at the Swedish 1 m Solar Telescope (SST). It performs fitting routines, corrects optical aberrations from atmospheric turbulence as well as from the optics, and compensates for inter-camera misalignments, field-dependent and time-varying instrumental polarization, and spatial variation in the detector gain and in the zero level offset (bias). It has an object-oriented IDL structure with computationally demanding routines performed in C subprograms called as dynamically loadable modules (DLMs).

[ascl:1110.020]
CROSS_CMBFAST: ISW-correlation Code

This code is an extension of CMBFAST4.5.1 that computes the ISW-correlation power spectrum and the 2-point angular ISW-correlation function for a given galaxy window function. It includes dark energy models specified by a constant equation of state (w) or a linear parameterization in the scale factor (w0, wa) and a constant sound speed (c2de). The ISW computation is limited to flat geometry. Unlike the original CMBFAST4.5 version, dark energy perturbations are implemented for a general dark energy fluid specified by w(z) and c2de in synchronous gauge. For time-varying dark energy models it is suggested not to cross the w=-1 line; as Dr. Venkman says, "never cross the streams": bad things can happen.

[ascl:1412.013]
CRPropa: Numerical tool for the propagation of UHE cosmic rays, gamma-rays and neutrinos

CRPropa computes the observable properties of UHECRs and their secondaries in a variety of models for the sources and propagation of these particles. CRPropa takes into account interactions and deflections of primary UHECRs as well as propagation of secondary electromagnetic cascades and neutrinos. CRPropa makes use of the public code SOPHIA (ascl:1412.014), and the TinyXML, CFITSIO (ascl:1010.001), and CLHEP libraries. A major advantage of CRPropa is its modularity, which allows users to implement their own modules adapted to specific UHECR propagation models.

[ascl:1202.007]
CRUNCH3D: Three-dimensional compressible MHD code

CRUNCH3D is a massively parallel, viscoresistive, three-dimensional compressible MHD code. The code employs a Fourier collocation spatial discretization, and uses a second-order Runge-Kutta temporal discretization. CRUNCH3D can be applied to MHD turbulence and magnetic fluxtube reconnection research.

[ascl:1308.011]
CRUSH: Comprehensive Reduction Utility for SHARC-2 (and more...)

CRUSH is an astronomical data reduction/imaging tool for certain imaging cameras, especially at the millimeter, sub-millimeter, and far-infrared wavelengths. It supports the SHARC-2, LABOCA, SABOCA, ASZCA, p-ArTeMiS, PolKa, GISMO, MAKO and SCUBA-2 instruments. The code is written entirely in Java, allowing it to run on virtually any platform. It is normally run from the command-line with several arguments.

[ascl:0104.002]
CSENV: A code for the chemistry of CircumStellar ENVelopes

CSENV is a code that computes the chemical abundances for a desired set of species as a function of radius in a stationary, non-clumpy, CircumStellar ENVelope. The chemical species can be atoms, molecules, ions, radicals, molecular ions, and/or their specific quantum states. Collisional ionization or excitation can be incorporated through the proper chemical channels. The chemical species interact with one another and are subject to photo-processes (dissociation of molecules, radicals, and molecular ions as well as ionization of all species). Cosmic ray ionization can be included. Chemical reaction rates are specified with possible activation temperatures and additional power-law dependences. Photo-absorption cross-sections vs. wavelength, with appropriate thresholds, can be specified for each species, while for H2+ a photoabsorption cross-section is provided as a function of wavelength and temperature. The photons originate from both the star and the external interstellar medium. The chemical species are shielded from the photons by circumstellar dust, by other species and by themselves (self-shielding). Shielding of continuum-absorbing species by these species (self and mutual shielding), line-absorbing species, and dust varies with radial optical depth. The envelope is spherical by default, but can be made bipolar with an opening solid-angle that varies with radius. In the non-spherical case, no provision is made for photons penetrating the envelope from the sides. The envelope is subject to a radial outflow (or wind), constant velocity by default, but the wind velocity can be made to vary with radius. The temperature of the envelope is specified (and thus not computed self-consistently).

[ascl:1307.015]
CTI Correction Code

Massey, Richard; Stoughton, Chris; Leauthaud, Alexie; Rhodes, Jason; Koekemoer, Anton; Ellis, Richard; Shaghoulian, Edgar

Charge Transfer Inefficiency (CTI) due to radiation damage above the Earth's atmosphere creates spurious trailing in images from Charge-Coupled Device (CCD) imaging detectors. Radiation damage also creates unrelated warm pixels, which can be used to measure CTI. This code provides pixel-based correction for CTI and has proven effective on Hubble Space Telescope Advanced Camera for Surveys raw images, successfully reducing the CTI trails by a factor of ~30 everywhere in the CCD and at all flux levels. The core is written in Java for speed, and a front-end user interface is provided in IDL. The code operates on raw data by returning individual electrons to pixels from which they were unintentionally dragged during readout. Correction takes about 25 minutes per ACS exposure, but is trivially parallelisable to multiple processors.

[ascl:1601.005]
ctools: Cherenkov Telescope Science Analysis Software

Knödlseder, Jürgen; Mayer, Michael; Deil, Christoph; Buehler, Rolf; Bregeon, Johan; Martin, Pierrick

ctools provides tools for the scientific analysis of Cherenkov Telescope Array (CTA) data. Analysis of data from existing Imaging Air Cherenkov Telescopes (such as H.E.S.S., MAGIC or VERITAS) is also supported, provided that the data and response functions are available in the format defined for CTA. ctools comprises a set of ftools-like binary executables with a command-line interface allowing for interactive step-wise data analysis. A Python module allows control of all executables, and the creation of shell or Python scripts and pipelines is supported. ctools provides cscripts, which are Python scripts complementing the binary executables. Extension of the ctools package by user-defined binary executables or Python scripts is supported. ctools is based on GammaLib (ascl:1110.007).

[ascl:1608.008]
Cuba: Multidimensional numerical integration library

The Cuba library offers four independent routines for multidimensional numerical integration: Vegas, Suave, Divonne, and Cuhre. The four algorithms work by very different methods; all can integrate vector integrands and have very similar Fortran, C/C++, and Mathematica interfaces. Their invocation is very similar, making it easy to cross-check results by substituting one method for another. For further safeguarding, the output is supplemented by a chi-square probability which quantifies the reliability of the error estimate.

[ascl:1609.010]
CuBANz: Photometric redshift estimator

CuBAN*z* is a photometric redshift estimator code for high redshift galaxies that uses the back propagation neural network along with clustering of the training set, making it very efficient. The training set is divided into several self learning clusters with galaxies having similar photometric properties and spectroscopic redshifts within a given span. The clustering algorithm uses the color information (i.e. u-g, g-r etc.) rather than the apparent magnitudes at various photometric bands, as the photometric redshift is more sensitive to the flux differences between different bands rather than the actual values. The clustering method enables accurate determination of the redshifts. CuBAN*z* considers uncertainty in the photometric measurements as well as uncertainty in the neural network training. The code is written in C.

[ascl:1805.018]
CUBE: Information-optimized parallel cosmological N-body simulation code

CUBE, written in Coarray Fortran, is a particle-mesh based parallel cosmological N-body simulation code. The memory usage of CUBE can approach as low as 6 bytes per particle. A particle pairwise (PP) force, cosmological neutrinos, and a spherical overdensity (SO) halofinder are included.

[ascl:1512.010]
CubeIndexer: Indexer for regions of interest in data cubes

Chilean Virtual Observatory; Araya, Mauricio; Candia, Gabriel; Gregorio, Rodrigo; Mendoza, Marcelo; Solar, Mauricio

CubeIndexer indexes regions of interest (ROIs) in data cubes reducing the necessary storage space. The software can process data cubes containing megabytes of data in fractions of a second without human supervision, thus allowing it to be incorporated into a production line for displaying objects in a virtual observatory. The software forms part of the Chilean Virtual Observatory (ChiVO) and provides the capability of content-based searches on data cubes to the astronomical community.

[ascl:1208.018]
CUBEP3M: High performance P3M N-body code

Harnois-Deraps, Joachim; Pen, Ue-Li; Iliev, Ilian T.; Merz, Hugh; Emberson, J. D.; Desjacques, Vincent

CUBEP3M is a high performance cosmological N-body code which has many utilities and extensions, including a runtime halo finder, a non-Gaussian initial conditions generator, tuneable accuracy, and a system of unique particle identification. CUBEP3M is fast, has a memory footprint up to three times lower than other widely used N-body codes, and has been run on up to 20,000 cores, achieving close to ideal weak scaling even at this problem size. It is well suited to and has already been used for a broad number of science applications that require either large samples of non-linear realizations or very large dark matter N-body simulations, including cosmological reionization, baryonic acoustic oscillations, weak lensing, and non-Gaussian statistics.

[ascl:1805.031]
CubiCal: Suite for fast radio interferometric calibration

CubiCal implements several accelerated gain solvers which exploit complex optimization for fast radio interferometric gain calibration. The code can be used for both direction-independent and direction-dependent self-calibration. CubiCal is implemented in Python and Cython, and multiprocessing is fully supported.

[ascl:1111.007]
CUBISM: CUbe Builder for IRS Spectra Maps

SINGS IRS Team; Smith, J. D.; Armus, Lee; Bot, Caroline; Buckalew, Brent; Dale, Danny; Helou, George; Jarrett, Tom; Roussel, Helene; Sheth, Kartik

CUBISM, written in IDL, constructs spectral cubes, maps, and arbitrary aperture 1D spectral extractions from sets of mapping mode spectra taken with Spitzer's IRS spectrograph. CUBISM is optimized for non-sparse maps of extended objects, e.g. the nearby galaxy sample of SINGS, but can be used with data from any spectral mapping AOR (primarily validated for maps which are designed as suggested by the mapping HOWTO).

[ascl:1109.013]
CULSP: Fast Calculation of the Lomb-Scargle Periodogram Using Graphics Processing Units

I introduce a new code for fast calculation of the Lomb-Scargle periodogram that leverages the computing power of graphics processing units (GPUs). After establishing a background to the newly emergent field of GPU computing, I discuss the code design and narrate key parts of its source. Benchmarking calculations indicate no significant differences in accuracy compared to an equivalent CPU-based code. However, the differences in performance are pronounced; running on a low-end GPU, the code can match 8 CPU cores, and on a high-end GPU it is faster by a factor approaching thirty. Applications of the code include analysis of long photometric time series obtained by ongoing satellite missions and upcoming ground-based monitoring facilities, and Monte Carlo simulation of periodogram statistical properties.
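
For the kind of CPU/GPU accuracy cross-check described above, astropy provides a convenient reference implementation of the same periodogram:

    import numpy as np
    from astropy.timeseries import LombScargle

    rng = np.random.default_rng(3)
    t = np.sort(rng.uniform(0, 100, 500))     # irregularly sampled times
    y = np.sin(2 * np.pi * 0.37 * t) + rng.normal(0, 0.5, t.size)

    frequency, power = LombScargle(t, y).autopower()
    print(frequency[np.argmax(power)])        # recovers ~0.37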

[ascl:1311.007]
CUPID: Clump Identification and Analysis Package

The CUPID package allows the identification and analysis of clumps of emission within 1, 2 or 3 dimensional data arrays. Whilst targeted primarily at sub-mm cubes, it can be used on any regularly gridded 1, 2 or 3D data. A variety of clump finding algorithms are implemented within CUPID, including the established ClumpFind (ascl:1107.014) and GaussClumps algorithms. In addition, two new algorithms called FellWalker and Reinhold are also provided. CUPID allows easy inter-comparison between the results of different algorithms; the catalogues produced by each algorithm contains a standard set of columns containing clump peak position, clump centroid position, the integrated data value within the clump, clump volume, and the dimensions of the clump. In addition, pixel masks are produced identifying which input pixels contribute to each clump. CUPID is distributed as part of the Starlink (ascl:1110.012) software collection.

[ascl:1311.008]
CUPID: Customizable User Pipeline for IRS Data

Written in C, the Customizable User Pipeline for IRS Data (CUPID) allows users to run the Spitzer IRS pipelines to re-create Basic Calibrated Data and extract calibrated spectra from the archived raw files. CUPID provides full access to all the parameters of the BCD, COADD, BKSUB, BKSUBX, and COADDX pipelines, as well as the opportunity for users to provide their own calibration files (e.g., flats or darks). CUPID is available for Mac, Linux, and Solaris operating systems.

[ascl:1405.015]
CURSA: Catalog and Table Manipulation Applications

The CURSA package manipulates astronomical catalogs and similar tabular datasets. It provides facilities for browsing or examining catalogs; selecting subsets from a catalog; sorting and copying catalogs; pairing two catalogs; converting catalog coordinates between some celestial coordinate systems; plotting finding charts; and performing photometric calibration. It can also extract subsets from a catalog in a format suitable for plotting using other Starlink packages such as PONGO. CURSA can access catalogs held in the popular FITS table format, the Tab-Separated Table (TST) format or the Small Text List (STL) format. Catalogs in the STL and TST formats are simple ASCII text files. CURSA also includes some facilities for accessing remote on-line catalogs via the Internet. It is part of the Starlink software collection (ascl:1110.012).

[ascl:1505.016]
CUTE: Correlation Utilities and Two-point Estimation

CUTE (Correlation Utilities and Two-point Estimation) extracts any two-point statistic from enormous datasets with hundreds of millions of objects, such as large galaxy surveys. The computational time grows with the square of the number of objects to be correlated; technology provides multiple means to massively parallelize this problem, and CUTE is specifically designed for this kind of calculation. Two implementations are provided: one for execution on shared-memory machines using OpenMP and one that runs on graphical processing units (GPUs) using CUDA.
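
The quadratic cost comes from the pair counts entering estimators such as Landy-Szalay; a brute-force sketch on tiny mock catalogs (CUTE's job is to make exactly this feasible for hundreds of millions of objects):

    import numpy as np
    from scipy.spatial.distance import pdist

    def pair_counts(points, edges):
        return np.histogram(pdist(points), bins=edges)[0]

    rng = np.random.default_rng(7)
    data = rng.uniform(0, 100, (1500, 3))
    rand = rng.uniform(0, 100, (1500, 3))
    edges = np.linspace(2, 40, 12)

    nd, nr = len(data), len(rand)
    dd = pair_counts(data, edges) / (nd * (nd - 1) / 2)   # normalized DD
    rr = pair_counts(rand, edges) / (nr * (nr - 1) / 2)   # normalized RR
    dr = np.histogram(np.linalg.norm(data[:, None, :] - rand[None, :, :],
                                     axis=-1), bins=edges)[0] / (nd * nr)

    xi = (dd - 2 * dr + rr) / rr   # Landy-Szalay estimator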

[ascl:1708.018]
CUTEX: CUrvature Thresholding EXtractor

CuTEx analyzes images in the infrared bands and extracts sources from complex backgrounds, particularly star-forming regions that offer the challenges of crowding, a highly spatially variable background, and non-PSF profiles such as protostars in their accreting phase. The code is composed of two main algorithms, the first for source detection and the second for flux extraction. The code was originally written in IDL and has been ported to the license-free GDL language. CuTEx can also be used in other bands or in scientific cases different from the native case.

This software is also available as an on-line tool from the Multi-Mission Interactive Archive web pages dedicated to the Herschel Observatory.

[ascl:1606.003]
Cygrid: Cython-powered convolution-based gridding module for Python

The Python module Cygrid grids (resamples) data to any collection of spherical target coordinates, although its typical application involves FITS maps or data cubes. The module supports the FITS world coordinate system (WCS) standard; its underlying algorithm is based on the convolution of the original samples with a 2D Gaussian kernel. A lookup table scheme allows parallelization of the code and is combined with the HEALPix tessellation of the sphere for fast neighbor searches. Cygrid's runtime scales between O(n) and O(n log n), with n being the number of input samples.
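
A gridding sketch following the quick-start pattern in the package documentation; the header values and kernel settings here are assumptions, and the set_kernel signature should be verified against your installed Cygrid version:

    import numpy as np
    import cygrid

    header = {   # target WCS grid, 200x200 pixels around (180, +30) deg
        'NAXIS': 2, 'NAXIS1': 200, 'NAXIS2': 200,
        'CTYPE1': 'RA---SIN', 'CTYPE2': 'DEC--SIN',
        'CRPIX1': 100, 'CRPIX2': 100,
        'CRVAL1': 180.0, 'CRVAL2': 30.0,
        'CDELT1': -0.01, 'CDELT2': 0.01,
    }
    rng = np.random.default_rng(5)
    lons = 180.0 + rng.uniform(-1, 1, 10000)   # sample coordinates (deg)
    lats = 30.0 + rng.uniform(-1, 1, 10000)
    signal = rng.normal(size=(10000, 1))       # one spectral channel

    gridder = cygrid.WcsGrid(header)
    sigma = 0.01                               # Gaussian kernel width (deg)
    gridder.set_kernel('gauss1d', (sigma,), 3 * sigma, sigma / 2)
    gridder.grid(lons, lats, signal)
    image = gridder.get_datacube()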

[ascl:1504.018]
D3PO: Denoising, Deconvolving, and Decomposing Photon Observations

D3PO (Denoising, Deconvolving, and Decomposing Photon Observations) addresses the inference problem of denoising, deconvolving, and decomposing photon observations. Its primary goal is the simultaneous but individual reconstruction of the diffuse and point-like photon flux given a single photon count image, where the fluxes are superimposed. A hierarchical Bayesian parameter model is used to discriminate between morphologically different signal components, yielding a diffuse and a point-like signal estimate for the photon flux components.

[ascl:1612.007]
dacapo_calibration: Photometric calibration code

dacapo_calibration implements the DaCapo algorithm used in the Planck/LFI 2015 data release for photometric calibration. The code takes as input a set of TODs and calibrates them using the CMB dipole signal. DaCapo is a variant of the well-known family of destriping algorithms for map-making.

[ascl:1804.005]
DaCHS: Data Center Helper Suite

DaCHS, the Data Center Helper Suite, is an integrated package for publishing astronomical data sets to the Virtual Observatory. Network-facing, it speaks the major VO protocols (SCS, SIAP, SSAP, TAP, Datalink, etc.). Operator-facing, many input formats, including FITS/WCS, ASCII files, and VOTable, can be processed to publication-ready data. DaCHS puts particular emphasis on integrated metadata handling, which facilitates a tight integration with the VO's Registry.

[ascl:1507.015]
DALI: Derivative Approximation for LIkelihoods

DALI (Derivative Approximation for LIkelihoods) is a fast approximation of non-Gaussian likelihoods. It extends the Fisher Matrix in a straightforward way and allows for a wider range of posterior shapes. The code is written in C/C++.

[ascl:1803.001]
DaMaSCUS-CRUST: Dark Matter Simulation Code for Underground Scatterings - Crust Edition

DaMaSCUS-CRUST determines the critical cross-section for strongly interacting DM for various direct detection experiments systematically and precisely using Monte Carlo simulations of DM trajectories inside the Earth's crust, atmosphere, or any kind of shielding. Above a critical dark matter-nucleus scattering cross section, any terrestrial direct detection experiment loses sensitivity to dark matter, since the Earth crust, atmosphere, and potential shielding layers start to block off the dark matter particles. This critical cross section is commonly determined by describing the average energy loss of the dark matter particles analytically. However, this treatment overestimates the stopping power of the Earth crust; therefore, the obtained bounds should be considered as conservative. DaMaSCUS-CRUST is a modified version of DaMaSCUS (ascl:1706.003) that accounts for shielding effects and returns a precise exclusion band.

[ascl:1706.003]
DaMaSCUS: Dark Matter Simulation Code for Underground Scatterings

DaMaSCUS calculates the density and velocity distribution of dark matter (DM) at any detector of given depth and latitude to provide dark matter particle trajectories inside the Earth. Provided a strong enough DM-matter interaction, the particles scatter on terrestrial atoms and get decelerated and deflected. The resulting local modifications of the DM velocity distribution and number density can have important consequences for direct detection experiments, especially for light DM, and lead to signatures such as diurnal modulations depending on the experiment's location on Earth. The code involves both the Monte Carlo simulation of particle trajectories and generation of data as well as the data analysis consisting of non-parametric density estimation of the local velocity distribution functions and computation of direct detection event rates.

[ascl:1011.006]
DAME: A Web Oriented Infrastructure for Scientific Data Mining & Exploration

Brescia, Massimo; Longo, Giuseppe; Djorgovski, George S.; Cavuoti, Stefano; D'Abrusco, Raffaele; Donalek, Ciro; di Guido, Alessandro; Fiore, Michelangelo; Garofalo, Mauro; Laurino, Omar; Mahabal, Ashish; Manna, Francesco; Nocella, Alfonso; D'Angelo, Giovanni; Paolillo, Maurizio

DAME (DAta Mining & Exploration) is an innovative, general purpose, Web-based, VObs compliant, distributed data mining infrastructure specialized in the exploration of massive data sets with machine learning methods. Initially fine-tuned to deal with astronomical data only, DAME has evolved into a general purpose platform that has also found applications in other domains of human endeavor.

[ascl:1412.004]
DAMIT: Database of Asteroid Models from Inversion Techniques

DAMIT (Database of Asteroid Models from Inversion Techniques) is a database of three-dimensional models of asteroids computed using inversion techniques; it provides access to reliable and up-to-date physical models of asteroids, *i.e.*, their shapes, rotation periods, and spin axis directions. Models from DAMIT can be used for further detailed studies of individual objects as well as for statistical studies of the whole set. The source codes for lightcurve inversion routines together with brief manuals, sample lightcurves, and the code for the direct problem are available for download.

[ascl:1807.023]
DAMOCLES: Monte Carlo line radiative transfer code

The Monte Carlo code DAMOCLES models the effects of dust, composed of any combination of species and grain size distributions, on optical and NIR emission lines emitted from the expanding ejecta of a late-time (> 1 yr) supernova. The emissivity and dust distributions follow smooth radial power-law distributions; any arbitrary distribution can be specified by providing the appropriate grid. DAMOCLES treats a variety of clumping structures as specified by a clumped dust mass fraction, volume filling factor, clump size and clump power-law distribution, and the emissivity distribution may also initially be clumped. The code has a large number of variable parameters ranging from 5 dimensions in the simplest models to > 20 in the most complex cases.

[ascl:1709.005]
DanIDL: IDL solutions for science and astronomy

DanIDL provides IDL functions and routines for many standard astronomy needs, such as searching for matching points between two coordinate lists of two-dimensional points where each list corresponds to a different coordinate space, estimating the full-width half-maximum (FWHM) and ellipticity of the PSF of an image, calculating pixel variances for a set of calibrated image data, and fitting a 3-parameter plane model to image data. The library also supplies astrometry, general image processing, and general scientific applications.

[ascl:1104.011]
DAOPHOT: Crowded-field Stellar Photometry Package

The DAOPHOT program exploits the capability of photometrically linear image detectors to perform stellar photometry in crowded fields. Raw CCD images are prepared prior to analysis, and after an initial star list is obtained with the FIND program, synthetic aperture photometry is performed on the detected objects with the PHOT routine. A local sky brightness and a magnitude are computed for each star in each of the specified stellar apertures, and for crowded fields, the empirical point-spread function must then be obtained for each data frame. The GROUP routine divides the star list for a given frame into optimum subgroups, and the NSTAR routine is then used to obtain photometry for all the stars in the frame by means of least-squares profile fits.
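
Aperture photometry with local sky subtraction, in the spirit of DAOPHOT's PHOT step, can be illustrated with photutils as a modern Python analogue (this is not DAOPHOT itself; positions, radii, and zero point are arbitrary):

    import numpy as np
    from photutils.aperture import (CircularAperture, CircularAnnulus,
                                    aperture_photometry)

    image = np.random.normal(100.0, 5.0, (64, 64))   # sky plus noise
    image[30:34, 30:34] += 500.0                     # a fake star

    positions = [(31.5, 31.5)]
    aper = CircularAperture(positions, r=4.0)
    annulus = CircularAnnulus(positions, r_in=8.0, r_out=12.0)

    flux_tbl = aperture_photometry(image, aper)
    sky_tbl = aperture_photometry(image, annulus)
    sky_per_pix = sky_tbl['aperture_sum'][0] / annulus.area
    flux = flux_tbl['aperture_sum'][0] - sky_per_pix * aper.area
    mag = -2.5 * np.log10(flux) + 25.0               # arbitrary zero point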

[ascl:1011.002]
DAOSPEC: An Automatic Code for Measuring Equivalent Widths in High-resolution Stellar Spectra

DAOSPEC is a Fortran code for measuring equivalent widths of absorption lines in stellar spectra with minimal human involvement. It works with standard FITS format files and it is designed for use with high resolution (R>15000) and high signal-to-noise-ratio (S/N>30) spectra that have been binned on a linear wavelength scale. First, we review the analysis procedures that are usually employed in the literature. Next, we discuss the principles underlying DAOSPEC and point out similarities and differences with respect to conventional measurement techniques. Then experiments with artificial and real spectra are discussed to illustrate the capabilities and limitations of DAOSPEC, with special attention given to the issues of continuum placement; radial velocities; and the effects of strong lines and line crowding. Finally, quantitative comparisons with other codes and with results from the literature are also presented.

[ascl:1706.004]
Dark Sage: Semi-analytic model of galaxy evolution

DARK SAGE is a semi-analytic model of galaxy formation that focuses on detailing the structure and evolution of galaxies' discs. The code-base, written in C, is an extension of SAGE (ascl:1601.006) and maintains the modularity of SAGE. DARK SAGE runs on any N-body simulation with trees organized in a supported format and containing a minimum set of basic halo properties.

[ascl:1110.002]
DarkSUSY: Supersymmetric Dark Matter Calculations

Gondolo, Paolo; Edsjö, Joakim; Bergström, Lars; Ullio, Piero; Schelke, Mia; Baltz, Ted; Bringmann, Torsten; Duda, Gintaras

DarkSUSY, written in Fortran, is a publicly-available advanced numerical package for neutralino dark matter calculations. In DarkSUSY one can compute the neutralino density in the Universe today using precision methods which include resonances, pair production thresholds and coannihilations. Masses and mixings of supersymmetric particles can be computed within DarkSUSY or with the help of external programs such as FeynHiggs, ISASUGRA and SUSPECT. Accelerator bounds can be checked to identify viable dark matter candidates. DarkSUSY also computes a large variety of astrophysical signals from neutralino dark matter, such as direct detection in low-background counting experiments and indirect detection through antiprotons, antideuterons, gamma-rays and positrons from the Galactic halo or high-energy neutrinos from the center of the Earth or of the Sun.

[ascl:1402.027]
Darth Fader: Galaxy catalog cleaning method for redshift estimation

Darth Fader is a wavelet-based method for extracting spectral features from very noisy spectra. Spectra for which a reliable redshift cannot be measured are identified and removed from the input data set automatically, resulting in a clean catalogue that gives an extremely low rate of catastrophic failures even when the spectra have a very low S/N. This technique may offer a significant boost in the number of faint galaxies with accurately determined redshifts.

[ascl:1405.011]
DATACUBE: A datacube manipulation package

DATACUBE is a command-line package for manipulating and visualizing data cubes. It was designed for integral field spectroscopy but has been extended to be a generic data cube tool, used in particular for sub-millimeter data cubes from the James Clerk Maxwell Telescope. It is part of the Starlink software collection (ascl:1110.012).

[ascl:1709.006]
DCMDN: Deep Convolutional Mixture Density Network

Deep Convolutional Mixture Density Network (DCMDN) estimates probabilistic photometric redshifts directly from multi-band imaging data by combining a deep convolutional network with a mixture density network. The estimates are expressed as Gaussian mixture models representing the probability density functions (PDFs) in redshift space. In addition to the traditional scores, the continuous ranked probability score (CRPS) and the probability integral transform (PIT) are applied as performance criteria. DCMDN predicts redshift PDFs independently of the type of source, e.g. galaxies, quasars or stars, and renders pre-classification of objects and feature extraction unnecessary; the method is extremely general and can solve any kind of probabilistic regression problem based on imaging data, such as estimating metallicity or star formation rate in galaxies.
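
For concreteness, a Gaussian mixture PDF of the kind DCMDN outputs can be evaluated as below; the component weights, means, and widths are made-up illustration values, not DCMDN output.

```python
import numpy as np

def gmm_pdf(z, weights, means, sigmas):
    """p(z) as a weighted sum of Gaussian components."""
    z = np.asarray(z)[..., None]  # broadcast the grid against the components
    comps = np.exp(-0.5 * ((z - means) / sigmas) ** 2) / (sigmas * np.sqrt(2 * np.pi))
    return (weights * comps).sum(axis=-1)

z_grid = np.linspace(0.0, 2.0, 500)
pdf = gmm_pdf(z_grid, weights=np.array([0.7, 0.3]),
              means=np.array([0.45, 1.10]), sigmas=np.array([0.05, 0.12]))
z_peak = z_grid[np.argmax(pdf)]  # a simple point estimate from the PDF
```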

[ascl:1207.006]
dcr: Cosmic Ray Removal

This code provides a method for detecting cosmic rays in single images. The algorithm is based on a simple analysis of the histogram of the image data and does not use any modeling of the picture of the object; it does not require a good signal-to-noise ratio in the image data. Identification of multiple-pixel cosmic-ray hits is achieved by running the detection-and-replacement procedure iteratively. The method is very effective when applied to images containing spectroscopic data, and is also very fast in comparison with other single-image algorithms found in astronomical data-processing packages. Practical implementation and examples of application are presented in the code paper.
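
A minimal sketch in the spirit of this histogram-based approach (not dcr's actual implementation; the threshold factor, filter size, and iteration count are assumptions) might look like:

```python
import numpy as np
from scipy.ndimage import median_filter

def clean_cosmic_rays(image, k=10.0, n_iter=3):
    """Flag pixels in the far tail of the histogram; replace with a local median."""
    img = image.astype(float).copy()
    for _ in range(n_iter):  # iterate to catch multiple-pixel hits
        med = np.median(img)
        mad = np.median(np.abs(img - med))   # robust width of the histogram
        mask = img > med + k * 1.4826 * mad  # far upper tail = cosmic-ray candidates
        if not mask.any():
            break
        img[mask] = median_filter(img, size=5)[mask]
    return img
```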

[ascl:1212.012]
ddisk: Debris disk time-evolution

ddisk is an IDL script that calculates the time evolution of a circumstellar debris disk. It calculates dust abundances over time for a debris disk produced by a planetesimal disk that is grinding away through collisional erosion.

[ascl:0008.001]
DDSCAT: The discrete dipole approximation for scattering and absorption of light by irregular particles

DDSCAT is a freely available software package which applies the "discrete dipole approximation" (DDA) to calculate scattering and absorption of electromagnetic waves by targets with arbitrary geometries and complex refractive index. The DDA approximates the target by an array of polarizable points. DDSCAT.5a requires that these polarizable points be located on a cubic lattice. DDSCAT allows accurate calculations of electromagnetic scattering from targets with "size parameters" 2 pi a/lambda < 15 provided the refractive index m is not large compared to unity (|m-1| < 1). The DDSCAT package is written in Fortran and is highly portable. The program supports calculations for a variety of target geometries (e.g., ellipsoids, regular tetrahedra, rectangular solids, finite cylinders, hexagonal prisms, etc.). Target materials may be both inhomogeneous and anisotropic. It is straightforward for the user to import arbitrary target geometries into the code, and relatively straightforward to add new target generation capability to the package. DDSCAT automatically calculates total cross sections for absorption and scattering and selected elements of the Mueller scattering intensity matrix for specified orientations of the target relative to the incident wave and for specified scattering directions. The accompanying User Guide explains how to use DDSCAT to carry out EM scattering calculations and describes CPU and memory requirements.
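
The quoted validity regime is easy to check before launching a run; a trivial helper (illustrative, not part of DDSCAT) is:

```python
import numpy as np

def dda_regime_ok(a, wavelength, m):
    """Check the quoted DDSCAT validity criteria: x = 2*pi*a/lambda < 15 and |m - 1| < 1."""
    x = 2.0 * np.pi * a / wavelength
    return x < 15.0 and abs(m - 1.0) < 1.0

# 0.1-micron grain at 0.55 microns with a water-ice-like refractive index
print(dda_regime_ok(a=0.1, wavelength=0.55, m=1.33 + 0.01j))  # True
```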

[ascl:1510.004]
DEBiL: Detached Eclipsing Binary Light curve fitter

DEBiL rapidly fits a large number of light curves to a simple model. It is the central component of a pipeline for systematically identifying and analyzing eclipsing binaries within a large dataset of light curves; the results of DEBiL can be used to flag light curves of interest for follow-up analysis.

[ascl:1501.005]
DECA: Decomposition of images of galaxies

DECA performs photometric analysis of images of disk and elliptical galaxies having a regular structure. It is written in Python and combines the capabilities of several widely used packages for astronomical data processing such as IRAF, SExtractor, and the GALFIT code to perform two-dimensional decomposition of galaxy images into several photometric components (bulge+disk). DECA can be applied to large samples of galaxies with different orientations with respect to the line of sight (including edge-on galaxies) and requires minimum human intervention.

[ascl:1801.006]
DecouplingModes: Passive modes amplitudes

DecouplingModes calculates the amplitude of the passive modes, which requires solving the Einstein equations on superhorizon scales sourced by the anisotropic stress from the magnetic fields (prior to neutrino decoupling), and the magnetic and neutrino stress (after decoupling). The code is available as a Mathematica notebook.

[ascl:1603.015]
Dedalus: Flexible framework for spectrally solving differential equations

Dedalus solves differential equations using spectral methods. It implements flexible algorithms to solve initial-value, boundary-value, and eigenvalue problems with broad ranges of custom equations and spectral domains. Its primary features include symbolic equation entry, multidimensional parallelization, implicit-explicit timestepping, and flexible analysis with HDF5. The code is written primarily in Python and features an easy-to-use interface. The numerical algorithm produces highly sparse systems for many equations which are efficiently solved using compiled libraries and MPI.
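
The core idea, that a periodic differential operator becomes diagonal in a spectral basis, is illustrated below with a plain-NumPy solve of the 1D heat equation; this is a generic sketch, not Dedalus's actual API, and the resolution, diffusivity, and timestep are arbitrary.

```python
import numpy as np

N, L, nu, dt = 256, 2 * np.pi, 0.1, 1e-3
x = np.linspace(0, L, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)  # angular wavenumbers
u = np.exp(-((x - np.pi) ** 2))             # initial condition

# u_t = nu * u_xx: each Fourier mode decays independently as exp(-nu k^2 t)
u_hat = np.fft.fft(u)
for _ in range(1000):
    u_hat *= np.exp(-nu * k**2 * dt)
u = np.fft.ifft(u_hat).real
```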

[ascl:1805.029]
DeepMoon: Convolutional neural network trainer to identify moon craters

DeepMoon trains a convolutional neural net using data derived from a global digital elevation map (DEM) and catalog of craters to recognize craters on the Moon. The TensorFlow-based pipeline code is divided into three parts. The first generates a set of images of the Moon randomly cropped from the DEM, with corresponding crater positions and radii. The second trains a convnet using these data, and the third validates the convnet's predictions.
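
A toy version of the second (training) part might look like the following Keras sketch, which maps a DEM crop to a per-pixel crater mask; the layer sizes are illustrative and not DeepMoon's actual architecture.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Map a 256x256 single-channel DEM crop to a per-pixel crater probability.
model = models.Sequential([
    layers.Input(shape=(256, 256, 1)),
    layers.Conv2D(16, 3, padding="same", activation="relu"),
    layers.Conv2D(32, 3, padding="same", activation="relu"),
    layers.Conv2D(1, 1, activation="sigmoid"),  # per-pixel mask output
])
model.compile(optimizer="adam", loss="binary_crossentropy")
# model.fit(dem_crops, crater_masks, epochs=10)  # hypothetical training arrays
```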

[ascl:1405.004]
Defringeflat: Fringe pattern removal

The IDL package Defringeflat identifies and removes fringe patterns from images such as spectrograph flat fields. It uses a wavelet transform to calculate the frequency spectrum in a region around each point of a one-dimensional array. The wavelet-transform amplitude is reconstructed from smoothed parameters to obtain the fringe's wavelet transform; an inverse wavelet transform then yields the computed fringe pattern, which is removed from the flat.

[ascl:1011.012]
DEFROST: A New Code for Simulating Preheating after Inflation

At the end of inflation, dynamical instability can rapidly deposit the energy of the homogeneous cold inflaton into excitations of other fields. This process, known as preheating, is violent, inhomogeneous, and non-linear, and must be studied numerically. DEFROST simulates scalar field dynamics in an expanding universe for this purpose; compared with available alternatives, it significantly improves both the speed and the accuracy of the calculations and is fully instrumented for 3D visualization. The code reproduces previously published results on preheating in simple chaotic inflation models and has been used to investigate the non-linear dynamics of inflaton decay. In the models studied, the fields do not directly reach equilibrium; instead, the evolution appears to become stuck in a simple but quite inhomogeneous state in which the one-point distribution function of the total energy density is universal among various two-field preheating models and is exceedingly well described by a lognormal distribution, a state suggestive of scalar field turbulence.

[ascl:1602.012]
DELightcurveSimulation: Light curve simulation code

DELightcurveSimulation simulates light curves with any given power spectral density and any probability density function, following the algorithm described in Emmanoulopoulos *et al.* (2013). The simulated products have exactly the same variability and statistical properties as the observed light curves. The code is a Python implementation of the Mathematica code provided by Emmanoulopoulos *et al.*
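
The Emmanoulopoulos algorithm builds on the Timmer & Koenig (1995) prescription for drawing a Gaussian light curve with a target PSD; a minimal sketch of that underlying step (with an assumed power-law PSD of slope -2) is:

```python
import numpy as np

rng = np.random.default_rng(0)
n, dt, slope = 1024, 1.0, -2.0
freqs = np.fft.rfftfreq(n, d=dt)[1:]  # positive frequencies
psd = freqs ** slope                  # target power spectrum

# Draw Fourier amplitudes with variance set by the PSD, random phases
re = rng.normal(size=freqs.size) * np.sqrt(psd / 2)
im = rng.normal(size=freqs.size) * np.sqrt(psd / 2)
im[-1] = 0.0                          # Nyquist component must be real
spectrum = np.concatenate(([0.0], re + 1j * im))  # zero-mean series
lc = np.fft.irfft(spectrum, n=n)      # Gaussian light curve with the target PSD
```

The full algorithm then iteratively adjusts the amplitudes of such a series so that it also matches the desired probability density function.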

[ascl:1705.003]
demc2: Differential evolution Markov chain Monte Carlo parameter estimator

demc2, also known as DE-MCMC, is a differential evolution Markov chain Monte Carlo parameter estimation library written in R for adaptive MCMC on real parameter spaces.
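
The core differential-evolution proposal (ter Braak 2006) is compact enough to sketch; it is written here in Python rather than R for illustration, with the jitter scale eps as an assumption:

```python
import numpy as np

def de_proposal(chains, i, rng, eps=1e-6):
    """Propose a jump for chain i along the difference of two other chains."""
    n, d = chains.shape
    r1, r2 = rng.choice([j for j in range(n) if j != i], size=2, replace=False)
    gamma = 2.38 / np.sqrt(2 * d)  # near-optimal scale for Gaussian targets
    return chains[i] + gamma * (chains[r1] - chains[r2]) + rng.normal(scale=eps, size=d)
```

The proposal is then accepted or rejected with the usual Metropolis ratio.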

[ascl:1511.017]
DES exposure checker: Dark Energy Survey image quality control crowdsourcer

DES exposure checker renders science-grade images directly to a web browser and allows users to mark problematic features from a set of predefined classes, thus allowing image quality control for the Dark Energy Survey to be crowdsourced through its web application. Users can also generate custom labels to help identify previously unknown problem classes; generated reports are fed back to hardware and software experts to help mitigate and eliminate recognized issues. These problem reports allow rapid correction of artifacts that otherwise may be too subtle or infrequent to be recognized.

[ascl:1804.011]
DESCQA: Synthetic Sky Catalog Validation Framework

The DESCQA framework provides rigorous validation protocols for assessing the quality of simulated sky catalogs in a straightforward and comprehensive way. DESCQA enables the inspection, validation, and comparison of an inhomogeneous set of synthetic catalogs via the provision of a common interface within an automated framework. An interactive web interface is also available at portal.nersc.gov/project/lsst/descqa.

[ascl:1304.007]
DESPOTIC: Derive the Energetics and SPectra of Optically Thick Interstellar Clouds

DESPOTIC (Derive the Energetics and SPectra of Optically Thick Interstellar Clouds), written in Python, represents optically thick interstellar clouds using a one-zone model and calculates line luminosities, line cooling rates, and in restricted cases line profiles using an escape probability formalism. DESPOTIC calculates clouds' equilibrium gas and dust temperatures and their time-dependent thermal evolution. The code allows rapid and interactive calculation of clouds' characteristic temperatures, identification of their dominant heating and cooling mechanisms, and prediction of their observable spectra across a wide range of interstellar environments.

[ascl:1402.022]
DexM: Semi-numerical simulations for very large scales

DexM (Deus ex Machina) efficiently generates density, halo, and ionization fields on very large scales and with a large dynamic range through seminumeric simulation. These properties are essential for reionization studies, especially those involving rare, massive QSOs, since one must be able to statistically capture the ionization field. DexM can also generate ionization fields directly from the evolved density field to account for the ionizing contribution of small halos. Semi-numerical simulations use more approximate physics than numerical simulations, but independently generate 3D cosmological realizations. DexM is portable and fast, and allows for explorations of wide swaths of astrophysical parameter space and an unprecedented dynamic range.

[ascl:1112.015]
Dexter: Data Extractor for scanned graphs

The NASA Astrophysics Data System (ADS) now holds 1.3 million scanned pages, containing numerous plots and figures for which the original data sets are lost or inaccessible. The availability of scans of the figures can significantly ease the regeneration of the data sets. For this purpose, the ADS has developed Dexter, a Java applet that supports the user in this process. Dexter's basic functionality is to let the user manually digitize a plot by marking points and defining the coordinate transformation from the logical to the physical coordinate system. Advanced features include automatic identification of axes, tracing lines and finding points matching a template.

[ascl:1805.002]
dftools: Distribution function fitting

dftools, written in R, finds the most likely P parameters of a D-dimensional distribution function (DF) generating N objects, where each object is specified by D observables with measurement uncertainties. For instance, if the objects are galaxies, it can fit a mass function (D=1), a mass-size distribution (D=2) or the mass-spin-morphology distribution (D=3). Unlike most common fitting approaches, this method accurately accounts for measurement uncertainties and complex selection functions.
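
As a point of comparison, a toy unbinned maximum-likelihood fit of a D=1 distribution function is sketched below; unlike dftools, it ignores the measurement uncertainties and selection function the package is designed to handle, and the log-normal form and starting values are assumptions.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import lognorm

rng = np.random.default_rng(1)
masses = lognorm.rvs(s=0.5, scale=1e10, size=500, random_state=rng)  # mock masses

def neg_log_like(p):
    s, log_scale = p
    return -np.sum(lognorm.logpdf(masses, s=s, scale=np.exp(log_scale)))

fit = minimize(neg_log_like, x0=[1.0, np.log(1e9)], method="Nelder-Mead")
print(fit.x)  # recovered shape parameter and log-scale
```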

[ascl:1410.001]
DIAMONDS: high-DImensional And multi-MOdal NesteD Sampling

DIAMONDS (high-DImensional And multi-MOdal NesteD Sampling) provides Bayesian parameter estimation and model comparison by means of the nested sampling Monte Carlo (NSMC) algorithm, an efficient and powerful method very suitable for high-dimensional and multi-modal problems; it can be used for any application involving Bayesian parameter estimation and/or model selection in general. Developed in C++11, DIAMONDS is structured in classes for flexibility and configurability. Any new model, likelihood and prior PDFs can be defined and implemented upon a basic template.
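
The nested-sampling idea is simple to demonstrate in toy form; the sketch below (in Python, not DIAMONDS's C++11 API) uses brute-force prior resampling where a real sampler such as DIAMONDS draws new points efficiently under the likelihood constraint.

```python
import numpy as np

rng = np.random.default_rng(2)
loglike = lambda x: -0.5 * np.sum((x - 0.5) ** 2 / 0.01**2, axis=-1)

n_live = 100
live = rng.uniform(size=(n_live, 2))  # live points drawn from a unit-square prior
logZ = -np.inf
log_shell = np.log(1.0 - np.exp(-1.0 / n_live))
for it in range(500):
    i = int(np.argmin(loglike(live)))  # worst live point
    logL_min = float(loglike(live[i]))
    # shell weight ~ prior volume shrinking as exp(-it / n_live)
    logZ = np.logaddexp(logZ, log_shell - it / n_live + logL_min)
    while True:  # brute-force constrained replacement draw
        trial = rng.uniform(size=2)
        if loglike(trial) > logL_min:
            live[i] = trial
            break
print("log-evidence estimate:", logZ)
```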

[ascl:1607.002]
DICE: Disk Initial Conditions Environment

DICE models initial conditions of idealized galaxies to study their secular evolution or their more complex interactions such as mergers or compact groups using N-Body/hydro codes. The code can set up a large number of components modeling distinct parts of the galaxy, and creates 3D distributions of particles using an N-try MCMC algorithm which does not require prior knowledge of the distribution function. The gravitational potential is then computed on a multi-level Cartesian mesh by solving the Poisson equation in Fourier space. Finally, the dynamical equilibrium of each component is computed by integrating the Jeans equations for each particle. Several galaxies can be generated in a row and be placed on Keplerian orbits to model interactions. DICE writes the initial conditions in the Gadget1 or Gadget2 (ascl:0003.001) format and is fully compatible with Ramses (ascl:1011.007).
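
The Fourier-space Poisson solve mentioned above is a standard trick worth seeing once: on a periodic mesh, nabla^2 phi = 4 pi G rho becomes an algebraic division by -k^2 mode by mode. A minimal NumPy sketch (not DICE's implementation) follows.

```python
import numpy as np

def poisson_fft(rho, box_size, G=1.0):
    """Solve nabla^2 phi = 4 pi G rho on a periodic cubic mesh."""
    n = rho.shape[0]
    k = 2 * np.pi * np.fft.fftfreq(n, d=box_size / n)
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    k2[0, 0, 0] = 1.0                  # avoid division by zero at k = 0
    phi_hat = -4 * np.pi * G * np.fft.fftn(rho) / k2
    phi_hat[0, 0, 0] = 0.0             # fix the mean of the potential
    return np.fft.ifftn(phi_hat).real
```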

[ascl:1801.010]
DICE/ColDICE: 6D collisionless phase space hydrodynamics using a Lagrangian tessellation

DICE is a C++ template library designed to solve collisionless fluid dynamics in 6D phase space using massively parallel supercomputers via a hybrid OpenMP/MPI parallelization. ColDICE, based on DICE, implements a cosmological and physical Vlasov-Poisson solver for cold systems such as cold dark matter (CDM) dynamics.

[ascl:1704.013]
Difference-smoothing: Measuring time delay from light curves

The Difference-smoothing MATLAB code measures the time delay between the light curves of the images of a gravitationally lensed quasar. It uses a smoothing timescale free parameter, generates more realistic synthetic light curves to estimate the time delay uncertainty, and uses a χ² plot to assess the reliability of a time delay measurement as well as to identify instances of catastrophic failure of the time delay estimator. A systematic bias in the measurement of time delays for some light curves can be eliminated by applying a correction to each measured time delay.
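
A much-simplified illustration of scanning trial delays with a χ² statistic (not the difference-smoothing estimator itself, which also fits a free smoothing timescale) is:

```python
import numpy as np

def delay_chi2(t_a, f_a, err_a, t_b, f_b, delays):
    """Chi-square of image A against image B shifted by each trial delay."""
    chi2 = []
    for tau in delays:
        fb_on_a = np.interp(t_a, t_b + tau, f_b)  # shifted B sampled at A's epochs
        offset = np.mean(f_a - fb_on_a)           # free magnitude offset
        chi2.append(np.sum(((f_a - fb_on_a - offset) / err_a) ** 2))
    return np.asarray(chi2)

# best_delay = delays[np.argmin(delay_chi2(t_a, f_a, err_a, t_b, f_b, delays))]
```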

[ascl:1512.012]
DiffuseModel: Modeling the diffuse ultraviolet background

DiffuseModel calculates the scattered radiation from dust scattering in the Milky Way based on stars from the Hipparcos catalog. It uses a Monte Carlo method to implement multiple scattering and assumes a user-supplied grid for the dust distribution. The output is a FITS file with the diffuse light over the Galaxy. It is intended for use in the UV (900–3000 Å) but may be modified for use at other wavelengths and in other galaxies.
