ASCL.net

Astrophysics Source Code Library

Making codes discoverable since 1999

Browsing Codes

Results 751-1000 of 1927 (1899 ASCL, 28 submitted)

[ascl:1701.011] GWFrames: Manipulate gravitational waveforms

GWFrames eliminates all rotational behavior, thus simplifying the waveform as much as possible and allowing direct generalizations of methods for analyzing nonprecessing systems. In the process, the angular velocity of a waveform is introduced, which also has important uses, such as supplying a partial solution to an important inverse problem.

[ascl:1203.005] Gyoto: General relativitY Orbit Tracer of Observatoire de Paris

GYOTO, a general relativistic ray-tracing code, aims at computing images of astronomical bodies in the vicinity of compact objects, as well as trajectories of massive bodies in relativistic environments. This code is capable of integrating the null and timelike geodesic equations not only in the Kerr metric, but also in any metric computed numerically within the 3+1 formalism of general relativity. Simulated images and spectra have been computed for a variety of astronomical targets, such as a moving star or a toroidal accretion structure. The underlying code is open source and freely available. It is user-friendly, quick to learn, and very modular, so that extensions are easy to integrate. Custom analytical metrics and astronomical targets can be implemented in C++ plug-in extensions independent from the main code.

[ascl:1308.010] GYRE: Stellar oscillation code

GYRE is an oscillation code that solves the stellar pulsation equations (both adiabatic and non-adiabatic) using a novel Magnus Multiple Shooting numerical scheme devised to overcome certain weaknesses of the usual relaxation and shooting schemes. The code is accurate (up to 6th order in the number of grid points), robust, and makes efficient use of multiple processor cores and/or nodes.

[ascl:1402.031] gyrfalcON: N-body code

gyrfalcON (GalaxY simulatoR using falcON) is a full-fledged N-body code using Dehnen’s force algorithm of complexity O(N) (falcON); this algorithm is approximately 10 times faster than an optimally coded tree code. The code features individual adaptive time steps and individual (but fixed) softening lengths. gyrfalcON is included in and requires NEMO to run.

[ascl:1402.032] HALOFIT: Nonlinear distribution of cosmological mass and galaxies

HALOFIT provides an explanatory framework for galaxy bias and clustering and has been incorporated into CMB packages such as CMBFAST (ascl:9909.004) and CAMB (ascl:1102.026). It attains a reasonable level of precision, though the halo model does not match N-body data perfectly. The code is written in Fortran 77. HALOFIT tends to underpredict the power on the smallest scales in standard LCDM universes (although HALOFIT was designed to work for a much wider range of power spectra); its accuracy can be improved by using a supplied correction.
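
For orientation, this is how a HALOFIT nonlinear power spectrum is commonly obtained today through CAMB's Python wrapper (a sketch with illustrative cosmological parameters, assuming a recent camb release):

```python
import camb

pars = camb.CAMBparams()
pars.set_cosmology(H0=67.5, ombh2=0.022, omch2=0.122)
pars.InitPower.set_params(ns=0.965)
pars.set_matter_power(redshifts=[0.0], kmax=10.0)
pars.NonLinear = camb.model.NonLinear_both    # apply HALOFIT-type corrections

results = camb.get_results(pars)
kh, z, pk_nonlin = results.get_matter_power_spectrum(minkh=1e-3, maxkh=10.0,
                                                     npoints=200)
```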

[ascl:1010.053] Halofitting codes for DGP and Degravitation

We perform N-body simulations of theories with infinite-volume extra dimensions, such as the Dvali-Gabadadze-Porrati (DGP) model and its higher-dimensional generalizations, where 4D gravity is mediated by massive gravitons. The longitudinal mode of these gravitons mediates an extra scalar force, which we model as a density-dependent modification to the Poisson equation. This enhances gravitational clustering, particularly on scales that have undergone mild nonlinear processing. While the standard non-linear fitting algorithm of Smith et al. overestimates this power enhancement on non-linear scales, we present a modified fitting formula that offers a remarkably good fit to our power spectra. Due to the uncertainty in galaxy bias, our results are consistent with precision power spectrum determinations from galaxy redshift surveys, even for graviton Compton wavelengths as small as 300 Mpc. Our model is sufficiently general that we expect it to capture the phenomenology of a wide class of related higher-dimensional gravity scenarios.

[ascl:1505.017] HALOGEN: Approximate synthetic halo catalog generator

HALOGEN generates approximate synthetic halo catalogs. Written in C, it decomposes the problem of generating cosmological tracer distributions (e.g., halos) into four steps: generating an approximate density field, generating the required number of tracers from a CDF over mass, placing the tracers on field particles according to a bias scheme dependent on local density, and assigning velocities to the tracers based on velocities of local particles. It also implements a default set of four models for these steps. HALOGEN uses 2LPTic (ascl:1201.005) and CUTE (ascl:1505.016); the software is flexible and can be adapted to varying cosmologies and simulation specifications.
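
The four-step decomposition is easy to see in schematic form. The sketch below is hypothetical Python (the real code is C, and all function and argument names here are invented) showing one simple model choice per step:

```python
import numpy as np

def halogen_like_catalog(positions, local_density, velocities,
                         inv_mass_cdf, n_tracers, alpha=2.0, seed=0):
    rng = np.random.default_rng(seed)
    # Step 1 (assumed done upstream): an approximate density field,
    # represented here by 2LPT-like field particles with local densities.

    # Step 2: draw the required tracer masses by inverting a CDF over mass.
    masses = inv_mass_cdf(rng.uniform(size=n_tracers))

    # Step 3: place tracers on field particles with a density-dependent
    # bias (a single power-law bias as the stand-in scheme).
    weights = local_density ** alpha
    idx = rng.choice(len(positions), size=n_tracers, p=weights / weights.sum())

    # Step 4: tracers inherit the velocities of their host particles.
    return positions[idx], velocities[idx], masses
```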

[ascl:1407.020] Halogen: Multimass spherical structure models for N-body simulations

Halogen, written in C, generates multimass spherically symmetric initial conditions for N-body simulations. A large family of radial density profiles is supported. The initial conditions are sampled from the full distribution function.

[ascl:1604.005] Halotools: Galaxy-Halo connection models

Halotools builds and tests models of the galaxy-halo connection and analyzes catalogs of dark matter halos. The core functions of the package include fast generation of synthetic galaxy populations using HODs, abundance matching, and related methods; efficient algorithms for calculating galaxy clustering, lensing, z-space distortions, and other astronomical statistics; a modular, object-oriented framework for designing galaxy evolution models; and end-to-end support for reducing halo catalogs and caching them as hdf5 files.
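
A minimal usage sketch, assuming the Bolshoi halo catalog has already been downloaded into the Halotools cache as described in its documentation:

```python
from halotools.sim_manager import CachedHaloCatalog
from halotools.empirical_models import PrebuiltHodModelFactory

halocat = CachedHaloCatalog(simname='bolshoi', redshift=0)
model = PrebuiltHodModelFactory('zheng07', threshold=-21)
model.populate_mock(halocat)            # attaches a .mock attribute
galaxies = model.mock.galaxy_table      # Astropy Table of mock galaxies
```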

[ascl:1210.022] HAM2D: 2D Shearing Box Model

HAM solves non-relativistic hyperbolic partial differential equations in conservative form using high-resolution shock-capturing techniques. This version of HAM has been configured to solve the magnetohydrodynamic equations of motion in axisymmetry to evolve a shearing box model.

[ascl:1201.014] Hammurabi: Simulating polarized Galactic synchrotron emission

The Hammurabi code is a publicly available C++ code for generating mock polarized observations of Galactic synchrotron emission with telescopes such as LOFAR, SKA, Planck, and WMAP, based on model inputs for the Galactic magnetic field (GMF), the cosmic-ray density distribution, and the thermal electron density. The Hammurabi code allows one to perform simulations of several different data sets simultaneously, providing a more reliable constraint of the magnetized ISM.

[ascl:1209.005] HARM: A Numerical Scheme for General Relativistic Magnetohydrodynamics

HARM uses a conservative, shock-capturing scheme for evolving the equations of general relativistic magnetohydrodynamics. The fluxes are calculated using the Harten, Lax, & van Leer scheme. A variant of constrained transport, proposed earlier by Tóth, is used to maintain a divergence-free magnetic field. Only the covariant form of the metric in a coordinate basis is required to specify the geometry. On smooth flows HARM converges at second order.
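
The HLL flux itself is a single average over the Riemann fan; a generic Python sketch for a 1D system of conservation laws (not HARM's C implementation), where U are conserved variables, F their fluxes, and cmin/cmax the fastest left- and right-going wave speeds at the interface:

```python
import numpy as np

def hll_flux(U_L, U_R, F_L, F_R, cmin, cmax):
    # All waves moving one way: pure upwinding.
    if cmin >= 0.0:
        return F_L
    if cmax <= 0.0:
        return F_R
    # Otherwise, the HLL average over the Riemann fan.
    return (cmax * F_L - cmin * F_R
            + cmax * cmin * (np.asarray(U_R) - np.asarray(U_L))) / (cmax - cmin)
```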

[ascl:1306.003] Harmony: Synchrotron Emission Coefficients

Harmony is a general numerical scheme for evaluating magnetobremsstrahlung (MBS) emission and absorption coefficients for both polarized and unpolarized light in a plasma with a general distribution function.

[ascl:1109.004] HAZEL: HAnle and ZEeman Light

A big challenge in solar and stellar physics in the coming years will be to decipher the magnetism of the solar outer atmosphere (chromosphere and corona) along with its dynamic coupling with the magnetic fields of the underlying photosphere. To this end, it is important to develop rigorous diagnostic tools for the physical interpretation of spectropolarimetric observations in suitably chosen spectral lines. HAZEL is a computer program for the synthesis and inversion of Stokes profiles caused by the joint action of atomic level polarization and the Hanle and Zeeman effects in some spectral lines of diagnostic interest, such as those of the He I 1083.0 nm and 587.6 nm (or D3) multiplets. It is based on the quantum theory of spectral line polarization, which takes into account in a rigorous way all the relevant physical mechanisms and ingredients (optical pumping, atomic level polarization, level crossings and repulsions, Zeeman, Paschen-Back and Hanle effects). The influence of radiative transfer on the emergent spectral line radiation is taken into account through a suitable slab model. The user can either calculate the emergent intensity and polarization for any given magnetic field vector or infer the dynamical and magnetic properties from the observed Stokes profiles via an efficient inversion algorithm based on global optimization methods.

[ascl:1711.022] HBT: Hierarchical Bound-Tracing

HBT is a Hierarchical Bound-Tracing subhalo finder and merger tree builder for cosmological numerical simulations. It tracks haloes from birth and continues to track them after mergers, finding self-bound structures as subhaloes and recording their merger histories as merger trees.

[ascl:1711.023] HBT+: Subhalo finder and merger tree builder

HBT+ is a hybrid subhalo finder and merger tree builder for cosmological simulations. It comes as an MPI edition that can be run on distributed clusters or shared memory machines and is MPI/OpenMP parallelized, and also as an OpenMP edition that can be run on shared memory machines and is only OpenMP parallelized. The OpenMP edition is more memory efficient than the MPI edition on shared memory machines, and is more suitable for analyzing zoomed-in simulations that are difficult to balance on distributed clusters. Both editions support hydro simulations with gas/stars.

[ascl:1502.009] HDS: Hierarchical Data System

The Hierarchical Data System (HDS) is a file-based hierarchical data system designed for the storage of a wide variety of information. It is particularly suited to the storage of large multi-dimensional arrays (with their ancillary data) where efficient access is needed. It is a key component of the Starlink software collection (ascl:1110.012) and is used by the Starlink N-Dimensional Data Format (NDF) library (ascl:1411.023).

HDS organizes data into hierarchies, broadly similar to the directory structure of a hierarchical filing system, but contained within a single HDS container file. The structures stored in these files are self-describing and flexible; HDS supports modification and extension of structures previously created, as well as functions such as deletion, copying, and renaming. All information stored in HDS files is portable between the machines on which HDS is implemented. Thus, there are no format conversion problems when moving between machines. HDS can write files in a private binary format (version 4), or be layered on top of HDF5 (version 5).

[ascl:1107.018] HEALPix: Hierarchical Equal Area isoLatitude Pixelization of a sphere

HEALPix is an acronym for Hierarchical Equal Area isoLatitude Pixelization of a sphere. As suggested in the name, this pixelization produces a subdivision of a spherical surface in which each pixel covers the same surface area as every other pixel. Another property of the HEALPix grid is that the pixel centers occur on a discrete number of rings of constant latitude; the number of constant-latitude rings depends on the resolution of the HEALPix grid.
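
Both properties are easy to check with healpy, the Python implementation of the HEALPix library:

```python
import numpy as np
import healpy as hp

nside = 64
npix = hp.nside2npix(nside)        # 12 * nside**2 pixels in total
area = hp.nside2pixarea(nside)     # every pixel covers this same area (sr)

# Pixel centers lie on rings of constant latitude; a map of resolution
# nside has 4*nside - 1 such rings.
theta, _ = hp.pix2ang(nside, np.arange(npix))
assert np.unique(np.round(theta, 8)).size == 4 * nside - 1
```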

[ascl:1408.004] HEAsoft: Unified Release of FTOOLS and XANADU

HEASOFT combines XANADU, high-level, multi-mission software for X-ray astronomical spectral, timing, and imaging data analysis tasks, and FTOOLS (ascl:9912.002), general and mission-specific software to manipulate FITS files, into one package. It also contains the NuSTAR subpackage of tasks, NuSTAR Data Analysis Software (NuSTARDAS). The source code for the software can be downloaded; precompiled executables for the most widely used computer platforms are also available for download. As an additional service, HEAsoft tasks can be run directly from a web browser via WebHera.

[ascl:1506.009] HEATCVB: Coronal heating rate approximations

HEATCVB is a stand-alone Fortran 77 subroutine that estimates the local volumetric coronal heating rate with four required inputs: the radial distance r, the wind speed u, the mass density ρ, and the magnetic field strength |B_0|. The primary output is the heating rate Q_turb at the location defined by the input parameters. HEATCVB also computes the local turbulent dissipation rate of the waves, γ = Q_turb/(2U_A).

[ascl:1903.017] HelioPy: Heliospheric and planetary physics library

HelioPy provides a set of tools to download and read in data, and carry out other common data processing tasks for heliospheric and planetary physics. It handles a wide variety of solar and satellite data and builds upon the SpiceyPy package (ascl:1903.016) to provide an accessible interface for performing orbital calculations. It has also implemented a framework to perform transformations between some common coordinate systems.

[ascl:1503.004] HELIOS-K: Opacity Calculator for Radiative Transfer

HELIOS-K is an opacity calculator for exoplanetary atmospheres. It takes a line list as an input and computes the line shapes of an arbitrary number of spectral lines (~millions to billions). HELIOS-K is capable of computing 100,000 spectral lines in 1 second; it is written in CUDA, is optimized for graphics processing units (GPUs), and can be used with the HELIOS radiative transfer code (ascl:1807.009).

[ascl:1807.009] HELIOS: Radiative transfer code for exoplanetary atmospheres

HELIOS, a radiative transfer code, is constructed for studying exoplanetary atmospheres. The model atmospheres of HELIOS are one-dimensional and plane-parallel, and the equation of radiative transfer is solved in the two-stream approximation with non-isotropic scattering. Though HELIOS can be used alone, the opacity calculator HELIOS-K (ascl:1503.004) can be used with it to provide the molecular opacities.

[ascl:1805.019] HENDRICS: High ENergy Data Reduction Interface from the Command Shell

HENDRICS, a rewrite and update of MaLTPyNT (ascl:1502.021), contains command-line scripts based on Stingray (ascl:1608.001) to perform a quick-look (spectral-)timing analysis of X-ray data, properly treating gaps in the data due, e.g., to occultation by the Earth or passages through the SAA. Despite its original focus on NuSTAR, HENDRICS can perform standard aperiodic timing analysis on X-ray data from, in principle, any other satellite; its features include power density and cross spectra, time lags, pulsar searches with epoch folding and the Z_n^2 statistic, and color-color and color-intensity diagrams. The periodograms produced by HENDRICS (such as a power density spectrum or a cospectrum) can be saved in a format compatible with XSPEC (ascl:9910.005) or ISIS (ascl:1302.002).

[ascl:1102.016] HERACLES: 3D Hydrodynamical Code to Simulate Astrophysical Fluid Flows

HERACLES is a 3D hydrodynamical code used to simulate astrophysical fluid flows. It uses a finite volume method on fixed grids to solve the equations of hydrodynamics, MHD, radiative transfer and gravity. This software is developed at the Service d'Astrophysique, CEA/Saclay as part of the COAST project and is registered under the CeCILL license. HERACLES simulates astrophysical fluid flows using a grid-based Eulerian finite volume Godunov method. It is capable of simulating pure hydrodynamical flows, magneto-hydrodynamic flows, radiation hydrodynamic flows (using either flux-limited diffusion or the M1 moment method), self-gravitating flows using a Poisson solver, or all of the above. HERACLES uses Cartesian, spherical and cylindrical grids.

[ascl:1808.005] hfof: Friends-of-Friends via spatial hashing

hfof is a 3-d friends-of-friends (FoF) cluster finder with Python bindings based on a fast spatial hashing algorithm that identifies connected sets of points where the point-wise connections are determined by a fixed spatial distance. This technique sorts particles into fine cells sufficiently compact to guarantee their cohabitants are linked, and uses locality sensitive hashing to search for neighboring (blocks of) cells. Tests on N-body simulations of up to a billion particles exhibit speed increases of up to 20x compared with FoF via trees, and the code consistently completes in less time than a k-d tree construction, giving it an intrinsic advantage over tree-based methods.
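
The idea behind the hashing is easiest to see in a toy version: bin particles into cells whose side equals the linking length, so only pairs in the same or adjacent cells need distance tests. The sketch below is a simplified illustration with a union-find, not hfof's optimized algorithm:

```python
import numpy as np
from collections import defaultdict

def fof_hashed(pos, b):
    """Label groups of particles linked at distance < b (toy version)."""
    n = len(pos)
    parent = np.arange(n)

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i

    # Hash particles into cells of side b; any linked pair must share a
    # cell or sit in adjacent cells.
    cells = defaultdict(list)
    for i, key in enumerate(map(tuple, np.floor(pos / b).astype(int))):
        cells[key].append(i)

    offsets = [(dx, dy, dz) for dx in (-1, 0, 1)
               for dy in (-1, 0, 1) for dz in (-1, 0, 1)]
    for key, members in cells.items():
        for off in offsets:
            nb = tuple(k + o for k, o in zip(key, off))
            for j in cells.get(nb, []):
                for i in members:
                    if i < j and np.sum((pos[i] - pos[j]) ** 2) < b * b:
                        parent[find(i)] = find(j)
    return np.array([find(i) for i in range(n)])
```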

[ascl:1607.011] HfS: Hyperfine Structure fitting tool

HfS fits the hyperfine structure of spectral lines, with multiple velocity components. The HfS_nh3 procedures included in HfS fit simultaneously the hyperfine structure of the NH3 (J,K)= (1,1) and (2,2) inversion transitions, and perform a standard analysis to derive the NH3 column density, rotational temperature Trot, and kinetic temperature Tk. HfS uses a Monte Carlo approach for fitting the line parameters, with special attention to the derivation of the parameter uncertainties. HfS includes procedures that make use of parallel computing for fitting spectra from a data cube.

[ascl:1801.004] hh0: Hierarchical Hubble Constant Inference

hh0 is a Bayesian hierarchical model (BHM) that describes the full distance ladder, from nearby geometric-distance anchors through Cepheids to SNe in the Hubble flow. It does not rely on any of the underlying distributions being Gaussian, allowing outliers to be modeled and obviating the need for any arbitrary data cuts.

[submitted] HHTpywrapper: Python Wrapper for Hilbert–Huang Transform MATLAB Package

HHTpywrapper is a Python interface for calling the Hilbert–Huang Transform (HHT) MATLAB package. HHT is a time-frequency analysis method that adaptively decomposes a signal, which may be generated by non-stationary and/or nonlinear processes, into basis components at different timescales, and then Hilbert-transforms these components into instantaneous phases, frequencies and amplitudes as functions of time. HHT has been successfully applied to analyzing X-ray quasi-periodic oscillations (QPOs) from the active galactic nucleus RE J1034+396 (Hu et al. 2014) and two black hole X-ray binaries, XTE J1550–564 (Su et al. 2015) and GX 339-4 (Su et al. 2017). HHTpywrapper provides examples of reproducing the HHT analysis results in Su et al. (2015) and Su et al. (2017). This project originated from Astro Hack Week 2015.

[ascl:1808.010] hi_class: Horndeski in the Cosmic Linear Anisotropy Solving System

hi_class implements Horndeski's theory of gravity in the modern Cosmic Linear Anisotropy Solving System (ascl:1106.020). It can be used to compute any cosmological observable at the level of background or linear perturbations, such as cosmological distances, cosmic microwave background, matter power and number count spectra (including relativistic effects). hi_class can be readily interfaced with Monte Python (ascl:1307.002) to test gravity and dark energy models.

[ascl:1606.004] HIBAYES: Global 21-cm Bayesian Monte-Carlo Model Fitting

HIBAYES implements fully-Bayesian extraction of the sky-averaged (global) 21-cm signal from the Cosmic Dawn and Epoch of Reionization in the presence of foreground emission. User-defined likelihood and prior functions are called by the sampler PyMultiNest (ascl:1606.005) in order to jointly explore the full (signal plus foreground) posterior probability distribution and evaluate the Bayesian evidence for a given model. Implemented models, for simulation and fitting, include Gaussians (HI signal) and polynomials (foregrounds). Some simple plotting and analysis tools are supplied. The code can be extended to other models (physical or empirical), to incorporate data from other experiments, or to use alternative Monte-Carlo sampling engines as required.
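
The wiring of such a fit looks roughly as follows — a generic PyMultiNest sketch with an invented Gaussian-signal toy model, not HIBAYES' actual likelihood, assuming MultiNest and PyMultiNest are installed:

```python
import numpy as np
import pymultinest

freqs = np.linspace(40.0, 120.0, 81)     # MHz
sigma = 5.0                              # noise level, mK
data = (-50.0 * np.exp(-0.5 * ((freqs - 78.0) / 10.0) ** 2)
        + np.random.normal(0.0, sigma, freqs.size))

def prior(cube, ndim, nparams):
    # Map the unit hypercube onto physical parameter ranges.
    cube[0] = cube[0] * 200.0 - 200.0    # amplitude in [-200, 0] mK
    cube[1] = cube[1] * 80.0 + 40.0      # center in [40, 120] MHz
    cube[2] = cube[2] * 29.0 + 1.0       # width in [1, 30] MHz

def loglike(cube, ndim, nparams):
    model = cube[0] * np.exp(-0.5 * ((freqs - cube[1]) / cube[2]) ** 2)
    return float(-0.5 * np.sum(((data - model) / sigma) ** 2))

pymultinest.run(loglike, prior, 3, outputfiles_basename='gauss_signal_',
                resume=False, verbose=False)
```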

[ascl:1607.019] HIDE: HI Data Emulator

HIDE (HI Data Emulator) forward-models the process of collecting astronomical radio signals in a single dish radio telescope instrument and outputs pixel-level time-ordered-data. Written in Python, HIDE models the noise and RFI of the data and, with its companion code SEEK (ascl:1607.020), provides end-to-end simulation and processing of radio survey data.

[ascl:1802.007] HiGal_SED_Fitter: SED fitting tools for Herschel Hi-Gal data

HiGal SED Fitter fits modified blackbody SEDs to Herschel data, specifically targeted at Herschel Hi-Gal data.

[ascl:1010.065] Higher Post Newtonian Gravity Calculations

Motivated by experimental probes of general relativity, we adopt methods from perturbative (quantum) field theory to compute, up to certain integrals, the effective lagrangian for its n-body problem. Perturbation theory is performed about a background Minkowski spacetime to O[(v/c)^4] beyond Newtonian gravity, where v is the typical speed of these n particles in their center of energy frame. For the specific case of the 2 body problem, the major efforts underway to measure gravitational waves produced by in-spiraling compact astrophysical binaries require their gravitational interactions to be computed beyond the currently known O[(v/c)^7]. We argue that such higher order post-Newtonian calculations must be automated for these field theoretic methods to be applied successfully to achieve this goal. In view of this, we outline an algorithm that would in principle generate the relevant Feynman diagrams to an arbitrary order in v/c and take steps to develop the necessary software. The Feynman diagrams contributing to the n-body effective action at O[(v/c)^6] beyond Newton are derived.

[ascl:1207.002] HiGPUs: Hermite's N-body integrator running on Graphic Processing Units

HiGPUs is an implementation of the numerical integration of the classical, gravitational, N-body problem, based on a 6th order Hermite integration scheme with block time steps, with a direct evaluation of the particle-particle forces. The main innovation of this code is its full parallelization, exploiting both OpenMP and MPI in the use of the multicore Central Processing Units as well as either Compute Unified Device Architecture (CUDA) or OpenCL for the hosted Graphic Processing Units. We tested both performance and accuracy of the code using up to 256 GPUs in the supercomputer IBM iDataPlex DX360M3 Linux Infiniband Cluster provided by the Italian supercomputing consortium CINECA, for values of N ≤ 8 million. We were able to follow the evolution of a system of 8 million bodies for a few crossing times, a task previously unreached by direct summation codes.

HiGPUs is also available as part of the AMUSE project.

[ascl:1807.008] HII-CHI-mistry_UV: Oxygen abundance and ionization parameters for ultraviolet emission lines

HII-CHI-mistry_UV derives oxygen and carbon abundances using the ultraviolet (UV) lines emitted by the gas phase ionized by massive stars. The code first fixes C/O using ratios of appropriate emission lines and, in a second step, calculates O/H and the ionization parameter from carbon lines in the UV. An optical version of this Python code, HII-CHI-mistry (ascl:1807.007), is also available.

[ascl:1807.007] HII-CHI-mistry: Oxygen abundance and ionization parameters for optical emission lines

HII-CHI-mistry calculates the oxygen abundance for gaseous nebulae ionized by massive stars using optical collisionally excited emission lines. This code takes the extinction-corrected emission line fluxes and, based on a χ² minimization over a grid of photoionization models, determines chemical abundances (O/H, N/O) and ionization parameters. An ultraviolet version of this Python code, HII-CHI-mistry-UV (ascl:1807.008), is also available.
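
Schematically, the minimization reduces to a weighted search over the model grid (array layout invented for illustration; the actual code's grid handling differs):

```python
import numpy as np

def best_grid_match(obs_flux, obs_err, grid_flux, grid_params):
    # obs_flux, obs_err: extinction-corrected line fluxes and their errors
    # grid_flux:   (n_models, n_lines) photoionization-model predictions
    # grid_params: (n_models, 3) columns of 12+log(O/H), log(N/O), log U
    chi2 = np.sum(((grid_flux - obs_flux) / obs_err) ** 2, axis=1)
    w = np.exp(-0.5 * (chi2 - chi2.min()))      # relative likelihoods
    # One common convention: a chi-squared-weighted mean over the grid.
    return (w[:, None] * grid_params).sum(axis=0) / w.sum()
```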

[ascl:1603.017] HIIexplorer: Detect and extract integrated spectra of HII regions

HIIexplorer detects and extracts the integrated spectra of HII regions from IFS datacubes. The procedure assumes HII regions are peaky/isolated structures with strong ionized gas emission, clearly above the continuum emission and the average ionized gas emission across the galaxy, and that HII regions have a typical physical size of about a hundred or a few hundred parsecs, which corresponds to a typical projected size of a few arcsec for galaxies at z~0.016. All input parameters can be derived from a visual inspection and/or a statistical analysis of the Hα emission line map. The algorithm produces a segmentation FITS file describing the pixels associated with each HII region.

[ascl:1405.005] HIIPHOT: Automated Photometry of H II Regions

HIIPHOT enables accurate photometric characterization of H II regions while permitting genuine adaptivity to irregular source morphology. It makes a first guess at the shapes of all sources through object recognition techniques; it then allows for departure from such idealized "seeds" through an iterative growing procedure and derives photometric corrections for spatially coincident diffuse emission from a low-order surface fit to the background after exclusion of all detected sources.

[ascl:1111.001] HIPE: Herschel Interactive Processing Environment

The Herschel Space Observatory is the fourth cornerstone mission in the ESA science programme and performs photometry and spectroscopy in the 55 - 672 micron range. The development of the Herschel Data Processing System started in 2002 to support the data analysis for Instrument Level Tests. The Herschel Data Processing System was used for the pre-flight characterisation of the instruments, and during various ground segment test campaigns. Following the successful launch of Herschel on 14 May 2009, the Herschel Data Processing System demonstrated its maturity when the first PACS preview observation of M51 was processed within 30 minutes of reception of the first science data after launch. Also the first HIFI observations of DR21 were successfully reduced to high quality spectra, followed by SPIRE observations of M66 and M74. A fast turn-around cycle between data retrieval and the production of science-ready products was demonstrated during the Herschel Science Demonstration Phase Initial Results Workshop held 7 months after launch, a clear proof that the system has reached a good level of maturity.

[ascl:1507.008] HLINOP: Hydrogen LINe OPacity in stellar atmospheres

HLINOP is a collection of codes for computing hydrogen line profiles and opacities in the conditions typical of stellar atmospheres. It includes HLINOP for approximate quick calculation of any line of neutral hydrogen (suitable for model atmosphere calculations), based on the Fortran code of Kurucz and Peterson found in ATLAS9. It also includes HLINPROF, for detailed, accurate calculation of lower Balmer line profiles (suitable for detailed analysis of Balmer lines) and HBOP, to implement the occupation probability formalism of Daeppen, Anderson and Mihalas (1987) and thus account for the merging of bound-bound and bound-free opacity (used often as a wrapper to HLINOP for model atmosphere calculations).

[ascl:1508.001] HMcode: Halo-model matter power spectrum computation

HMcode computes the halo-model matter power spectrum. It is written in Fortran90 and has been designed to quickly (~0.5s for 200 k-values across 16 redshifts on a single core) produce matter spectra for a wide range of cosmological models. In testing it was shown to match spectra produced by the 'Coyote Emulator' to an accuracy of 5 per cent for k less than 10h Mpc^-1. However, it can also produce spectra well outside of the parameter space of the emulator.

[ascl:1412.006] HMF: Halo Mass Function calculator

HMF calculates the Halo Mass Function (HMF) given any set of cosmological parameters and fitting function and serves as the backend for the web application HMFcalc. Written in Python, it allows for dynamic accurate calculation of the transfer function with CAMB (ascl:1102.026) and efficient and self-consistent parameter updates. HMF offers exploration of the effects of cosmological parameters, redshift and fitting function on the predicted HMF.
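
A short sketch with the hmf package that backs HMFcalc (assuming hmf version 3 or later, where the fitting function is selected with the hmf_model keyword):

```python
from hmf import MassFunction

mf = MassFunction(z=0.0, Mmin=10, Mmax=15)    # log10 mass limits in Msun/h
masses, dndm = mf.m, mf.dndm                  # dn/dm on the mass grid
mf.update(z=1.0, hmf_model="Tinker08")        # self-consistent update
```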

[ascl:1201.010] HNBody: Hierarchical N-Body Symplectic Integration Package

HNBody is a new set of software utilities geared to the integration of hierarchical (nearly-Keplerian) N-body systems. Our focus is on symplectic methods, and we have included explicit support for three classes of particles (heavy, light, and massless), second and fourth order methods, post-Newtonian corrections, and the use of a symplectic corrector (among other things). For testing purposes, we also provide support for more general integration schemes (Bulirsch-Stoer & Runge-Kutta). Configuration files employing an intuitive syntax allow for easy problem setup, and many simple simulations can be done without the user compiling any code. Low-level interfaces are also available, enabling extensive customization.

[ascl:1711.013] HO-CHUNK: Radiation Transfer code

HO-CHUNK calculates the radiative equilibrium temperature solution, thermal and PAH/vsg emission, scattering, and polarization in protostellar geometries. It is useful for computing spectral energy distributions (SEDs), polarization spectra, and images.

[ascl:1102.019] HOP: A Group-finding Algorithm for N-body Simulations

We describe a new method (HOP) for identifying groups of particles in N-body simulations. Having assigned to every particle an estimate of its local density, we associate each particle with the densest of the Nh particles nearest to it. Repeating this process allows us to trace a path, within the particle set itself, from each particle in the direction of increasing density. The path ends when it reaches a particle that is its own densest neighbor; all particles reaching the same such particle are identified as a group. Combined with an adaptive smoothing kernel for finding the densities, this method is spatially adaptive, coordinate-free, and numerically straight-forward. One can proceed to process the output by truncating groups at a particular density contour and combining groups that share a (possibly different) density contour. While the resulting algorithm has several user-chosen parameters, we show that the results are insensitive to most of these, the exception being the outer density cutoff of the groups.
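
The association step translates almost line-for-line into array operations; a schematic with SciPy (not the original C code), omitting the density estimation and the final density-contour regrouping:

```python
import numpy as np
from scipy.spatial import cKDTree

def hop_groups(pos, density, Nh=16):
    tree = cKDTree(pos)
    _, neigh = tree.query(pos, k=Nh)    # each row includes the particle itself
    # Each particle points at the densest of its Nh nearest neighbours.
    hop = neigh[np.arange(len(pos)), np.argmax(density[neigh], axis=1)]
    # Follow pointers until every particle reaches a self-pointing maximum.
    group = hop.copy()
    while True:
        nxt = hop[group]
        if np.array_equal(nxt, group):
            break
        group = nxt
    return group    # particles sharing a maximum form one group
```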

[ascl:1411.005] HOPE: Just-in-time Python compiler for astrophysical computations

HOPE is a specialized Python just-in-time (JIT) compiler designed for numerical astrophysical applications. HOPE focuses on a subset of the language and is able to translate Python code into C++ while performing numerical optimization on mathematical expressions at runtime. To enable the JIT compilation, the user only needs to add a decorator to the function definition. By using HOPE, the user benefits from being able to write common numerical code in Python while getting the performance of compiled implementation.
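
The usage pattern, as described in the HOPE documentation, is a single decorator; the decorated function here is illustrative:

```python
import numpy as np
import hope

@hope.jit
def speed_squared(vx, vy, out):
    for i in range(vx.shape[0]):
        out[i] = vx[i] * vx[i] + vy[i] * vy[i]

vx, vy = np.random.rand(1000), np.random.rand(1000)
out = np.empty(1000)
speed_squared(vx, vy, out)    # first call triggers translation to C++
```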

[ascl:1504.004] HOTPANTS: High Order Transform of PSF ANd Template Subtraction

HOTPANTS (High Order Transform of PSF ANd Template Subtraction) implements the Alard 1999 algorithm for image subtraction. It photometrically aligns one input image with another after they have been astrometrically aligned.

[ascl:1702.008] HOURS: Simulation and analysis software for the KM3NeT

The Hellenic Open University Reconstruction & Simulation (HOURS) software package contains a realistic simulation package of the detector response of very large (km3-scale) underwater neutrino telescopes, including an accurate description of all the relevant physical processes, the production of signal and background as well as several analysis strategies for triggering and pattern recognition, event reconstruction, tracking and energy estimation. HOURS also provides tools for simulating calibration techniques and other studies for estimating the detector sensitivity to several neutrino sources.

[ascl:1707.001] HRM: HII Region Models

HII Region Models fits HII region models to observed radio recombination line and radio continuum data. The algorithm includes the calculations of departure coefficients to correct for non-LTE effects. HII Region Models has been used to model star formation in the nucleus of IC 342.

[ascl:1412.008] Hrothgar: MCMC model fitting toolkit

Hrothgar is a parallel minimizer and Markov Chain Monte Carlo generator. It has been used to solve optimization problems in astrophysics (galaxy cluster mass profiles) as well as in experimental particle physics (hadronic tau decays).

[ascl:1511.014] HumVI: Human Viewable Image creation

HumVI creates a composite color image from sets of input FITS files, following the Lupton et al. (2004, ascl:1511.013) composition algorithm. Written in Python, it takes three FITS files as input and returns a color composite, color-saturated png image with an arcsinh stretch. HumVI reads the zero points out of the FITS headers and uses them to put all the images on the same flux scale; photometrically calibrated images produce the best results.
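
The heart of the Lupton et al. composition is a shared arcsinh stretch applied to all three channels. A minimal NumPy sketch (HumVI's actual implementation adds zero-point scaling and saturation handling):

```python
import numpy as np

def arcsinh_rgb(r, g, b, Q=1.0, alpha=0.1):
    I = (r + g + b) / 3.0 + 1e-12                  # mean intensity per pixel
    stretch = np.arcsinh(alpha * Q * I) / (Q * I)  # same stretch for all bands
    rgb = np.stack([r, g, b], axis=-1) * stretch[..., None]
    return np.clip(rgb, 0.0, 1.0)
```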

[ascl:1103.010] Hydra: A Parallel Adaptive Grid Code

We describe the first parallel implementation of an adaptive particle-particle, particle-mesh code with smoothed particle hydrodynamics. Parallelisation of the serial code, "Hydra," is achieved by using CRAFT, a Cray proprietary language which allows rapid implementation of a serial code on a parallel machine by allowing global addressing of distributed memory.

The collisionless variant of the code has already completed several 16.8 million particle cosmological simulations on a 128 processor Cray T3D whilst the full hydrodynamic code has completed several 4.2 million particle combined gas and dark matter runs. The efficiency of the code now allows parameter-space explorations to be performed routinely using 64^3 particles of each species. A complete run including gas cooling, from high redshift to the present epoch, requires approximately 10 hours on 64 processors.

[ascl:1402.023] HydraLens: Gravitational lens model generator

HydraLens generates gravitational lens model files for Lenstool, PixeLens, glafic and Lensmodel and can also translate lens model files among these four lens model codes. Through a GUI, the user enters a new model by specifying the type of model and is then led through screens to collect the data. Written in MS Visual Basic, the code can also translate an existing model from any of the four supported codes to any of the other three.

[ascl:1601.002] Hyper-Fit: Fitting routines for multidimensional data with multivariate Gaussian uncertainties

The R package Hyper-Fit fits hyperplanes (hyper.fit) and creates 2D/3D visualizations (hyper.plot2d / hyper.plot3d). It produces robust 1D linear fits for 2D x vs y type data and robust 2D plane fits to 3D x vs y vs z type data; the hyperplane fitting works generically for any N-1 dimensional hyperplane model being fit to an N-dimensional dataset. All fits include intrinsic scatter in the generative model orthogonal to the hyperplane. A web interface for online fitting is also available at http://hyperfit.icrar.org.

[ascl:1207.004] Hyperion: Parallelized 3D Dust Continuum Radiative Transfer Code

Hyperion is a three-dimensional dust continuum Monte-Carlo radiative transfer code that is designed to be as generic as possible, allowing radiative transfer to be computed through a variety of three-dimensional grids. The main part of the code is problem-independent, and only requires an arbitrary three-dimensional density structure, dust properties, the position and properties of the illuminating sources, and parameters controlling the running and output of the code. Hyperion is parallelized, and is shown to scale well to thousands of processes. Two common benchmark models for protoplanetary disks were computed, and the results are found to be in excellent agreement with those from other codes. Finally, to demonstrate the capabilities of the code, dust temperatures, SEDs, and synthetic multi-wavelength images were computed for a dynamical simulation of a low-mass star formation region.

[ascl:1108.010] Hyperz: Photometric Redshift Code

From a photometric catalogue, hyperz finds the redshift of each object by means of a standard SED fitting procedure, i.e. comparing the observed magnitudes with the expected ones, computed from template Spectral Energy Distributions. The set of templates used in the minimization procedure (age, metallicity, reddening, absorption in the Lyman forest, ...) is studied in detail, through both real and simulated data. The expected accuracy of photometric redshifts, as well as the fraction of catastrophic identifications and wrong detections, is given as a function of the redshift range, the set of filters considered, and the photometric accuracy. Special attention is paid to the results expected from real data.

[ascl:1011.023] HyRec: A Fast and Highly Accurate Primordial Hydrogen and Helium Recombination Code

We present a state-of-the-art primordial recombination code, HyRec, including all the physical effects that have been shown to significantly affect recombination. The computation of helium recombination includes simple analytic treatments of hydrogen continuum opacity in the He I 2 1P - 1 1S line, the He I] 2 3P - 1 1S line, and treats feedback between these lines within the on-the-spot approximation. Hydrogen recombination is computed using the effective multilevel atom method, virtually accounting for an infinite number of excited states. We account for two-photon transitions from 2s and higher levels as well as frequency diffusion in Lyman-alpha with a full radiative transfer calculation. We present a new method to evolve the radiation field simultaneously with the level populations and the free electron fraction. These computations are sped up by taking advantage of the particular sparseness pattern of the equations describing the radiative transfer. The computation time for a full recombination history is ~2 seconds. This makes our code well suited for inclusion in Monte Carlo Markov chains for cosmological parameter estimation from upcoming high-precision cosmic microwave background anisotropy measurements.

[ascl:1302.009] IAS Stacking Library in IDL

This IDL library is designed to be used on astronomical images. Its main aim is to stack data to allow a statistical detection of faint signal, using a prior. For instance, you can stack 160um data using the positions of galaxies detected at 24um or 3.6um, or use WMAP sources to stack Planck data. It can estimate error bars using bootstrap, and it can perform photometry (aperture photometry, PSF fitting, or other methods that you can plug in). The IAS Stacking Library works with gnomonic projections (RA---TAN), and also with HEALPIX projection.
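
In outline, prior-based stacking with bootstrap errors amounts to the following (a Python schematic of what the IDL library automates; WCS handling is omitted and sources are assumed to lie away from the image edges):

```python
import numpy as np

def stack_at_priors(image, pix_xy, half=10, n_boot=200, seed=0):
    rng = np.random.default_rng(seed)
    cutouts = np.array([image[y - half:y + half + 1, x - half:x + half + 1]
                        for x, y in pix_xy])
    mean_stack = cutouts.mean(axis=0)
    # Bootstrap over the source list to estimate an uncertainty map.
    boots = np.array([cutouts[rng.integers(0, len(cutouts), len(cutouts))]
                      .mean(axis=0) for _ in range(n_boot)])
    return mean_stack, boots.std(axis=0)
```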

[ascl:1611.018] Icarus: Stellar binary light curve synthesis tool

Icarus is a stellar binary light curve synthesis tool that generates a star, given some basic binary parameters, by solving the gravitational potential equation, creating a discretized stellar grid, and populating the stellar grid with physical parameters, including temperature and surface gravity. Icarus also evaluates the outcoming flux from the star given an observer's point of view (i.e., orbital phase and orbital orientation).

[ascl:1703.012] ICICLE: Initial Conditions for Isolated CoLlisionless systEms

ICICLE (Initial Conditions for Isolated CoLlisionless systEms) generates stable initial conditions for isolated collisionless systems that can then be used in NBody simulations. It supports the Navarro-Frenk-White, Hernquist, King and Einasto density profiles.

[ascl:1302.010] ICORE: Image Co-addition with Optional Resolution Enhancement

ICORE is a command-line driven co-addition, mosaicking, and resolution enhancement (HiRes) tool for creating science quality products from image data in FITS format and with World Coordinate System information following the FITS-WCS standard. It includes preparatory steps such as image background matching, photometric gain-matching, and pixel-outlier rejection. Co-addition and/or HiRes'ing can be performed in either the inertial WCS or in the rest frame of a moving object. Three interpolation methods are supported: overlap-area weighting, drizzle, and weighting by the detector Point Response Function (PRF). The latter enables the creation of matched-filtered products for optimal point-source detection, but most importantly allows for resolution enhancement using a spatially-dependent deconvolution method. This is a variant of the classic Richardson-Lucy algorithm with the added benefit to simultaneously register and co-add multiple images to optimize signal-to-noise and sampling of the instrumental PSF. It can assume real (or otherwise "flat") image priors, mitigate "ringing" artifacts, and assess the quality of image solutions using statistically-motivated convergence criteria. Uncertainties are also estimated and internally validated for all products. The software supports multithreading that can be configured for different architectures. Numerous example scripts are included (with test data) to co-add and/or HiRes image data from Spitzer-IRAC/MIPS, WISE, and Herschel-SPIRE.
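
Classic Richardson-Lucy deconvolution, the starting point for ICORE's spatially-dependent multi-frame variant, fits in a few lines (single image and single PSF here):

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(image, psf, n_iter=25):
    image = image.astype(float)
    estimate = np.full_like(image, image.mean())   # flat first guess
    psf_mirror = psf[::-1, ::-1]
    for _ in range(n_iter):
        model = fftconvolve(estimate, psf, mode='same')
        ratio = image / np.maximum(model, 1e-12)
        estimate *= fftconvolve(ratio, psf_mirror, mode='same')
    return estimate
```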

[ascl:9905.002] ICOSAHEDRON: A package for pixelizing the sphere

What is the best way to pixelize a sphere? This question occurs in many practical applications, for instance when making maps (of the earth or the celestial sphere) and when doing numerical integrals over the sphere. This package consists of source code and documentation for a method which involves inscribing the sphere in a regular icosahedron and then equalizing the pixel areas.

[ascl:1010.034] iCosmo: An Interactive Cosmology Package

iCosmo is a software package to perform interactive cosmological calculations for the low redshift universe. The computation of distance measures, the matter power spectrum, and the growth factor is supported for any values of the cosmological parameters. It also performs the computation of observables for several cosmological probes such as weak gravitational lensing, baryon acoustic oscillations and supernovae. The associated errors for these observables can be derived for customised surveys, or for pre-set values corresponding to current or planned instruments. The code also allows for the calculation of cosmological forecasts with Fisher matrices which can be manipulated to combine different surveys and cosmological probes. The code is written in the IDL language and thus benefits from the convenient interactive features and scientific library available in this language. iCosmo can also be used as an engine to perform cosmological calculations in batch mode, and forms a convenient evolutive platform for the development of further cosmological modules. With its extensive documentation, it may also serve as a useful resource for teaching and for newcomers in the field of cosmology.

[ascl:1903.007] ICSF: Intensity Conserving Spectral Fitting

ICSF (Intensity Conserving Spectral Fitting) "corrects" (x,y) data in which the ordinate represents the average of a quantity over a finite interval in the abscissa. A typical example is spectral data, where the average intensity over a wavelength bin (the measured quantity) is assigned to the center of the bin. If the profile is curved, the average will be different from the discrete value at the bin center location. ICSF, written in IDL and available separately and as part of SolarSoft (ascl:1208.013), corrects the intensity using an iterative procedure and cubic spline. The corrected intensity equals the "true" intensity at bin center, rather than the average over the bin. Unlike other methods that are restricted to a single fitting function, typically a spline, ICSF can be used with any function, such as a cubic spline or a Gaussian, with slight changes to the code.
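
The idea fits in a few lines: iteratively adjust the bin-center values until the bin averages of a smooth interpolant reproduce the measured averages. This simplified sketch uses a cubic spline; the IDL original differs in detail:

```python
import numpy as np
from scipy.interpolate import CubicSpline

def intensity_conserving(x, y_avg, width, n_iter=10, n_sub=11):
    y = y_avg.astype(float).copy()
    sub = np.linspace(-0.5, 0.5, n_sub)
    for _ in range(n_iter):
        spline = CubicSpline(x, y)
        # Average the current spline over each bin...
        bin_avg = spline(x[:, None] + width * sub[None, :]).mean(axis=1)
        # ...and shift the bin-center values by the residual.
        y += y_avg - bin_avg
    return y    # values at bin centers, not bin averages
```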

[ascl:1411.009] iDealCam: Interactive Data Reduction and Analysis for CanariCam

iDealCam is an IDL GUI toolkit for processing multi-extension FITS files produced by CanariCam, the facility mid-IR instrument of Gran Telescopio CANARIAS (GTC). iDealCam is optimized for CanariCam data, but is also compatible with data generated by other instruments using similar detectors and data format (e.g., Michelle and T-ReCS at Gemini). iDealCam provides essential capabilities to examine, reduce, and analyze data obtained in the standard imaging or polarimetric imaging mode of CanariCam.

[ascl:1011.001] Identikit 1: A Modeling Tool for Interacting Disk Galaxies

By combining test-particle and self-consistent techniques, we have developed a method to rapidly explore the parameter space of galactic encounters. Our method, implemented in an interactive graphics program, can be used to find the parameters required to reproduce the observed morphology and kinematics of interacting disk galaxies. We test this system on an artificial data-set of 36 equal-mass merging encounters, and show that it is usually possible to reproduce the morphology and kinematics of these encounters and that a good match strongly constrains the encounter parameters.

[ascl:1102.011] Identikit 2: An Algorithm for Reconstructing Galactic Collisions

Using a combination of self-consistent and test-particle techniques, Identikit 1 provided a way to vary the initial geometry of a galactic collision and instantly visualize the outcome. Identikit 2 uses the same techniques to define a mapping from the current morphology and kinematics of a tidal encounter back to the initial conditions. By requiring that various regions along a tidal feature all originate from a single disc with a unique orientation, this mapping can be used to derive the initial collision geometry. In addition, Identikit 2 offers a robust way to measure how well a particular model reproduces the morphology and kinematics of a pair of interacting galaxies. A set of eight self-consistent simulations is used to demonstrate the algorithm's ability to search a ten-dimensional parameter space and find near-optimal matches; all eight systems are successfully reconstructed.

[ascl:1303.013] idistort: CMB spectral distortions templates and code

Energy release in the early Universe, before recombination, creates spectral distortions which are a superposition of μ-type, y-type and intermediate-type distortions. The final spectrum can thus be constructed from templates, once the energy injection rate as a function of redshift is known. This package contains the templates, spaced at dy=0.001 for y<1 and dy=0.01 for y>1, covering the range 0.001 < y < 10. Also included is a Mathematica code which can combine these templates for a user-defined rate of energy injection as a function of redshift. Silk damping, particle decay and annihilation examples are also included.

[ascl:1507.020] IEHI: Ionization Equilibrium for Heavy Ions

IEHI, written in Fortran, outputs a simple "coronal" ionization equilibrium (i.e., collisional ionization and auto-ionization balanced by radiative and dielectronic recombination) for a plasma at a given electron temperature.

[ascl:1304.019] IFrIT: Ionization FRont Interactive Tool

IFrIT (Ionization FRont Interactive Tool) is a powerful general purpose visualization tool that can be used to visualize 3-dimensional data sets. IFrIT is written in C++ and is based on the Visualization ToolKit (VTK) and, optionally, uses a GUI toolkit Qt. IFrIT can visualize scalar, vector field, tensor, and particle data. Several visualization windows can exist at the same time, each one having a full set of visualization objects. Some visualization windows can share the data between them, while other windows can be fully independent. Images from several visualization windows can be combined into one image file on the disk, tiling some windows together, and inserting reduced versions of some windows into larger other windows. A large array of features is also available, including highly advanced animation capabilities, a complex set of lights, markers to label various points in space, and a capability to "pick" a point in the scene and retrieve information about the data at this location.

[ascl:1409.005] IFSFIT: Spectral Fitting for Integral Field Spectrographs

IFSFIT is a general-purpose IDL library for fitting the continuum, emission lines, and absorption lines in integral field spectra. It uses PPXF (ascl:1210.002) to find the best fit stellar continuum (using a user-defined library of stellar templates and including additive polynomials), or optionally a user-defined method to find the best fit continuum. It uses MPFIT (ascl:1208.019) to simultaneously fit Gaussians to any number of emission lines and emission line velocity components. It will also fit the NaI D feature using analytic absorption and/or emission-line profiles.

[ascl:1409.004] IFSRED: Data Reduction for Integral Field Spectrographs

IFSRED is a general-purpose library for reducing data from integral field spectrographs (IFSs). For a general IFS data cube, it contains IDL routines to: (1) find and apply a zero-point shift in a wavelength solution on a spaxel-by-spaxel basis, using sky lines; (2) find the spatial coordinates of a flux peak; (3) empirically correct for differential atmospheric refraction; (4) mosaic dithered exposures; (5) (integer) rebin; and (6) apply a telluric correction. A sky-subtraction routine for data from the Gemini Multi-Object Spectrograph and Imager (GMOS) that can be easily modified for any instrument is also included. IFSRED also contains additional software specific to reducing data from GMOS and the Gemini Near-Infrared Integral Field Spectrograph (NIFS).

[ascl:1110.003] iGalFit: An Interactive Tool for GalFit

We present a suite of IDL routines to interactively run GALFIT, whereby the various surface brightness profiles (and their associated parameters) are represented by regions which the user is expected to place. The regions may be saved and/or loaded from the ASCII format used by ds9 or in the Hierarchical Data Format (version 5). The software has been tested to run stably on Mac OS X and Linux with IDL 7.0.4. In addition to its primary purpose of modeling galaxy images with GALFIT, this package has several ancillary uses, including flexible image display routines, several basic photometry functions, and qualitative assessment of Source Extractor. We distribute the package freely and without any implicit or explicit warranties, guarantees, or assurance of any kind. We kindly ask users to report any bugs, errors, or suggestions to us directly (as opposed to fixing them themselves) to ensure version control and uniformity.

[ascl:1101.003] IGMtransfer: Intergalactic Radiative Transfer Code

This document describes the publicly available numerical code "IGMtransfer", capable of performing intergalactic radiative transfer (RT) of light in the vicinity of the Lyman alpha (Lya) line. Calculating the RT in a (possibly adaptively refined) grid of cells resulting from a cosmological simulation, the code returns 1) a "transmission function", showing how the intergalactic medium (IGM) affects the Lya line at a given redshift, and 2) the "average transmission" of the IGM, making it useful for studying the results of reionization simulations.

[ascl:1504.015] IGMtransmission: Transmission curve computation

IGMtransmission is a Java graphical user interface that implements Monte Carlo simulations to compute the corrections to colors of high-redshift galaxies due to intergalactic attenuation based on current models of the Intergalactic Medium. The effects of absorption due to neutral hydrogen are considered, with particular attention to the stochastic effects of Lyman Limit Systems. Attenuation curves are produced, as well as colors for a wide range of filter responses and model galaxy spectra. Photometric filters are included for the Hubble Space Telescope, the Keck telescope, the Mt. Palomar 200-inch, the SUBARU telescope and UKIRT; alternative filter response curves and spectra may be readily uploaded.

[ascl:1408.009] IIPImage: Large-image visualization

IIPImage is an advanced high-performance feature-rich image server system that enables online access to full resolution floating point (as well as other bit depth) images at terabyte scales. Paired with the VisiOmatic (ascl:1408.010) celestial image viewer, the system can comfortably handle gigapixel size images as well as advanced image features such as 8-, 16- and 32-bit depths, CIELAB colorimetric images and scientific imagery such as multispectral images. Streaming is tile-based, which enables viewing, navigating and zooming in real-time around gigapixel size images. Source images can be in either TIFF or JPEG2000 format. Whole images or regions within images can also be rapidly and dynamically resized and exported by the server from a single source image without the need to store multiple files in various sizes.

[ascl:1307.006] im2shape: Bayesian Galaxy Shape Estimation

im2shape is a Bayesian approach to the problem of accurate measurement of galaxy ellipticities for weak lensing studies, in particular cosmic shear. im2shape parameterizes galaxies as sums of Gaussians, convolved with a PSF which is also a sum of Gaussians. The uncertainties in the output parameters are calculated using a Markov Chain Monte Carlo approach.

[ascl:1409.013] IM3SHAPE: Maximum likelihood galaxy shear measurement code for cosmic gravitational lensing

Im3shape forward-fits a galaxy model to each data image it is supplied with and reports the parameters of the best fitting model, including the ellipticity components. It uses the Discrete Fourier Transform (DFT) to render images of convolved galaxy profiles, calculates the maximum likelihood parameter values, and corrects for noise bias. IM3SHAPE is a modular C code with a significant amount of Python glue code to enable setting up new models and their options automatically.

[ascl:1206.014] ImageHealth: Quality Assurance for Large FITS Images

ImageHealth (IH) is a C program that makes use of standard CFITSIO routines to examine, in an automated fashion, FITS images with any number of extensions, find objects within those images, and determine basic parameters of those images (stellar flux, background counts, FWHM, and ellipticity, along with sky background counts) in order to provide a snapshot of the quality of those images. A variety of Python wrappers have also been written to test large numbers of such images and compare the results of ImageHealth to other image analysis programs, such as SourceExtractor. Additional IH-related tools will be made available in the future.

Efforts are now focused on an implementation of IH specifically for the Dark Energy Camera; we do not envision providing support for the instrument-independent version of the code offered here, though comments, questions, and feedback are welcome.

[ascl:1206.013] ImageJ: Image processing and analysis in Java

ImageJ is a public domain Java image processing program inspired by NIH Image. It can display, edit, analyze, process, save and print 8-bit, 16-bit and 32-bit images. It can read many image formats including TIFF, GIF, JPEG, BMP, DICOM, FITS and "raw". It supports "stacks", a series of images that share a single window. It is multithreaded, so time-consuming operations such as image file reading can be performed in parallel with other operations.

[ascl:1803.007] IMAGINE: Interstellar MAGnetic field INference Engine

IMAGINE (Interstellar MAGnetic field INference Engine) performs inference on generic parametric models of the Galaxy. The modular open source framework uses highly optimized tools and technology such as the MultiNest sampler (ascl:1109.006) and the information field theory framework NIFTy (ascl:1302.013) to create an instance of the Milky Way based on a set of parameters for physical observables, using Bayesian statistics to judge the mismatch between measured data and model prediction. The flexibility of the IMAGINE framework allows for simple refitting for newly available data sets and makes state-of-the-art Bayesian methods easily accessible particularly for random components of the Galactic magnetic field.

[ascl:1108.001] IMCAT: Image and Catalogue Manipulation Software

The IMCAT software was developed initially to do faint galaxy photometry for weak lensing studies, and provides a fairly complete set of tools for this kind of work. Unlike most packages for doing data analysis, the tools are standalone unix commands which you can invoke from the shell, via shell scripts or from perl scripts. The tools are arranged in a tree of directories. One main branch is the 'imtools'. These deal only with fits files. The most important imtool is the 'image calculator' 'ic' which allows one to do rather general operations on fits images. A second branch is the 'catools' which operate only on catalogues. The key catool is 'lc'; this effectively defines the format of IMCAT catalogues, and allows one to do very general operations on and filtering of such catalogues. A third branch is the 'imcattools'. These tend to be much more specialised than the imtools and catools, and are focussed on faint galaxy photometry.

[ascl:1312.003] IMCOM: IMage COMbination

IMCOM allows for careful treatment of aliasing in undersampled imaging data and can be used to test the feasibility of multi-exposure observing strategies for space-based survey missions. IMCOM can also be used to explore focal plane undersampling for an optical space mission such as Euclid.

[ascl:1408.001] Imfit: A Fast, Flexible Program for Astronomical Image Fitting

Imfit is an open-source astronomical image-fitting program specialized for galaxies but potentially useful for other sources, which is fast, flexible, and highly extensible. Its object-oriented design allows new types of image components (2D surface-brightness functions) to be easily written and added to the program. Image functions provided with Imfit include Sersic, exponential, and Gaussian profiles for galaxy decompositions, along with Core-Sersic and broken-exponential profiles, elliptical rings, and three components that perform line-of-sight integration through 3D luminosity-density models of disks and rings seen at arbitrary inclinations.

Available minimization algorithms include Levenberg-Marquardt, Nelder-Mead simplex, and Differential Evolution, allowing trade-offs between speed and decreased sensitivity to local minima in the fit landscape. Minimization can be done using the standard chi^2 statistic (using either data or model values to estimate per-pixel Gaussian errors, or else user-supplied error images) or the Cash statistic; the latter is particularly appropriate for cases of Poisson data in the low-count regime.

The C++ source code for Imfit is available under the GNU General Public License.

[ascl:1804.014] IMNN: Information Maximizing Neural Networks

This software trains artificial neural networks to find non-linear functionals of data that maximize Fisher information: information maximizing neural networks (IMNNs). Compressing large data sets vastly simplifies both frequentist and Bayesian inference, but important information may be inadvertently lost in the compression. Likelihood-free inference based on automatically derived IMNN summaries produces summaries that are good approximations to sufficient statistics. IMNNs are robustly capable of automatically finding optimal, non-linear summaries of the data even in cases where linear compression fails: inferring the variance of Gaussian signal in the presence of noise, inferring cosmological parameters from mock simulations of the Lyman-α forest in quasar spectra, and inferring frequency-domain parameters from LISA-like detections of gravitational waveforms. In this final case, the IMNN summary outperforms linear data compression by avoiding the introduction of spurious likelihood maxima.

[ascl:1601.013] ImpactModel: Black Hole Accretion Disk Impact Model

ImpactModel, written in Cython, computes the accretion disc impact spectrum at given frequencies and can compute other model quantities as a function of time.

[ascl:1808.004] ImPlaneIA: Image Plane Approach to Interferometric Analysis

Aperture masking interferometric data analysis involves measuring phases and amplitudes of fringes formed by interference between holes in the pupil mask. These fringe observables can be measured by computing an analytic model of the point spread function and fitting the relevant set of spatial frequencies directly in the image plane, without recourse to numerical Fourier transforms. The ImPlaneIA pipeline converts aperture masking images to fringe observables by fitting fringes in the image plane, calibrates data from a target of interest with one or more point source calibrators, and contains some basic model-fitting routines. The pipeline can accept different mask geometries, instruments, and observing modes.

[ascl:1010.046] indexf: Line-strength Indices in Fully Calibrated FITS Spectra

This program measures line-strength indices in fully calibrated FITS spectra. By "fully calibrated" one should understand wavelength and relative flux-calibrated data. Note that the different types of line-strength indices that can be measured with indexf do not require absolute flux calibration. If even a relative flux calibration is absent (or deficient), the derived indices should be transformed to an appropriate spectrophotometric system. The program can also compute index errors resulting from the propagation of random errors (e.g., photon statistics, read-out noise). This option is only available if the user provides the error spectrum as an additional input FITS file to indexf. The error spectrum must contain the unbiased standard deviation (and not the variance!) for each pixel of the data spectrum. In addition, indexf estimates the effect of radial velocity errors on the measured indices. For this purpose, the program performs Monte Carlo simulations by measuring each index using randomly drawn radial velocities (following a Gaussian distribution of a given standard deviation). If no error file is employed, the program can perform numerical simulations with synthetic error spectra, the latter generated from the original data spectra and assuming randomly generated S/N ratios.

[ascl:1806.005] Indri: Pulsar population synthesis toolset

Indri models the population of single (not in binary or hierarchical systems) neutron stars. Given a starting distribution of parameters (birth place, velocity, magnetic field, and period), the code moves a set of stars through time (by evolving spin period and magnetic field) and space (by propagating them through the Galactic potential). Upon completion of the evolution, a set of observables is computed (radio flux, position, dispersion measure) and compared with a radio survey such as the Parkes Multibeam Survey. The models' parameters are optimised using the Markov chain Monte Carlo technique.

[ascl:1210.023] inf_solv: Kerr inflow solver

The efficiency of thin disk accretion onto black holes depends on the inner boundary condition, specifically the torque applied to the disk at the last stable orbit. This is usually assumed to vanish. This code estimates the torque on a magnetized disk using a steady magnetohydrodynamic inflow model originally developed by Takahashi et al. The efficiency e can depart significantly from the classical thin disk value. In some cases e > 1, i.e., energy is extracted from the black hole.

[ascl:1007.002] INFALL: A code for calculating the mean initial and final density profiles around a virialized dark matter halo

Infall is a code for calculating the mean initial and final density profiles around a virialized dark matter halo. The initial profile is derived from the statistics of the initial Gaussian random field, accounting for the problem of peaks within peaks using the extended Press-Schechter model. Spherical collapse then yields the typical density and velocity profiles of the gas and dark matter that surrounds the final, virialized halo. In addition to the mean profile, ±1-σ profiles are calculated and can be used as an estimate of the scatter.

[ascl:1201.017] Inflation: Monte-Carlo Code for Slow-Roll Inflation

Inflation is a numerical code to generate power spectra and other observables through numerical solutions to flow equations. The code generates tensor and scalar power spectra as a function of wavenumber and various other parameters at specific wavenumbers of interest (such as for CMB, scalar perturbations at smaller scales, gravitational wave detection at direct detection frequencies). The output can be easily ported to publicly available Markov Chain codes to constrain cosmological parameters with data.

[ascl:1711.002] inhomog: Biscale kinematical backreaction analytical evolution

The inhomog library provides Raychaudhuri integration of cosmological domain-wise average scale factor evolution using an analytical formula for kinematical backreaction Q_D evolution. The inhomog main program illustrates biscale examples. The library routine lib/Omega_D_precalc.c is callable by RAMSES (ascl:1011.007) using the RAMSES extension ramses-scalav.

[ascl:1801.005] InitialConditions: Initial series solutions for perturbations in our Universe

InitialConditions finds the initial series solutions for perturbations in our Universe. This includes all scalar (1 adiabatic, 4 isocurvature and 2 magnetic modes), vector (1 vorticity mode, 1 magnetic mode), and tensor (1 gravitational wave mode and 1 magnetic mode) perturbations including terms up to second order in the neutrino mass. It can handle the standard species (cdm, baryons, photons), and two neutrino mass eigenstates (1 light, 1 heavy).

[ascl:1101.004] InterpMC: Caching and Interpolated Likelihoods -- Accelerating Cosmological Monte Carlo Markov Chains

We describe a novel approach to accelerating Monte Carlo Markov Chains. Our focus is cosmological parameter estimation, but the algorithm is applicable to any problem for which the likelihood surface is a smooth function of the free parameters and computationally expensive to evaluate. We generate a high-order interpolating polynomial for the log-likelihood using the first points gathered by the Markov chains as a training set. This polynomial then accurately computes the majority of the likelihoods needed in the latter parts of the chains. We implement a simple version of this algorithm as a patch (InterpMC) to CosmoMC and show that it accelerates parameter estimation by a factor of between two and four for well-converged chains. The current code is primarily intended as a "proof of concept", and we argue that there is considerable room for further performance gains. Unlike other approaches to accelerating parameter fits, we make no use of precomputed training sets or special choices of variables, and InterpMC is almost entirely transparent to the user.

[ascl:1403.010] Inverse Beta: Inverse cumulative distribution function (CDF) of a Beta distribution

The Beta Inverse code computes the inverse cumulative distribution function (CDF) of a Beta distribution, allowing one to sample from the Beta prior directly. The Beta distribution is well suited as a prior for the distribution of the orbital eccentricities of extrasolar planets; imposing a Beta prior on orbital eccentricity is valuable for any type of observation of an exoplanet where eccentricity can affect the model parameters (e.g., transits, radial velocities, microlensing, direct imaging). The Beta prior is an excellent description of the current, empirically determined distribution of orbital eccentricities and thus employing it naturally incorporates an observer's prior experience of what types of orbits are probable or improbable. The default parameters in the code are currently set to the Beta distribution which best describes the entire population of exoplanets with well-constrained orbits.
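
As a concrete illustration of the technique, the sketch below draws eccentricity samples by inverting the Beta CDF with SciPy; the shape parameters a and b are placeholders, not the code's fitted defaults.

    import numpy as np
    from scipy.stats import beta

    a, b = 1.0, 3.0                      # placeholder Beta shape parameters
    u = np.random.uniform(size=10000)    # uniform deviates in [0, 1)
    ecc = beta.ppf(u, a, b)              # invert the Beta CDF to sample the prior
    print(ecc.mean(), ecc.max())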

[ascl:1612.013] InversionKit: Linear inversions from frequency data

InversionKit is an interactive Java program that performs rotational and structural linear inversions from frequency data.

[ascl:1303.022] ionFR: Ionospheric Faraday rotation

ionFR calculates the amount of ionospheric Faraday rotation for a specific epoch, geographic location, and line-of-sight. The code uses a number of publicly available, GPS-derived total electron content maps and the most recent release of the International Geomagnetic Reference Field. ionFR can be used for the calibration of radio polarimetric observations; its accuracy has been demonstrated using LOFAR pulsar observations.

[ascl:1804.002] ipole: Semianalytic scheme for relativistic polarized radiative transport

ipole is a ray-tracing code for covariant, polarized radiative transport, particularly useful for modeling Event Horizon Telescope sources, though it may also be used for other relativistic transport problems. The code extends the ibothros scheme for covariant, unpolarized transport using two representations of the polarized radiation field: in the coordinate frame, it parallel transports the coherency tensor; in the frame of the plasma, it evolves the Stokes parameters under emission, absorption, and Faraday conversion. The transport step is as spacetime- and coordinate-independent as possible; the emission, absorption, and Faraday conversion step is implemented using an analytic solution to the polarized transport equation with constant coefficients. As a result, ipole is stable, efficient, and produces a physically reasonable solution even for a step with high optical depth and Faraday depth.

[ascl:1512.001] IRACpm: Distortion correction for IRAC astrometric data

The IRACpm R package applies a 7th- to 8th-order distortion correction to IRAC astrometric data from the Spitzer Space Telescope and includes a function for measuring apparent proper motions between different epochs. These corrections are applicable only to positions measured by APEX; cryogenic images benefit from a correction for varying intra-pixel sensitivity prior to the application of the distortion correction.

[ascl:1209.013] IRACproc: IRAC Post-BCD Processing

IRACproc is a software suite that facilitates the co-addition of dithered or mapped Spitzer/IRAC data to make them ready for further analysis with application to a wide variety of IRAC observing programs. The software runs within PDL, a numeric extension for Perl available from pdl.perl.org, and as stand-alone Perl scripts. In acting as a wrapper for the Spitzer Science Center's MOPEX software, IRACproc improves the rejection of cosmic rays and other transients in the co-added data. In addition, IRACproc performs (optional) Point Spread Function (PSF) fitting, subtraction, and masking of saturated stars.

[ascl:9911.002] IRAF: Image Reduction and Analysis Facility

IRAF includes a broad selection of programs for general image processing and graphics, plus a large number of programs for the reduction and analysis of optical and IR astronomy data. Other external or layered packages are available for applications such as data acquisition or handling data from other observatories and wavelength regimes such as the Hubble Space Telescope (optical), EUVE (extreme ultra-violet), or ROSAT and AXAF (X-ray). These external packages are distributed separately from the main IRAF distribution but can be easily installed. The IRAF system also includes a complete programming environment for scientific applications, which includes a programmable Command Language scripting facility, the IMFORT Fortran/C programming interface, and the full SPP/VOS programming environment in which the portable IRAF system and all applications are written.

[ascl:1406.014] IRAS90: IRAS Data Processing

IRAS90 is a suite of programs for processing IRAS data. It takes advantage of Starlink's (ascl:1110.012) ADAM environment, which provides multi-platform availability of both data and the programs to process it, and the user friendly interface of the parameter entry system. The suite can determine positions in astrometric coordinates, draw grids, and offers other functions for standard astronomical measurement and standard projections.

[ascl:1406.015] IRCAMDR: IRCAM3 Data Reduction Software

The UKIRT IRCAM3 data reduction and analysis software package, IRCAMDR (formerly ircam_clred) analyzes and displays any 2D data image stored in the standard Starlink (ascl:1110.012) NDF data format. It reduces and analyzes IRCAM1/2 data images of 62x58 pixels and IRCAM3 images of 256x256 size. Most of the applications will work on NDF images of any physical (pixel) dimensions, for example, 1024x1024 CCD images can be processed.

[ascl:1109.017] IRDR: InfraRed Data Reduction

We describe the InfraRed Data Reduction (IRDR) software package, a small ANSI C library of fast image processing routines for automated pipeline reduction of infrared (dithered) observations. We developed the software to satisfy certain design requirements not met in existing packages (e.g., full weight map handling) and to optimize the software for large data sets (non-interactive tasks that are CPU and disk efficient). The software includes stand-alone C programs for tasks such as running sky frame subtraction with object masking, image registration and coaddition with weight maps, dither offset measurement using cross-correlation, and object mask dilation. Although we currently use the software to process data taken with CIRSI (a near-IR mosaic imager), the software is modular and concise and should be easy to adapt/reuse for other work.

[ascl:1205.007] Iris: The VAO SED Application

Iris is a downloadable Graphical User Interface (GUI) application which allows the astronomer to build and analyze wide-band Spectral Energy Distributions (SEDs). The components of Iris have been contributed by members of the VAO. Specview, contributed by STScI, provides a GUI for reading, editing, and displaying SEDs, as well as defining models and parameter values. Sherpa, contributed by the Chandra project at SAO, provides a library of models, fit statistics, and optimization methods; the underlying I/O library, SEDLib, is a VAO product written by SAO to current IVOA (International Virtual Observatory Alliance) data model standards. NED is a service provided by IPAC for easy location of data for a given extragalactic source, including SEDs. SedImporter converts non-standard SED data files into a format supported by Iris.

[ascl:1602.016] IRSFRINGE: Interactive tool for fringe removal from Spitzer IRS spectra

IRSFRINGE is an IDL-based GUI package that allows observers to interactively remove fringes from IRS spectra. Fringes that originate from the detector substrates are observed in the IRS Short-High (SH) and Long-High (LH) modules. In the Long-Low (LL) module, another fringe component is seen as a result of the pre-launch change in one of the LL filters. The fringes in the Short-Low (SL) module are not spectrally resolved and are already largely removed in the pipeline processing when the flat field is applied. However, this correction is not perfect and remaining fringes can be removed with IRSFRINGE from data in each module. IRSFRINGE is available as a stand-alone package and is also part of the Spectroscopic Modeling, Analysis and Reduction Tool (SMART, ascl:1210.021).

[ascl:1303.029] iSAP: Interactive Sparse Astronomical Data Analysis Packages

iSAP consists of three programs, written in IDL, which together are useful for spherical data analysis. MR/S (MultiResolution on the Sphere) contains routines for wavelet, ridgelet and curvelet transforms on the sphere, and applications such as denoising on the sphere using wavelets and/or curvelets, Gaussianity tests, and Independent Component Analysis on the sphere. MR/S has been designed for the PLANCK project, but can be used for many other applications. SparsePol (Polarized Spherical Wavelets and Curvelets) has routines for polarized wavelet, polarized ridgelet and polarized curvelet transforms on the sphere, and applications such as denoising on the sphere using wavelets and/or curvelets, Gaussianity tests, and blind source separation on the sphere. SparsePol has been designed for the PLANCK project. MS-VSTS (Multi-Scale Variance Stabilizing Transform on the Sphere), designed initially for the FERMI project, is useful for spherical mono-channel and multi-channel data analysis when the data are contaminated by Poisson noise. It contains routines for wavelet/curvelet denoising, wavelet deconvolution, multichannel wavelet denoising and deconvolution.

[ascl:1403.009] ISAP: ISO Spectral Analysis Package

ISAP, written in IDL, simplifies the process of visualizing, subsetting, shifting, rebinning, masking, combining scans with weighted means or medians, filtering, and smoothing Auto Analysis Results (AARs) from post-pipeline processing of the Infrared Space Observatory's (ISO) Short Wavelength Spectrometer (SWS) and Long Wavelength Spectrometer (LWS) data. It can also be applied to PHOT-S and CAM-CVF data, and data from practically any spectrometer. The result of a typical ISAP session is expected to be a "simple spectrum" (single-valued spectrum which may be resampled to a uniform wavelength separation if desired) that can be further analyzed and measured either with other ISAP functions, native IDL functions, or exported to other analysis packages (e.g., IRAF, MIDAS) if desired. ISAP provides many tools for further analysis, line-fitting, and continuum measurements, such as routines for unit conversions, conversions from wavelength space to frequency space, line and continuum fitting, flux measurement, synthetic photometry, and models such as a zodiacal light model to predict and subtract the dominant foreground at some wavelengths.

[ascl:1809.010] Isca: Idealized global circulation modeling

Isca provides a framework for the idealized modeling of the global circulation of planetary atmospheres at varying levels of complexity and realism. Though Isca is an outgrowth of models designed for Earth's atmosphere, it may readily be extended into other planetary regimes. Various forcing and radiation options are available. At the simple end of the spectrum a Held-Suarez case is available. An idealized grey radiation scheme, a grey scheme with moisture feedback, a two-band scheme and a multi-band scheme are also available, all with simple moist effects and astronomically-based solar forcing. At the complex end of the spectrum the framework provides a direct connection to comprehensive atmospheric general circulation models.

[ascl:1708.029] iSEDfit: Bayesian spectral energy distribution modeling of galaxies

iSEDfit uses Bayesian inference to extract the physical properties of galaxies from their observed broadband photometric spectral energy distribution (SED). In its default mode, the inputs to iSEDfit are the measured photometry (fluxes and corresponding inverse variances) and a measurement of the galaxy redshift. Alternatively, iSEDfit can be used to estimate photometric redshifts from the input photometry alone.

After the priors have been specified, iSEDfit calculates the marginalized posterior probability distributions for the physical parameters of interest, including the stellar mass, star-formation rate, dust content, star formation history, and stellar metallicity. iSEDfit also optionally computes K-corrections and produces multiple "quality assurance" (QA) plots at each stage of the modeling procedure to aid in the interpretation of the prior parameter choices and subsequent fitting results. The software is distributed as part of the impro IDL suite.

[submitted] isis_emcee: ISIS-based MCMC Hammer

ISIS_EMCEE is the implementation of Goodman & Weare's affine-invariant Markov chain Monte Carlo (MCMC) ensemble sampler in the Interactive Spectral Interpretation System (ISIS; ascl:1302.002), based upon the parallel "simple stretch" method from the MCMC Hammer (emcee; ascl:1303.002).
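
For reference, a minimal Python sketch of the Goodman & Weare "simple stretch" move referenced above (the S-Lang/ISIS implementation differs in detail); log_prob and the walkers array stand in for the user's model:

    import numpy as np

    def stretch_move(walkers, log_prob, a=2.0, rng=None):
        """One stretch update over an ensemble of walkers of shape (n, ndim)."""
        if rng is None:
            rng = np.random.default_rng()
        n, ndim = walkers.shape
        for k in range(n):
            j = rng.integers(n - 1)
            if j >= k:                                      # complementary walker, j != k
                j += 1
            z = ((a - 1.0) * rng.random() + 1.0) ** 2 / a   # z ~ g(z), g proportional to 1/sqrt(z)
            y = walkers[j] + z * (walkers[k] - walkers[j])  # stretch proposal
            log_r = (ndim - 1) * np.log(z) + log_prob(y) - log_prob(walkers[k])
            if np.log(rng.random()) < log_r:
                walkers[k] = y
        return walkers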

[ascl:9909.003] ISIS: A method for optimal image subtraction

ISIS is a complete package to process CCD images using the optimal image subtraction method (Alard & Lupton 1998; Alard 1999). The ISIS package can find the best kernel solution even in the case of kernel variations as a function of position in the image. The relevant computing time is minimal in this case and is only slightly different from finding constant kernel solutions. ISIS also includes a number of facilities to compute the light curves of variable objects from the subtracted images. The basic routines required to build the reference frame and perform the image registration are also provided in the package.

[ascl:1302.002] ISIS: Interactive Spectral Interpretation System for High Resolution X-Ray Spectroscopy

ISIS, the Interactive Spectral Interpretation System, is designed to facilitate the interpretation and analysis of high resolution X-ray spectra. It is being developed as a programmable, interactive tool for studying the physics of X-ray spectrum formation, supporting measurement and identification of spectral features, and interaction with a database of atomic structure parameters and plasma emission models.

[ascl:1601.021] ISO: Isochrone construction

ISO transforms MESA history files into a uniform basis for interpolation and then constructs new stellar evolution tracks and isochrones from that basis. It is written in Fortran and requires MESA (ascl:1010.083), primarily for interpolation. Though ISO is designed to ingest MESA star history files, tracks from other stellar evolution codes can be incorporated by loading them into the data structures used in the code.

[ascl:1503.010] isochrones: Stellar model grid package

Isochrones, written in Python, simplifies common tasks often done with stellar model grids, such as simulating synthetic stellar populations, plotting evolution tracks or isochrones, or estimating the physical properties of a star given photometric and/or spectroscopic observations.

[ascl:1409.006] iSpec: Stellar atmospheric parameters and chemical abundances

iSpec is an integrated software framework written in Python for the treatment and analysis of stellar spectra and abundances. Spectra treatment functions include cosmic ray removal, continuum normalization, resolution degradation, and telluric line identification. It can also perform radial velocity determination and correction, as well as resampling. iSpec can also determine atmospheric parameters (i.e., effective temperature, surface gravity, metallicity, micro/macroturbulence, rotation) and individual chemical abundances by using either the synthetic spectra fitting technique or the equivalent widths method. The synthesis is performed with SPECTRUM (ascl:9910.002).

[ascl:1010.047] ISW and Weak Lensing Likelihood Code

The ISW and Weak Lensing Likelihood code calculates the likelihood of the Integrated Sachs-Wolfe effect and of weak lensing of the Cosmic Microwave Background using the WMAP 3-year CMB maps together with mass tracers such as 2MASS (2-Micron All Sky Survey), SDSS LRG (Sloan Digital Sky Survey Luminous Red Galaxies), SDSS QSOs (Sloan Digital Sky Survey Quasars), and NVSS (NRAO VLA Sky Survey) radio sources. The details of the analysis, and thus of the likelihood code, are described in the accompanying ISW and weak lensing papers. The code computes the theoretical matter power spectrum by brute force with CAMB (ascl:1102.026). It is designed to be integrated into CosmoMC; the accompanying "Code Modification for integration into COSMOMC" documentation describes the required changes.

[ascl:1307.012] ITERA: IDL Tool for Emission-line Ratio Analysis

ITERA, the IDL Tool for Emission-line Ratio Analysis, is an IDL widget tool that allows you to plot ratios of any strong atomic and ionized emission lines as determined by standard photoionization and shock models. These "line ratio diagrams" can then be used to determine diagnostics for nebulae excitation mechanisms or nebulae parameters such as density, temperature, metallicity, etc. ITERA can also be used to determine line sensitivities to such parameters, compare observations with the models, or even estimate unobserved line fluxes.

[ascl:1406.016] IUEDR: IUE Data Reduction package

IUEDR reduces IUE data. It addresses the problem of working from the IUE Guest Observer tape or disk file through to a calibrated spectrum that can be used in scientific analysis and is a complete system for IUE data reduction. IUEDR was distributed as part of the Starlink software collection (ascl:1110.012).

[ascl:1801.002] iWander: Dynamics of interstellar wanderers

iWander assesses the origin of interstellar small bodies such as asteroids and comets. It includes a series of databases and tools that can be used in general for studying the dynamics of interstellar vagabond objects (small bodies, interstellar spaceships, and even stars).

[ascl:1209.002] JAGS: Just Another Gibbs Sampler

JAGS analyzes Bayesian hierarchical models using Markov Chain Monte Carlo (MCMC) simulation not wholly unlike BUGS. JAGS has three aims:

  • to have a cross-platform engine for the BUGS language;
  • to be extensible, allowing users to write their own functions, distributions and samplers; and
  • to be a platform for experimentation with ideas in Bayesian modeling.

[ascl:1403.018] JAM: Jeans Anisotropic MGE modeling method

The Jeans Anisotropic MGE (JAM) modeling method uses the Multi-Gaussian Expansion parameterization for the galaxy surface brightness. The code allows for orbital anisotropy (three-integrals distribution function) and also provides the full second moment tensor, including proper motions and radial velocities.

[ascl:1010.007] JAVELIN: Just Another Vehicle for Estimating Lags In Nuclei (formerly known as SPEAR)

JAVELIN (SPEAR) is a new approach to reverberation mapping that computes the lags between the AGN continuum and emission line light curves and their statistical confidence limits. It uses a damped random walk model to describe the quasar continuum variability and the ansatz that emission line variability is a scaled, smoothed and displaced version of the continuum. While currently configured only to simultaneously fit light curve means, it includes a general linear parameters formalism to fit more complex trends or calibration offsets. The noise matrix can be modified to allow for correlated errors, and the correlation matrix can be modified to use a different stochastic process. The transfer function model is presently a tophat, but this can be altered by changing the line-continuum covariance matrices. It is also able to cope with some problems in traditional reverberation mapping, such as irregular sampling, correlated errors and seasonal gaps.
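
The damped random walk underlying the method is an Ornstein-Uhlenbeck process, which can be simulated exactly on irregularly sampled epochs; the sketch below is illustrative only and is not JAVELIN's interface:

    import numpy as np

    def simulate_drw(t, tau=100.0, sigma=0.2, mean=0.0, seed=42):
        """Exact damped-random-walk update between arbitrary epochs t (days)."""
        rng = np.random.default_rng(seed)
        x = np.empty(len(t))
        x[0] = mean + sigma * rng.standard_normal()
        for i in range(1, len(t)):
            damp = np.exp(-(t[i] - t[i - 1]) / tau)         # decay toward the mean
            x[i] = (mean + (x[i - 1] - mean) * damp
                    + sigma * np.sqrt(1.0 - damp ** 2) * rng.standard_normal())
        return x

    continuum = simulate_drw(np.sort(np.random.uniform(0.0, 1000.0, 200)))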

[ascl:1411.020] JCMT COADD: UKT14 continuum and photometry data reduction

COADD was used to reduce photometry and continuum data from the UKT14 instrument on the James Clerk Maxwell Telescope in the 1990s. The software can co-add multiple observations and perform sigma clipping and Kolmogorov-Smirnov statistical analysis. Additional information on the software is available in the JCMT Spring 1993 newsletter.

[ascl:1406.019] JCMTDR: Applications for reducing JCMT continuum data in GSD format

JCMTDR reduces continuum on-the-fly mapping data obtained with UKT14 or the heterodyne instruments using the IFD on the James Clerk Maxwell Telescope. This program reduces archive data and heterodyne beam maps and was distributed as part of the Starlink software collection (ascl:1110.012).

[ascl:1702.005] JetCurry: Modeling 3D geometry of AGN jets from 2D images

Written in Python, JetCurry models the 3D geometry of jets from 2D images. JetCurry requires NumPy and SciPy and incorporates emcee (ascl:1303.002) and AstroPy (ascl:1304.002), and optionally uses VPython. From a defined initial part of the jet that serves as a reference point, JetCurry finds the position of highest flux within a bin of data in the image matrix and fits along the x axis for the general location of the bends in the jet. A spline fitting is used to smooth out the resulting jet stream.

[ascl:1810.003] JETGET: Hydrodynamic jet simulation visualization and analysis

JETGET accesses, visualizes, and analyses (magnetized-)fluid dynamics data stored in Hierarchical Data Format (HDF) and ASCII files. Although JETGET has been optimized to handle data output from jet simulations using the Zeus code (ascl:1306.014) from NCSA, it is also capable of analyzing output from simulations using other codes. JETGET can select variables from the data files, render both two- and three-dimensional graphics, and analyze and plot important physical quantities. Graphics can be saved in encapsulated Postscript, JPEG, VRML, or saved into an MPEG for later visualization and/or presentations. JETGET is particularly strong in extracting the physics underlying simulated magnetohydrodynamic jets and in visualizing their three-dimensional features. The tool is written in Interactive Data Language (IDL) and uses a graphical user interface to manipulate the data. It was developed on a LINUX platform and can be run on any platform that supports IDL.

[ascl:1308.016] JHelioviewer: Visualization software for solar physics data

JHelioviewer is open source visualization software for solar physics data. The JHelioviewer client application enables users to browse petabyte-scale image archives; the JHelioviewer server integrates a JPIP server, metadata catalog, and an event server. JHelioviewer uses the JPEG 2000 image compression standard, which provides efficient access to petabyte-scale image archives; JHelioviewer also allows users to locate and manipulate specific data sets.

[ascl:1207.013] JKTEBOP: Analyzing light curves of detached eclipsing binaries

The JKTEBOP code is used to fit a model to the light curves of detached eclipsing binary stars in order to derive the radii of the stars as well as various other quantities. It is very stable and includes extensive Monte Carlo or bootstrapping error analysis algorithms. It is also excellent for transiting extrasolar planetary systems. All input and output is done by text files; JKTEBOP is written in almost-standard FORTRAN 77 using first the g77 compiler and now the ifort compiler.

[ascl:1511.016] JKTLD: Limb darkening coefficients

JKTLD outputs theoretically-calculated limb darkening (LD) strengths for equations (LD laws) which predict the amount of LD as a function of the part of the star being observed. The coefficients of these laws are obtained by bilinear interpolation (in effective temperature and surface gravity) in published tables of coefficients calculated from stellar model atmospheres by several researchers. Many observations of stars require the strength of limb darkening to be estimated, which can be done using theoretical models of stellar atmospheres; JKTLD can help in these circumstances.
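
The bilinear lookup itself is straightforward; a sketch using SciPy, with a made-up coefficient table standing in for the published ones:

    import numpy as np
    from scipy.interpolate import RegularGridInterpolator

    teff_grid = np.array([5500.0, 5750.0, 6000.0])    # K (placeholder grid)
    logg_grid = np.array([4.0, 4.5])                  # log cm/s^2 (placeholder grid)
    u_table = np.array([[0.62, 0.60],                 # linear-law coefficient u,
                        [0.59, 0.57],                 # made-up values for illustration
                        [0.56, 0.54]])

    interp = RegularGridInterpolator((teff_grid, logg_grid), u_table)  # bilinear by default
    print(interp([[5820.0, 4.44]]))                   # u at roughly solar parameters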

[ascl:1511.002] JSPAM: Interacting galaxies modeller

JSPAM models galaxy collisions using a restricted n-body approach to speed up computation. Instead of using a softened point-mass potential, the software supports a modified version of the three-component potential created by Hernquist (1994, ApJS 86, 389). Although spherically symmetric gravitational potentials and a Gaussian model for the bulge are used to increase computational efficiency, the potential mimics that of a fully consistent n-body model of a galaxy. Dynamical friction has been implemented in the code to improve the accuracy of close approaches between galaxies. Simulations using this code with thousands of particles over the typical interaction times of a galaxy interaction take a few seconds on modern desktop workstations, making it ideal for rapidly prototyping the dynamics of colliding galaxies. Extensive testing of the code has shown that it produces nearly identical tidal features to those from hierarchical tree codes such as Gadget but using a fraction of the computational resources. This code was used in the Galaxy Zoo: Mergers project and is very well suited for automated fitting of galaxy mergers with pattern-fitting approaches such as genetic algorithms. Java and Fortran versions of the code are available.

[ascl:1607.007] JUDE: An Ultraviolet Imaging Telescope pipeline

JUDE (Jayant's UVIT Data Explorer) converts the Level 1 data (FITS binary table) from the Ultraviolet Imaging Telescope (UVIT) on ASTROSAT into three output files: a photon event list as a function of frame number (FITS binary table); a FITS image file with two extensions; and a PNG file created from the FITS image file with an automated scaling.

[ascl:1812.016] Juliet: Transiting and non-transiting exoplanetary systems modelling tool

Juliet essentially serves as a wrapper of other tools, including Batman (ascl:1510.002), George (ascl:1511.015), Dynesty (ascl:1809.013) and AstroPy (ascl:1304.002), to analyze and model transits, radial-velocities, or both from multiple instruments at the same time. Using nested sampling algorithms, it performs a thorough sampling of the parameter space and a model comparison via Bayesian evidences. Juliet also fits transiting and non-transiting multi-planetary systems, and Gaussian Processes (GPs) which might share hyperparameters between the photometry and radial-velocities simultaneously (e.g., stellar rotation periods).

[ascl:1109.024] Jupiter: Multidimensional Astrophysical Hydrocode

Jupiter is a multidimensional astrophysical hydrocode. It is based on a Godunov method and is parallelized with MPI. The mesh geometry can be either Cartesian, cylindrical, or spherical. It allows mesh refinement and includes special features adapted to the description of planets embedded in disks and nearly steady states.

[ascl:1702.003] juwvid: Julia code for time-frequency analysis

Juwvid performs time-frequency analysis. Written in Julia, it uses a modified version of the Wigner distribution, the pseudo Wigner distribution, and the short-time Fourier transform from MATLAB GPL programs, tftb-0.2. The modification includes the zero-padding FFT, the non-uniform FFT, the adaptive algorithm by Stankovic, Dakovic, Thayaparan 2013, the S-method, the L-Wigner distribution, and the polynomial Wigner-Ville distribution.

[ascl:1504.017] JWFront: Wavefronts and Light Cones for Kerr Spacetimes

JWFront visualizes wavefronts and light cones in general relativity. The interactive front-end allows users to enter the initial position values and choose the values for mass and angular momentum per unit mass. The wavefront animations are available in 2D and 3D; the light cones are visualized using the coordinate systems (t, x, y) or (t, z, x). JWFront can be easily modified to simulate wavefronts and light cones for other spacetimes by providing the Christoffel symbols in the program.

[ascl:1507.013] K-Inpainting: Inpainting for Kepler

Inpainting is a technique for dealing with gaps in time series data, which frequently occur in asteroseismology data and may generate spurious peaks in the power spectrum, thus limiting the analysis of the data. The inpainting method, based on a sparsity prior, judiciously fills in gaps in the data, preserving the asteroseismic signal as far as possible. The method can be applied to both ground- and space-based data. The inpainting technique improves the detection and estimation of oscillation modes, reduces the impact of the observational window function, and simplifies the interpretation of the power spectrum. K-Inpainting can be used to study very long time series of many stars because its computation is very fast.

[ascl:1503.001] K2flix: Kepler pixel data visualizer

K2flix makes it easy to inspect the CCD pixel data obtained by NASA's Kepler space telescope. The two-wheeled extended Kepler mission, K2, is affected by new sources of systematics, including pointing jitter and foreground asteroids, that are easier to spot by eye than by algorithm. The code takes Kepler's Target Pixel Files (TPF) as input and turns them into contrast-stretched animated gifs or MPEG-4 movies. K2flix can be used both as a command-line tool or using its Python API.

[ascl:1601.009] K2fov: Field of view software for NASA's K2 mission

K2fov allows users to transform celestial coordinates into K2's pixel coordinate system for the purpose of preparing target proposals and field of view visualizations. In particular, the package, written in Python, adds the "K2onSilicon" and "K2findCampaigns" tools to the command line, allowing the visibility of targets to be checked in a user-friendly way.

[ascl:1602.014] k2photometry: Read, reduce and detrend K2 photometry

k2photometry reads, reduces and detrends K2 photometry and searches for transiting planets. MAST database pixel files are used as input; the output includes raw lightcurves, detrended lightcurves and a transit search can be performed as well. Stellar variability is not typically well-preserved but parameters can be tweaked to change that. The BLS algorithm used to detect periodic events is a Python implementation by Ruth Angus and Dan Foreman-Mackey (https://github.com/dfm/python-bls).
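
To illustrate the kind of periodic-event search involved, the sketch below injects a box-shaped transit into a mock light curve and recovers its period with Astropy's independent BLS implementation (not the python-bls module the pipeline itself uses):

    import numpy as np
    from astropy.timeseries import BoxLeastSquares

    t = np.linspace(0.0, 80.0, 4000)                  # days of mock cadence
    flux = 1.0 + 1e-4 * np.random.randn(len(t))       # flat light curve plus noise
    flux[(t % 7.3) < 0.12] -= 0.002                   # inject a 7.3 d period, 0.12 d transit

    bls = BoxLeastSquares(t, flux)
    result = bls.autopower(0.1)                       # assumed ~0.1 d transit duration
    print(result.period[np.argmax(result.power)])     # recovered period, ~7.3 d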

[ascl:1607.010] K2PS: K2 Planet search

K2PS is an Oxford K2 planet search pipeline. Written in Python, it searches for transit-like signals from the k2sc-detrended light curves.

[ascl:1605.012] K2SC: K2 Systematics Correction

K2SC (K2 Systematics Correction) models instrumental systematics and astrophysical variability in light curves from the K2 mission. It enables the user to remove both position-dependent systematics and time-dependent variability (e.g., for transit searches) or to remove systematics while preserving variability (for variability studies). K2SC automatically computes estimates of the period, amplitude and evolution timescale of the variability for periodic variables and can be run on ASCII and FITS light curve files. Written in Python, this pipeline requires NumPy, SciPy, MPI4Py, Astropy (ascl:1304.002), and George (ascl:1511.015).

[ascl:1307.003] K3Match: Point matching in 3D space

K3Match is a C library with Python bindings for fast matching of points in 3D space. It uses 3-dimensional binary trees to find matches between large datasets in O(N log N) time.
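
The technique can be illustrated with SciPy's kd-tree; K3Match supplies its own C implementation and Python bindings:

    import numpy as np
    from scipy.spatial import cKDTree

    a = np.random.rand(100000, 3)                # first 3D point set
    b = np.random.rand(100000, 3)                # second 3D point set
    tree = cKDTree(a)                            # build the binary tree on set a
    dist, idx = tree.query(b, distance_upper_bound=0.01)
    matched = np.isfinite(dist)                  # b[i] matches a[idx[i]] within the radius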

[ascl:1803.005] Kadenza: Kepler/K2 Raw Cadence Data Reader

Kadenza enables time-critical data analyses to be carried out using NASA's Kepler Space Telescope. It enables users to convert Kepler's raw data files into user-friendly Target Pixel Files upon downlink from the spacecraft. The primary motivation for this tool is to enable the microlensing, supernova, and exoplanet communities to create quicklook lightcurves for transient events which require rapid follow-up.

[ascl:1607.013] Kālī: Time series data modeler

The fully parallelized and vectorized software package Kālī models time series data using various stochastic processes, such as continuous-time ARMA (C-ARMA) processes, and uses Bayesian Markov chain Monte Carlo (MCMC) to infer the parameters of a stochastic light curve model. Kālī is written in C++ with Python language bindings for ease of use. Kālī is named jointly after the Hindu goddess of time, change, and power, and also as an acronym for KArma LIbrary.

[ascl:1403.022] KAPPA: Kernel Applications Package

KAPPA comprises about 180 general-purpose commands for image processing, data visualization, and manipulation of the standard Starlink data format, the NDF. It works with Starlink's various specialized packages; in addition to the NDF, KAPPA can also process data in other formats by using the "on-the-fly" conversion scheme. Many commands can process data arrays of arbitrary dimension, and others work on both spectra and images. KAPPA operates from both the UNIX C-shell and the ICL command language, and uses the Starlink environment (ascl:1110.012).

[ascl:1502.008] KAPPA: Optically thin spectra synthesis for non-Maxwellian kappa-distributions

Based on the freely available CHIANTI (ascl:9911.004) database and software, KAPPA synthesizes line and continuum spectra from the optically thin spectra that arise from collisionally dominated astrophysical plasmas that are the result of non-Maxwellian κ-distributions detected in the solar transition region and flares. Ionization and recombination rates together with the ionization equilibria are provided for a range of κ values. Distribution-averaged collision strengths for excitation are obtained by an approximate method for all transitions in all ions available within CHIANTI; KAPPA also offers tools for calculating synthetic line and continuum intensities.

[ascl:1611.010] Kapteyn Package: Tools for developing astronomical applications

The Kapteyn Package provides tools for the development of astronomical applications with Python. It handles spatial and spectral coordinates, WCS projections and transformations between different sky systems; spectral translations (e.g., between frequencies and velocities) and mixed coordinates are also supported. Kapteyn offers versatile tools for writing small and dedicated applications for the inspection of FITS headers, the extraction and display of (FITS) data, interactive inspection of this data (color editing) and for the creation of plots with world coordinate information. It includes utilities for use with matplotlib such as obtaining coordinate information from plots, interactively modifiable colormaps and timer events (module mplutil); tools for parsing and interpreting coordinate information entered by the user (module positions); a function to search for gaussian components in a profile (module profiles); and a class for non-linear least squares fitting (module kmpfit).

[ascl:1102.018] Karma: Visualisation Test-Bed Toolkit

Karma is a toolkit for interprocess communications, authentication, encryption, graphics display, user interface and manipulating the Karma network data structure. It contains KarmaLib (the structured libraries and API) and a large number of modules (applications) to perform many standard tasks. A suite of visualisation tools are distributed with the library.

[ascl:1701.005] KAULAKYS: Inelastic collisions between hydrogen atoms and Rydberg atoms

KAULAKYS calculates cross sections and rate coefficients for inelastic collisions between Rydberg atoms and hydrogen atoms according to the free electron model of Kaulakys (1986, 1991). It is written in IDL and requires the code MSWAVEF (ascl:1701.006) to calculate momentum-space wavefunctions. KAULAKYS can be easily adapted to collisions with perturbers other than hydrogen atoms by providing the appropriate scattering amplitudes.

[ascl:1701.010] kcorrect: Calculate K-corrections between observed and desired bandpasses

kcorrect fits very restricted spectral energy distribution models to galaxy photometry or spectra in the restframe UV, optical and near-infrared. The main purpose of the fits is to calculate K-corrections. The templates used for the fits may also be interpreted physically, since they are based on the Bruzual-Charlot stellar evolution synthesis codes. Thus, for each fitted galaxy, kcorrect can provide an estimate of the stellar mass-to-light ratio.

[ascl:1712.001] KDUtils: Kinematic Distance Utilities

The Kinematic Distance utilities (KDUtils) calculate kinematic distances and kinematic distance uncertainties. The package includes methods to calculate "traditional" kinematic distances as well as a Monte Carlo method to calculate kinematic distances and uncertainties.
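
Under a flat rotation curve, the "traditional" calculation reduces to simple geometry: the measured v_lsr fixes the Galactocentric radius R, and the line of sight then yields near and far solutions. A sketch with assumed constants (R0 = 8.5 kpc, V0 = 220 km/s), not KDUtils' Monte Carlo machinery:

    import numpy as np

    R0, V0 = 8.5, 220.0                          # kpc, km/s (assumed constants)

    def kinematic_distances(l_deg, v_lsr):
        """Near/far kinematic distances (kpc) for a flat rotation curve V(R) = V0."""
        l = np.radians(l_deg)
        R = R0 / (1.0 + v_lsr / (V0 * np.sin(l)))        # from v_lsr = V0 sin(l) (R0/R - 1)
        disc = R ** 2 - (R0 * np.sin(l)) ** 2
        if disc < 0:
            raise ValueError("v_lsr exceeds the tangent-point velocity")
        near = R0 * np.cos(l) - np.sqrt(disc)            # near/far ambiguity applies only
        far = R0 * np.cos(l) + np.sqrt(disc)             # inside the solar circle
        return near, far

    print(kinematic_distances(30.0, 50.0))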

[ascl:1702.007] KEPLER: General purpose 1D multizone hydrodynamics code

KEPLER is a general purpose stellar evolution/explosion code that incorporates implicit hydrodynamics and a detailed treatment of nuclear burning processes. It has been used to study the complete evolution of massive and supermassive stars, all major classes of supernovae, hydrostatic and explosive nucleosynthesis, and x- and gamma-ray bursts on neutron stars and white dwarfs.

[ascl:1706.012] KeplerSolver: Kepler equation solver

KeplerSolver solves Kepler's equation for arbitrary epoch and eccentricity, using continued fractions. It is written in C and its speed is nearly the same as the SWIFT routines, while achieving machine precision. It comes with a test program to demonstrate usage.
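
For comparison, the equation being solved, M = E - e sin E, also yields to a few Newton-Raphson iterations; this is an illustration of the problem, not the package's continued-fraction scheme:

    import numpy as np

    def kepler_E(M, e, tol=1e-14, max_iter=50):
        """Solve M = E - e*sin(E) for the eccentric anomaly E (radians)."""
        E = M if e < 0.8 else np.pi              # standard starting guess
        for _ in range(max_iter):
            dE = (E - e * np.sin(E) - M) / (1.0 - e * np.cos(E))
            E -= dE
            if abs(dE) < tol:
                break
        return E

    print(kepler_E(M=1.0, e=0.3))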

[ascl:1806.022] Keras: The Python Deep Learning library

Keras is a high-level neural networks API written in Python and capable of running on top of TensorFlow, CNTK, or Theano. It focuses on enabling fast experimentation.

[submitted] KERN

KERN is a bi-annually released set of radio astronomical software packages. It should contain most of the standard tools that a radio astronomer needs to work with radio telescope data. The goal of KERN is to save time and frustration in setting up scientific pipelines, and to assist in achieving scientific reproducibility.

[ascl:1708.021] KERTAP: Strong lensing effects of Kerr black holes

KERTAP computes the strong lensing effects of Kerr black holes, including the effects on polarization. The key ingredients of KERTAP are a graphic user interface, a backward ray-tracing algorithm, a polarization propagator dealing with gravitational Faraday rotation, and algorithms computing observables such as flux magnification and polarization angles.

[ascl:1502.020] ketu: Exoplanet candidate search code

ketu, written in Python, searches K2 light curves for evidence of exoplanets; the code simultaneously fits for systematic effects caused by small (few-pixel) drifts in the telescope pointing and other spacecraft issues and the transit signals of interest. Though more computationally expensive than standard search algorithms, it can be efficiently implemented and used to discover transit signals.

[ascl:1403.019] KINEMETRY: Analysis of 2D maps of kinematic moments of LOSVD

KINEMETRY, written in IDL, analyzes 2D maps of the moments of the line-of-sight velocity distribution (LOSVD). It generalizes the surface photometry to all moments of the LOSVD. It performs harmonic expansion of 2D maps of observed moments (surface brightness, velocity, velocity dispersion, h3, h4, etc.) along the best fitting ellipses (either fixed or free to change along the radii) to robustly quantify maps of the LOSVD moments, describe trends in structures, and detect morphological and kinematic sub-components.

[ascl:1401.001] Kirin: N-body simulation library for GPUs

The use of graphics processing units offers an attractive alternative to specialized hardware, like GRAPE. The Kirin library mimics the behavior of the GRAPE hardware and uses the GPU to execute the force calculations. It is compatible with the GRAPE6 library; existing code that uses the GRAPE6 library can be recompiled and relinked to use the GPU equivalents of the GRAPE6 functions. All functions in the GRAPE6 library have an equivalent GPU implementation. Kirin can be used for direct N-body simulations as well as for treecodes; it can be run with shared-time steps or with block time-steps and allows non-softened potentials. As Kirin makes use of CUDA, it works only on NVIDIA GPUs.

[submitted] Kliko - The Scientific Compute Container Format

We present Kliko, a Docker-based container specification for running one or multiple related compute jobs. The key concepts of Kliko are the encapsulation of data processing software into a container and the formalisation of the input, output, and task parameters. Formalisation is realised by bundling a container with a Kliko file, which describes the IO and task parameters. This Kliko container can then be opened and run by a Kliko runner. The Kliko runner parses the Kliko definition and gathers the values for these parameters, for example by requesting user input or from predefined values in a script. Parameters can be various primitive types, for example a float, an int, or the path to a file. A support library, also named Kliko, can be used to create Kliko containers, parse Kliko definitions, and chain Kliko containers into workflows using, for example, the workflow manager Luigi; the library can also be used inside a container to interact with the Kliko runner. Two reference implementations are based on Kliko: RODRIGUES, a web-based Kliko container scheduler and output visualiser specifically for astronomical data, and VerMeerKAT, a multi-container workflow data reduction pipeline which is being used as a prototype pipeline for the commissioning of the MeerKAT radio telescope.

[ascl:1606.012] KMDWARFPARAM: Parameters estimator for K and M dwarf stars

KMDWARFPARAM estimates the physical parameters of a star with mass M < 0.8 M_sun given one or more observational constraints. The code runs a Markov-Chain Monte Carlo procedure to estimate the parameter values and their uncertainties.

[ascl:1504.013] kozai: Hierarchical triple systems evolution

The kozai Python package evolves hierarchical triple systems in the secular approximation. As its name implies, the kozai package is useful for studying Kozai-Lidov oscillations. The kozai package can represent and evolve hierarchical triples using either the Delaunay orbital elements or the angular momentum and eccentricity vectors. kozai contains functions to calculate the period of Kozai-Lidov oscillations and the maximum eccentricity reached; it also contains a module to study octupole order effects by averaging over individual Kozai-Lidov oscillations.
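
A useful analytic benchmark for such calculations is the standard test-particle, quadrupole-order maximum eccentricity for an initially circular inner orbit, e_max = sqrt(1 - (5/3) cos^2 i0); the sketch below encodes that textbook formula and is not necessarily the package's own routine:

    import numpy as np

    def emax_quadrupole(i0_deg):
        """Maximum Kozai-Lidov eccentricity for an initially circular inner orbit."""
        cos2 = np.cos(np.radians(i0_deg)) ** 2
        if cos2 > 3.0 / 5.0:
            return 0.0                           # below the critical (~39.2 deg) inclination
        return np.sqrt(1.0 - (5.0 / 3.0) * cos2)

    print(emax_quadrupole(85.0))                 # near-polar triple: e_max ~ 0.99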

[ascl:1807.027] kplr: Tools for working with Kepler data using Python

kplr provides a lightweight Pythonic interface to the catalog of planet candidates (Kepler Objects of Interest [KOIs]) in the NASA Exoplanet Archive and the data stored in the Barbara A. Mikulski Archive for Space Telescopes (MAST). kplr automatically supports loading Kepler data using pyfits (ascl:1207.009) and supports two types of data: light curves and target pixel files.

[ascl:1609.003] Kranc: Cactus modules from Mathematica equations

Kranc turns a tensorial description of a time dependent partial differential equation into a module for the Cactus Computational Toolkit (ascl:1102.013). This Mathematica application takes a simple continuum description of a problem and generates highly efficient and portable code, and can be used both for rapid prototyping of evolution systems and for high performance supercomputing.

[ascl:1402.011] KROME: Chemistry package for astrophysical simulations

KROME, given a chemical network (in CSV format), automatically generates all the routines needed to solve the kinetics of the system modeled as a system of coupled Ordinary Differential Equations. It provides a large set of physical processes connected to chemistry, including photochemistry, cooling, heating, dust treatment, and reverse kinetics. KROME is flexible and can be used for a wide range of astrophysical simulations. The package contains a network for primordial chemistry, a small metal network appropriate for the modeling of low metallicities environments, a detailed network for the modeling of molecular clouds, and a network for planetary atmospheres as well as a framework for the modelling of the dust grain population.

[ascl:1505.004] KS Integration: Kelvin-Stokes integration

KS Integration solves for mutual photometric effects produced by planets and spots, allowing for analysis of planetary occultations of spots and spot regions. It proceeds by identifying integrable and non-integrable arcs on the objects' profiles and analytically calculates the solution by exploiting the power of the Kelvin-Stokes theorem. It provides the solution up to the second degree of the limb darkening law.

[ascl:1804.026] KSTAT: KD-tree Statistics Package

KSTAT calculates the 2- and 3-point correlation functions in discrete point data. These include the two-point correlation function in 2 and 3 dimensions, and the anisotropic 2PCF decomposed in either sigma-pi or Kazin's dist. mu projection. The 3-point correlation function can also be computed in anisotropic coordinates. The code is based on kd-tree structures and is parallelized using a mixture of MPI and OpenMP.
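
The 2-point measurement is commonly built on the Landy-Szalay estimator, xi = (DD - 2DR + RR) / RR; a schematic Python version using SciPy pair counting (KSTAT's own kd-tree implementation is parallelized with MPI and OpenMP) is:

    import numpy as np
    from scipy.spatial import cKDTree

    def landy_szalay(data, randoms, edges):
        """xi(r) in separation bins `edges` from data and random catalogs."""
        dt, rt = cKDTree(data), cKDTree(randoms)
        nd, nr = len(data), len(randoms)
        dd = np.diff(dt.count_neighbors(dt, edges)) / (nd * (nd - 1))   # data-data pairs
        rr = np.diff(rt.count_neighbors(rt, edges)) / (nr * (nr - 1))   # random-random pairs
        dr = np.diff(dt.count_neighbors(rt, edges)) / (nd * nr)         # data-random pairs
        return (dd - 2.0 * dr + rr) / rr

    edges = np.logspace(-1.0, 1.0, 11)           # separation bins
    xi = landy_szalay(np.random.rand(2000, 3), np.random.rand(20000, 3), edges)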

[ascl:1807.028] ktransit: Exoplanet transit modeling tool in python

The routines in ktransit create and fit a transiting planet model. The underlying model is a Fortran implementation of the Mandel & Agol (2002) limb darkened transit model. The code calculates a full orbital model and eccentricity can be allowed to vary; radial velocity data can also be calculated via the model and included in the fit.

[ascl:1407.011] kungifu: Calibration and reduction of fiber-fed IFU astronomical spectroscopy

kungifu is a set of IDL software routines designed for the calibration and reduction of fiber-fed integral-field unit (IFU) astronomical spectroscopy. These routines can perform optimal extraction of IFU data and allow relative and absolute wavelength calibration to within a few hundredths of a pixel (for unbinned data) across 1200-2000 fibers. kungifu does nearly Poisson-limited sky subtraction, even in the I band, and can rebin in wavelength. The Princeton IDLUTILS and IDLSPEC2D packages must be installed for kungifu to run.

[ascl:1507.004] L-PICOLA: Fast dark matter simulation code

L-PICOLA generates and evolves a set of initial conditions into a dark matter field and can include primordial non-Gaussianity in the simulation and simulate the past lightcone at run-time, with optional replication of the simulation volume. It is a fast, distributed-memory, planar-parallel code. L-PICOLA is extremely useful for both current and next generation large-scale structure surveys.

[ascl:1207.005] L.A.Cosmic: Laplacian Cosmic Ray Identification

Conventional algorithms for rejecting cosmic rays in single CCD exposures rely on the contrast between cosmic rays and their surroundings and may produce erroneous results if the point-spread function is smaller than the largest cosmic rays. This code uses a robust algorithm for cosmic-ray rejection, based on a variation of Laplacian edge detection. The algorithm identifies cosmic rays of arbitrary shapes and sizes by the sharpness of their edges and reliably discriminates between poorly sampled point sources and cosmic rays. Examples of its performance are given for spectroscopic and imaging data, including Hubble Space Telescope Wide Field Planetary Camera 2 images, in the code paper.
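
The core of the approach can be sketched in a few lines: convolve the frame with a Laplacian kernel, keep the positive edge responses, and compare them with a local noise model. This is a bare-bones illustration of the idea, not the full published algorithm (which also subsamples the image and iterates):

    import numpy as np
    from scipy.ndimage import convolve, median_filter

    LAPLACIAN = np.array([[0.0, -1.0, 0.0],
                          [-1.0, 4.0, -1.0],
                          [0.0, -1.0, 0.0]])

    def flag_cosmics(image, gain=1.0, readnoise=5.0, sigclip=5.0):
        """Boolean mask of pixels whose Laplacian response exceeds the noise."""
        lap = convolve(image, LAPLACIAN)         # sharp edges respond strongly
        lap[lap < 0.0] = 0.0                     # keep positive deviations only
        smooth = median_filter(image, size=5)    # local background/source model
        noise = np.sqrt(np.maximum(smooth * gain, 0.0) + readnoise ** 2) / gain
        return lap / noise > sigclip             # candidate cosmic-ray pixels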

[ascl:1601.011] LACEwING: LocAting Constituent mEmbers In Nearby Groups

LACEwING (LocAting Constituent mEmbers In Nearby Groups) uses the kinematics (positions and motions) of stars to determine if they are members of one of 10 nearby young moving groups or 4 nearby open clusters within 100 parsecs. It is written for Python 2.7 and depends upon Numpy, Scipy, and Astropy (ascl:1304.002) modules. LACEwING can be used as a stand-alone code or as a module in other code. Additional python programs are present in the repository for the purpose of recalibrating the code and producing other analyses, including a traceback analysis.

[ascl:1604.003] LAMBDAR: Lambda Adaptive Multi-Band Deblending Algorithm in R

LAMBDAR measures galaxy fluxes from an arbitrary FITS image, covering an arbitrary photometric wave-band, when provided with all parameters needed to construct galactic apertures at the required locations for multi-band matched aperture galactic photometry. Through sophisticated matched aperture photometry, the package develops robust Spectral Energy Distributions (SEDs) and accurately establishes the physical properties of galactic objects. LAMBDAR is based on a package detailed in Bourne et al. (2012) that determined galactic fluxes in low-resolution Herschel images.

[ascl:1010.077] LAMDA: Leiden Atomic and Molecular Database

LAMDA provides users of radiative transfer codes with the basic atomic and molecular data needed for the excitation calculation. Line data of a number of astrophysically interesting species are summarized, including energy levels, statistical weights, Einstein A-coefficients and collisional rate coefficients. Available collisional data from quantum chemical calculations and experiments are in some cases extrapolated to higher energies. Currently the database contains atomic data for 3 species and molecular data for 28 different species. In addition, several isotopomers and deuterated versions are available. This database should form an important tool in analyzing observations from current and future infrared and (sub)millimetre telescopes. Databases such as these rely heavily on the efforts by the chemical physics community to provide the relevant atomic and molecular data. Further efforts in this direction are strongly encouraged so that the current extrapolations of collisional rate coefficients can be replaced by actual calculations in future releases.

RADEX, a computer program for performing statistical equilibrium calculations, is made publicly available as part of the database.

[ascl:1409.003] LANL*: Radiation belt drift shell modeling

LANL* calculates the magnetic drift invariant L*, used for modeling radiation belt dynamics and other space weather applications, six orders of magnitude (~one million times) faster than conventional approaches that require global numerical field-line tracing and integration. It is based on a modern machine learning technique (a feed-forward artificial neural network) trained on a large data pool obtained from the IRBEM library, the traditional source for numerically calculated L* values. The pool consists of about 100,000 samples randomly distributed within the magnetosphere (r: [1.03, 11.5] Re) and within a whole solar cycle from 1/1/1994 to 1/1/2005. There are seven LANL* models, each corresponding to the underlying magnetic field configuration used to create its data sample pool. The model has applications to real-time radiation belt forecasting, analysis of data sets involving tens of satellite-years of observations, and other problems in space weather.

[ascl:1703.001] Larch: X-ray Analysis for Synchrotron Applications using Python

Larch is an open-source library and toolkit written in Python for processing and analyzing X-ray spectroscopic data. The primary emphasis is on X-ray spectroscopic and scattering data collected at modern synchrotron sources. Larch provides a wide selection of general-purpose processing, analysis, and visualization tools for processing X-ray data; its related target application areas include X-ray absorption fine structure (XAFS), micro-X-ray fluorescence (XRF) maps, quantitative X-ray fluorescence, X-ray absorption near edge spectroscopy (XANES), and X-ray standing waves and surface scattering. Larch provides a complete set of XAFS Analysis tools and has support for visualizing and analyzing XRF maps and spectra, and additional tools for X-ray spectral analysis, data handling, and general-purpose data modeling.

[ascl:1208.015] Lare3d: Lagrangian-Eulerian remap scheme for MHD

Lare3d is a Lagrangian-remap code for solving the non-linear MHD equations in three spatial dimensions.

[ascl:1806.021] LASR: Linear Algorithm for Significance Reduction

LASR removes stellar variability in the light curves of δ-Scuti and similar stars. It subtracts oscillations from a time series by minimizing their statistical significance in frequency space.

[ascl:1202.011] Lattimer-Swesty Equation of State Code

The Lattimer-Swesty Equation of State code is rapid enough to use directly in hydrodynamical simulations such as stellar collapse calculations. It contains an adjustable nuclear force that accurately models both potential and mean-field interactions and allows for the input of various nuclear parameters, including the bulk incompressibility parameter, the bulk and surface symmetry energies, the symmetric matter surface tension, and the nucleon effective masses. This permits parametric studies of the equation of state in astrophysical situations. The equation of state is modeled after the Lattimer, Lamb, Pethick, and Ravenhall (LLPR) compressible liquid drop model for nuclei, and includes the effects of interactions and degeneracy of the nucleons outside nuclei.

[ascl:1405.001] LBLRTM: Line-By-Line Radiative Transfer Model

LBLRTM (Line-By-Line Radiative Transfer Model) is an accurate line-by-line model that is efficient and highly flexible. LBLRTM attributes provide spectral radiance calculations with accuracies consistent with the measurements against which they are validated and with computational times that greatly facilitate the application of the line-by-line approach to current radiative transfer applications. LBLRTM has been extensively validated against atmospheric radiance spectra from the ultra-violet to the sub-millimeter.

LBLRTM's heritage is in FASCODE [Clough et al., 1981, 1992].

[ascl:1708.017] LCC: Light Curves Classifier

Light Curves Classifier uses data mining and machine learning to obtain and classify desired objects. This can be accomplished using attributes of light curves or any time series, including shapes, histograms, or variograms, or using other available information about the inspected objects, such as color indices, temperatures, and abundances. After the features that describe the objects to be searched are specified, the software trains on a given training sample and can then be used for unsupervised clustering to visualize the natural separation of the sample. The package can also be used to automatically tune the parameters of the methods used (for example, the number of hidden neurons or the binning ratio).

Trained classifiers can be used for filtering outputs from astronomical databases or data stored locally. The Light Curve Classifier can also be used for simple downloading of light curves and all available information about queried stars. It natively connects to OgleII, OgleIII, ASAS, CoRoT, Kepler, Catalina, and MACHO, and new connectors or descriptors can be implemented. In addition to direct usage of the package and a command-line UI, the program can be used through a web interface. Users can create jobs for "training" methods on given objects, querying databases, and filtering outputs with trained filters. Preimplemented descriptors, classifiers, and connectors can be selected with simple clicks, and their parameters can be tuned by specifying ranges of values. All combinations are then calculated and the best one is used to create the filter. The natural separation of the data can be visualized by unsupervised clustering.

[ascl:1805.003] lcps: Light curve pre-selection

lcps searches for transit-like features (i.e., dips) in photometric data. Its main purpose is to restrict large sets of light curves to a number of files that show interesting behavior, such as drops in flux. While lcps is adaptable to any format of time series, its I/O module is designed specifically for photometry from the Kepler spacecraft. It extracts the pre-conditioned PDCSAP data from light curve files created by the standard Kepler pipeline, and it can also handle CSV-formatted ASCII files. lcps uses a sliding-window technique to compare a section of flux time series with its surroundings. A dip is detected if the flux within the window is lower than a threshold fraction of the surrounding fluxes.
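
The sliding-window test reduces to a few lines of Python. The sketch below illustrates the idea only and is not lcps's actual interface; the window size, threshold, and injected dip are hypothetical:

    import numpy as np

    def find_dips(flux, winsize=10, threshold=0.99):
        # Indices where a window is dimmer than threshold times the median
        # of the fluxes immediately before and after it.
        hits = []
        for i in range(len(flux) - winsize):
            window = flux[i:i + winsize]
            surround = np.concatenate([flux[max(0, i - winsize):i],
                                       flux[i + winsize:i + 2 * winsize]])
            if surround.size and np.median(window) < threshold * np.median(surround):
                hits.append(i)
        return hits

    rng = np.random.default_rng(0)
    flux = 1.0 + 1e-4 * rng.standard_normal(500)
    flux[240:250] -= 0.02               # inject a 2% box-shaped dip
    print(find_dips(flux))              # indices around the injected dip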

[ascl:1511.018] LDC3: Three-parameter limb darkening coefficient sampling

LDC3 samples physically permissible limb darkening coefficients for the Sing et al. (2009) three-parameter law. It defines the physically permissible intensity profile as being everywhere-positive, monotonically decreasing from center to limb and having a curl at the limb. The approximate sampling method is analytic and thus very fast, reproducing physically permissible samples in 97.3% of random draws (high validity) and encompassing 94.4% of the physically permissible parameter volume (high completeness).
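
The first two of those criteria are easy to check numerically for the Sing (2009) law, I(mu) = 1 - c2(1 - mu) - c3(1 - mu^(3/2)) - c4(1 - mu^2). The brute-force Python check below (the curl condition is omitted for brevity, and the prior box is hypothetical) only shows what the sampler guarantees; LDC3 itself draws directly from the physical region analytically rather than by rejection:

    import numpy as np

    def is_physical(c2, c3, c4, n=200):
        mu = np.linspace(0.0, 1.0, n)
        profile = 1 - c2*(1 - mu) - c3*(1 - mu**1.5) - c4*(1 - mu**2)
        positive = np.all(profile > 0)            # everywhere positive
        # non-decreasing in mu = decreasing from center (mu=1) to limb (mu=0)
        monotone = np.all(np.diff(profile) >= 0)
        return bool(positive and monotone)

    rng = np.random.default_rng(1)
    draws = rng.uniform(-1.0, 2.0, size=(10000, 3))   # hypothetical prior box
    frac = np.mean([is_physical(*c) for c in draws])
    print(frac)   # fraction of the box that passes the two checks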

[ascl:1507.016] Least Asymmetry: Centering Method

Least Asymmetry finds the center of a distribution of light in an image using the least asymmetry method; the code also contains center-of-light and Gaussian-fitting routines. All functions in Least Asymmetry are designed to take optional weights.

[ascl:1104.006] LECTOR: Line-strengths in One-dimensional ASCII Spectra

LECTOR is a Fortran 77 code that measures line-strengths in one-dimensional ASCII spectra. The code returns the values of the Lick indices as well as those of Vazdekis & Arimoto (1999), Vazdekis et al. (2001), Rose (1994), Jones & Worthey (1995), and Cenarro et al. (2001). The code measures as many indices as desired, provided the limits of the two pseudocontinua (one on each side of the feature) and of the feature itself (i.e., a Lick-style index definition) are supplied. The Lick-style indices can be expressed either in pseudo-equivalent widths or in magnitudes. If requested, the program provides index error estimates on the basis of photon statistics.

[ascl:1809.001] LEMON: Differential photometry pipeline

LEMON is a differential-photometry pipeline, written in Python, that determines the changes in the brightness of astronomical objects over time and compiles their measurements into light curves. This code makes it possible to completely reduce thousands of FITS images of time series in a matter of only a few hours, requiring minimal user interaction.

[ascl:1505.026] Lensed: Forward parametric modelling of strong lenses

Lensed performs forward parametric modelling of strong lenses. Using a provided model, Lensed renders the expected image of the lensing event for a large number of parameter settings, thereby exploring the space of possible realizations of the observation. It compares the expectation to the observed image by calculating the likelihood that the observation was indeed produced by the assumed model, thus reconstructing the probability distribution over the parameter space of the model. Written in C, the code uses a massively parallel ray-tracing kernel to perform the necessary calculations on a graphics processing unit (GPU), making the precise rendering of the background lensed sources fast and allowing the simultaneous optimization of tens of parameters for the selected model.

[ascl:1308.004] LensEnt2: Maximum-entropy weak lens reconstruction

LensEnt2 is a maximum entropy reconstructor of weak lensing mass maps. The method takes each galaxy shape as an independent estimator of the reduced shear field and incorporates an intrinsic smoothness, determined by Bayesian methods, into the reconstruction. The uncertainties from both the intrinsic distribution of galaxy shapes and galaxy shape estimation are carried through to the final mass reconstruction, and the mass within arbitrarily shaped apertures is calculated with corresponding uncertainties. The input is a galaxy ellipticity catalog with each measured galaxy shape treated as a noisy tracer of the reduced shear field, which is inferred on a fine pixel grid assuming positivity and smoothness on scales of w arcsec, where w is an input parameter. The ICF width w can be chosen by computing the evidence for it.

[ascl:9903.001] LENSKY: Galactic Microlensing Probability

Given a model for the Galaxy, this program computes the microlensing rate in any direction. Program features include the ability to include the brightness of the lens and to compute the probability of lens detection at any level of lensing amplification. The program limits itself to lensing of single sources by single stars. It is currently set up to accept input from the Galactic models of Bahcall and Soneira (1982, 1986).

Three files are needed for LENSKY: the Fortran file lensky.for and two input files, galmod.dsk (15 MB) and galmod.sph (22 MB). The zip file available below contains all three files. The program writes its output to the file lensky.out and is largely self-explanatory beyond that.

[ascl:1010.050] LensPerfect: Gravitational Lens Massmap Reconstructions Yielding Exact Reproduction of All Multiple Images

LensPerfect is a new approach to the massmap reconstruction of strong gravitational lenses. Conventional methods iterate over possible lens models which reproduce the observed multiple image positions well but not exactly. LensPerfect only produces solutions which fit all of the data exactly. Magnifications and shears of the multiple images can also be perfectly constrained to match observations.

[ascl:1102.025] LensPix: Fast MPI full sky transforms for HEALPix

Modelling of the weak lensing of the CMB will be crucial to obtain correct cosmological parameter constraints from forthcoming precision CMB anisotropy observations. The lensing affects the power spectrum as well as inducing non-Gaussianities. We discuss the simulation of full sky CMB maps in the weak lensing approximation and describe a fast numerical code. The series expansion in the deflection angle cannot be used to simulate accurate CMB maps, so a pixel remapping must be used. For parameter estimation, accounting for the change in the power spectrum but assuming Gaussianity is sufficient to obtain accurate results up to Planck sensitivity using current tools. A fuller analysis may be required to obtain accurate error estimates and for more sensitive observations. We demonstrate a simple full sky simulation and subsequent parameter estimation at Planck-like sensitivity.

[ascl:1705.009] LensPop: Galaxy-galaxy strong lensing population simulation

LensPop simulates observations of the galaxy-galaxy strong lensing population in the Dark Energy Survey (DES), the Large Synoptic Survey Telescope (LSST), and Euclid surveys.

[ascl:1102.004] LENSTOOL: A Gravitational Lensing Software for Modeling Mass Distribution of Galaxies and Clusters (strong and weak regime)

We describe a procedure for modelling strong lensing galaxy clusters with parametric methods, and for ranking models quantitatively using the Bayesian evidence. We use a publicly available Markov chain Monte-Carlo (MCMC) sampler ('Bayesys'), allowing us to avoid local minima in the likelihood functions. To illustrate the power of the MCMC technique, we simulate three clusters of galaxies, each composed of a cluster-scale halo and a set of perturbing galaxy-scale subhalos. We ray-trace three light beams through each model to produce a catalogue of multiple images, and then use the MCMC sampler to recover the model parameters in the three different lensing configurations. We find that, for typical Hubble Space Telescope (HST)-quality imaging data, the total mass in the Einstein radius is recovered with ~1-5% error, depending on the lensing configuration considered. However, we find that the mass of the galaxies is strongly degenerate with the cluster mass when no multiple images appear in the cluster centre. The mass of the galaxies is generally recovered with a 20% error, largely due to the poorly constrained cut-off radius. Finally, we describe how to rank models quantitatively using the Bayesian evidence. We confirm the ability of strong lensing to constrain the mass profile in the central region of galaxy clusters in this way. Ultimately, such a method applied to strong lensing clusters with a very large number of multiple images may provide unique geometrical constraints on cosmology.

[ascl:1602.009] LensTools: Weak Lensing computing tools

LensTools implements a wide range of routines frequently used in Weak Gravitational Lensing, including tools for image analysis, statistical processing and numerical theory predictions. The package offers many useful features, including complete flexibility and easy customization of input/output formats; efficient measurements of power spectrum, PDF, Minkowski functionals and peak counts of convergence maps; survey masks; artificial noise generation engines; easy to compute parameter statistical inferences; ray tracing simulations; and many others. It requires standard numpy and scipy, and depending on tools used, may require Astropy (ascl:1304.002), emcee (ascl:1303.002), matplotlib, and mpi4py.

[ascl:1804.012] Lenstronomy: Multi-purpose gravitational lens modeling software package

Lenstronomy is a multi-purpose open-source gravitational lens modeling python package. Lenstronomy reconstructs the lens mass and surface brightness distributions of strong lensing systems using forward modelling and supports a wide range of analytic lens and light models in arbitrary combination. The software is also able to reconstruct complex extended sources as well as point sources. Lenstronomy is flexible and numerically accurate, with a clear user interface that could be deployed across different platforms. Lenstronomy has been used to derive constraints on dark matter properties in strong lenses, measure the expansion history of the universe with time-delay cosmography, measure cosmic shear with Einstein rings, and decompose quasar and host galaxy light.
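
As a flavor of the package's modular interface, the snippet below maps an image-plane position back to the source plane through a singular isothermal sphere, following lenstronomy's documented LensModel pattern (profile names and keyword arguments should be checked against the current documentation):

    from lenstronomy.LensModel.lens_model import LensModel

    # One SIS deflector with a 1" Einstein radius at the origin.
    lens_model = LensModel(lens_model_list=["SIS"])
    kwargs_lens = [{"theta_E": 1.0, "center_x": 0.0, "center_y": 0.0}]

    # Ray-shoot an image-plane position (arcsec) back to the source plane.
    beta_x, beta_y = lens_model.ray_shooting(1.1, 0.0, kwargs_lens)
    print(beta_x, beta_y)   # (0.1, 0.0) for this configuration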

[ascl:1307.005] LENSVIEW: Resolved gravitational lens images modeling

Lensview models resolved gravitational lens systems based on LensMEM but using the Skilling & Bryan MEM algorithm. Though its primary purpose is to find statistically acceptable lens models for lensed images and to reconstruct the surface brightness profile of the source, LENSVIEW can also be used for simpler tasks, such as projecting a given source through a lens model to generate a “true” image by conserving surface brightness. The user can specify complicated lens models based on one or more components, such as softened isothermal ellipsoids, point masses, exponential discs, and external shears; LENSVIEW generates a best-fitting source matching the observed data for each specific combination of model parameters.

[ascl:1108.009] LePHARE: Photometric Analysis for Redshift Estimate

LePHARE is a set of Fortran commands to compute photometric redshifts and to perform SED fitting. The latest version includes new features with FIR fitting and a more complete treatment of physical parameters and uncertainties based on PÉGASE and Bruzual & Charlot population synthesis models. The program is based on a simple chi2 fitting method between the theoretical and observed photometric catalogues. A simulation program is also available to generate realistic multi-colour catalogues taking into account observational effects.

[ascl:1711.018] LExTeS: Link Extraction and Testing Suite

LExTeS (Link Extraction and Testing Suite) extracts hyperlinks from PDF documents, tests the extracted links to see which are broken, and tabulates the results. Though written to support a particular set of PDF documents, the dataset and scripts can be edited for use on other documents.

[ascl:1804.024] LFlGRB: Luminosity function of long gamma-ray bursts

LFlGRB models the luminosity function (LF) of long Gamma Ray Bursts (lGRBs) by using a sample of Swift and Fermi lGRBs to re-derive the parameters of the Yonetoku correlation and self-consistently estimate pseudo-redshifts of all the bursts with unknown redshifts. The GRB formation rate is modeled as the product of the cosmic star formation rate and a GRB formation efficiency for a given stellar mass.

[ascl:1804.023] LFsGRB: Binary neutron star merger rate via the luminosity function of short gamma-ray bursts

LFsGRB models the luminosity function (LF) of short Gamma Ray Bursts (sGRBs) by using the available catalog data of all sGRBs detected through October 2017, estimating the luminosities via pseudo-redshifts obtained from the Yonetoku correlation, and then assuming a standard delay distribution between the cosmic star formation rate and the production rate of their progenitors. The data are fit well by both exponential cutoff powerlaw and broken powerlaw models. Using the derived parameters of these models along with conservative values of the jet opening angles seen from afterglow observations, the true rate of short GRBs is derived. Assuming a short GRB is produced from each binary neutron star merger (BNSM), the rate of gravitational wave (GW) detections from these mergers is derived for the past, present, and future configurations of the GW detector networks.

[ascl:1710.016] LGMCA: Local-Generalized Morphological Component Analysis

LGMCA (Local-Generalized Morphological Component Analysis) is an extension to GMCA (ascl:1710.015). Similarly to GMCA, it is a Blind Source Separation method which enforces sparsity. The novel aspect of LGMCA, however, is that the mixing matrix changes across pixels allowing LGMCA to deal with emissions sources which vary spatially. These IDL scripts compute the CMB map from WMAP and Planck data; running LGMCA on the WMAP9 temperature products requires the main script and a selection of mandatory files, algorithm parameters and map parameters.

[ascl:1712.016] LgrbWorldModel: Long-duration Gamma-Ray Burst World Model

LgrbWorldModel is written in Fortran 90 and attempts to model the population distribution of the long-duration class of Gamma-Ray Bursts (LGRBs) as detected by NASA's now-defunct Burst And Transient Source Experiment (BATSE) onboard the Compton Gamma Ray Observatory (CGRO). It is assumed that the population distribution of LGRBs is well fit by a multivariate log-normal distribution. The best-fit parameters of the distribution are then found by maximizing the likelihood of the observed data by BATSE detectors via a native built-in Adaptive Metropolis-Hastings Markov-Chain Monte Carlo (AMH-MCMC) sampler.

[ascl:1408.002] LIA: LWS Interactive Analysis

The Long Wavelength Spectrometer (LWS) was one of two complementary spectrometers on the Infrared Space Observatory (ISO). LIA (LWS Interactive Analysis) is used for processing data from the LWS. It provides access to the different processing steps, including visualization of intermediate products and interactive manipulation of the data at each stage.

[ascl:1206.009] Libimf

Libimf provides a collection of programming functions based on the general IMF-algorithm by Pflamm-Altenburg & Kroupa (2006).

[ascl:1502.016] libnova: Celestial mechanics, astrometry and astrodynamics library

libnova is a general purpose, double precision, celestial mechanics, astrometry and astrodynamics library. Among many other calculations, it can calculate aberration, apparent position, proper motion, planetary positions, orbit velocities and lengths, angular separation of bodies, and hyperbolic motion of bodies.

[ascl:1604.002] libpolycomp: Compression/decompression library

Libpolycomp compresses and decompresses one-dimensional streams of numbers by means of several algorithms. It is well-suited for time-ordered data acquired by astronomical instruments or simulations. One of the algorithms, called "polynomial compression", combines two widely-used ideas (namely, polynomial approximation and filtering of Fourier series) to achieve substantial compression ratios for datasets characterized by smoothness and lack of noise. Notable examples are the ephemerides of astronomical objects and the pointing information of astronomical telescopes. Other algorithms implemented in this C library are well known and already widely used, e.g., RLE, quantization, deflate (via libz) and Burrows-Wheeler transform (via libbzip2). Libpolycomp can compress the timelines acquired by the Planck/LFI instrument with an overall compression ratio of ~9, while other widely known programs (gzip, bzip2) reach compression ratios less than 1.5.
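
The polynomial-compression idea is simple to illustrate: fit each fixed-size chunk of a smooth timeline with a low-degree polynomial and store only the coefficients when the fit is within tolerance. The Python sketch below shows just this core idea with made-up chunk sizes and tolerances; it is not libpolycomp's C API, and it omits the library's filtering of residuals:

    import numpy as np

    def poly_compress(samples, chunk=64, deg=3, tol=1e-6):
        out, t = [], np.arange(chunk)
        for i in range(0, len(samples) - chunk + 1, chunk):
            y = samples[i:i + chunk]
            coef = np.polyfit(t, y, deg)
            if np.max(np.abs(np.polyval(coef, t) - y)) < tol:
                out.append(("poly", coef))   # deg+1 numbers replace a full chunk
            else:
                out.append(("raw", y))       # fall back to raw storage
        return out

    x = np.linspace(0.0, 1.0, 4096)
    blocks = poly_compress(np.sin(2 * np.pi * x))   # smooth, pointing-like signal
    print(sum(b[0] == "poly" for b in blocks), "of", len(blocks), "chunks compressed")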

[ascl:1612.003] libprofit: Image creation from luminosity profiles

libprofit is a C++ library for image creation based on different luminosity profiles. It offers fast and accurate two-dimensional integration for a useful number of profiles, including Sersic, Core-Sersic, broken-exponential, Ferrer, Moffat, empirical King, point-source and sky, with a simple mechanism for adding new profiles. libprofit provides a utility to read the model and profile parameters from the command-line and generate the corresponding image. It can output the resulting image as text values, a binary stream, or as a simple FITS file. It also provides a shared library exposing an API that can be used by any third-party application. R and Python interfaces are available: ProFit (ascl:1612.004) and PyProfit (ascl:1612.005).

[ascl:1010.020] Libpsht: Algorithms for Efficient Spherical Harmonic Transforms

Libpsht (or "library for Performing Spherical Harmonic Transforms") is a collection of algorithms for efficient conversion between spatial-domain and spectral-domain representations of data defined on the sphere. The package supports transforms of scalars as well as spin-1 and spin-2 quantities, and can be used for a wide range of pixelisations (including HEALPix, GLESP and ECP). It will take advantage of hardware features like multiple processor cores and floating-point vector operations, if available. Even without this additional acceleration, the employed algorithms are among the most efficient (in terms of CPU time as well as memory consumption) currently being used in the astronomical community.

The library is written in strictly standard-conforming C90, ensuring portability to many different hard- and software platforms, and allowing straightforward integration with codes written in various programming languages like C, C++, Fortran, Python etc.

Libpsht is distributed under the terms of the GNU General Public License (GPL) version 2.

Development on this project has ended; its successor is libsharp (ascl:1402.033).

[ascl:1402.033] libsharp: Library for spherical harmonic transforms

Libsharp is a collection of algorithms for efficient conversion between maps on the sphere and their spherical harmonic coefficients. It supports a wide range of pixelisations (including HEALPix, GLESP, and ECP). This library is a successor of libpsht (ascl:1010.020); it adds MPI support for distributed memory systems and SHTs of fields with arbitrary spin, and also supports new developments in CPU instruction sets like the Advanced Vector Extensions (AVX) or fused multiply-accumulate (FMA) instructions. libsharp is written in portable C99; it provides an interface accessible to other programming languages such as C++, Fortran, and Python.

[ascl:1403.004] Lightcone: Light-cone generating script

Lightcone works with simulated galaxy data stored in a relational database to rearrange the data into the shape of a light-cone; simulated galaxy data is expected to be in a box volume. The light-cone constructing script works with output from the SAGE semi-analytic model (ascl:1601.006), but will work with any other model that has galaxy positions (and other properties) saved per snapshot of the simulation volume distributed in time. The database configuration file is set up for the PostgreSQL RDBMS, but can be modified for use with any other SQL database.

[ascl:1408.012] LightcurveMC: An extensible lightcurve simulation program

LightcurveMC is a versatile and easily extended simulation suite for testing the performance of time series analysis tools under controlled conditions. It is designed to be highly modular, allowing new lightcurve types or new analysis tools to be introduced without excessive development overhead. The statistical tools are completely agnostic to how the lightcurve data is generated, and the lightcurve generators are completely agnostic to how the data will be analyzed. The use of fixed random seeds throughout guarantees that the program generates consistent results from run to run.

LightcurveMC can generate periodic light curves having a variety of shapes and stochastic light curves having a variety of correlation properties. It features two error models (Gaussian measurement and signal injection using a randomized sample of base light curves), testing of the C1 shape statistic, periodograms, ΔmΔt plots, autocorrelation function plots, peak-finding plots, and Gaussian process regression. The code is written in C++ and R.

[ascl:1812.013] Lightkurve: Kepler and TESS time series analysis in Python

Lightkurve analyzes astronomical flux time series data, in particular the pixels and light curves obtained by NASA’s Kepler, K2, and TESS exoplanet missions. This community-developed Python package is designed to be user friendly to lower the barrier for students, astronomers, and citizen scientists interested in analyzing data from these missions. Lightkurve provides easy tools to download, inspect, and analyze time series data and its documentation is supported by a large syllabus of tutorials.
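
A typical workflow strings together search, download, detrending, and folding. The snippet below follows the Lightkurve 2.x API (earlier releases used, e.g., search_lightcurvefile instead of search_lightcurve); the target and period are just an example:

    import lightkurve as lk

    # Download one quarter of Kepler photometry for a known planet host.
    lc = lk.search_lightcurve("Kepler-10", mission="Kepler", quarter=3).download()

    # Normalize, remove long-term trends, and fold on the planet's period.
    flat = lc.normalize().flatten(window_length=401)
    folded = flat.fold(period=0.837495)   # Kepler-10b period in days
    folded.scatter()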

[ascl:1711.009] Lightning: SED Fitting Package

Lightning is a spectral energy distribution (SED) fitting procedure that quickly and reliably recovers star formation history (SFH) and extinction parameters. The SFH is modeled as discrete steps in time. The code consists of a fully vectorized inversion algorithm to determine SFH step intensities and combines this with a grid-based approach to determine three extinction parameters.

[ascl:1107.012] LIME: Flexible, Non-LTE Line Excitation and Radiation Transfer Method for Millimeter and Far-infrared Wavelengths

LIME solves the molecular and atomic excitation and radiation transfer problem in a molecular gas and predicts emergent spectra. The code works in arbitrary three-dimensional geometry using unstructured Delaunay lattices for the transport of photons. Various physical models can be used as input, ranging from analytical descriptions through tabulated models to SPH simulations. To generate the Delaunay grid, the input model is sampled randomly, with the sample probability weighted by the molecular density and other parameters, so that the average grid point separation scales with the local opacity. Slow convergence of opaque models thereby becomes tractable; once convergence between the level populations, the radiation field, and the point separation has been obtained, the grid is ray-traced to produce images that can readily be compared to observations. LIME is particularly well suited for modeling ALMA data because of the high dynamic range in scales that can be resolved using this type of grid, and it can furthermore deal with overlapping lines of multiple molecular and atomic species.

[ascl:1710.023] LIMEPY: Lowered Isothermal Model Explorer in PYthon

LIMEPY solves distribution function (DF) based lowered isothermal models. It solves Poisson's equation based on the input parameters and offers fast solutions for isotropic/anisotropic and single/multi-mass models, normalized DF values, density and velocity moments, projected properties, and discrete samples.

[ascl:1504.019] LineProf: Line Profile Indicators

LineProf implements a series of line-profile analysis indicators and evaluates their correlation with RV data. It receives as input a list of cross-correlation functions and an optional list of associated RVs. It evaluates the line profile according to the indicators and compares it with the RV it computes if no associated RVs are provided, or with the provided RVs otherwise.

[ascl:1602.006] LIRA: LInear Regression in Astronomy

LIRA (LInear Regression in Astronomy) performs Bayesian linear regression that accounts for heteroscedastic errors in both the independent and the dependent variables, intrinsic scatters (in both variables), time evolution of slopes, normalization and scatters, Malmquist and Eddington bias, and break of linearity. The posterior distribution of the regression parameters is sampled with a Gibbs method exploiting the JAGS (ascl:1209.002) library.

[ascl:1601.007] LIRA: Low-counts Image Reconstruction and Analysis

LIRA (Low-counts Image Reconstruction and Analysis) deconvolves any unknown sky components, provides a fully Poisson 'goodness-of-fit' for any best-fit model, and quantifies uncertainties on the existence and shape of unknown sky. It does this without resorting to χ2 or rebinning, which can lose high-resolution information. It is written in R and requires the FITSio package.

[ascl:1112.009] LISACode: A scientific simulator of LISA

LISACode is a simulator of the LISA mission. Its ambition is to achieve a new degree of sophistication, allowing it to map, as closely as possible, the impact of the different subsystems on the measurements. It is also a useful tool for generating realistic data, including several kinds of sources (massive black hole binaries, EMRIs, cosmic string cusps, stochastic backgrounds, etc.), and for preparing their analysis. It is fully integrated into the Mock LISA Data Challenge. LISACode is not a detailed simulator at the engineering level, but rather a tool whose purpose is to bridge the gap between the basic principles of LISA and a future, sophisticated end-to-end simulator.

[ascl:1902.005] LiveData: Data reduction pipeline

LiveData is a multibeam single-dish data reduction system for bandpass calibration and gridding. It is used for processing Parkes multibeam and Mopra data.

[submitted] Lizard: an extensible Cyclomatic Complexity Analyzer

Lizard is an extensible Cyclomatic Complexity Analyzer for many imperative programming languages including C/C++.

[ascl:1706.005] LMC: Logarithmantic Monte Carlo

LMC is a Markov Chain Monte Carlo engine in Python that implements adaptive Metropolis-Hastings and slice sampling, as well as the affine-invariant method of Goodman & Weare, in a flexible framework. It can be used for simple problems, but the main use case is problems where expensive likelihood evaluations are provided by less flexible third-party software, which benefit from parallelization across many nodes at the sampling level. The parallel/adaptive methods use communication through MPI, or alternatively by writing/reading files, and mostly follow the approaches pioneered by CosmoMC (ascl:1106.025).

[ascl:1606.014] Lmfit: Non-Linear Least-Square Minimization and Curve-Fitting for Python

Lmfit provides a high-level interface to non-linear optimization and curve-fitting problems for Python. Lmfit builds on and extends many of the optimization algorithms of scipy.optimize, especially the Levenberg-Marquardt method from optimize.leastsq. Its enhancements to optimization and data fitting problems include using Parameter objects instead of plain floats as variables, the ability to easily change fitting algorithms, improved estimation of confidence intervals, and curve fitting with the Model class. Lmfit includes many pre-built models for common lineshapes.
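
For example, fitting a noisy Gaussian line with the Model class takes only a few lines, using lmfit's built-in GaussianModel and its documented guess/fit/fit_report methods (the data here are synthetic):

    import numpy as np
    from lmfit.models import GaussianModel

    rng = np.random.default_rng(7)
    x = np.linspace(-5.0, 5.0, 201)
    y = 3.0 * np.exp(-0.5 * ((x - 0.4) / 1.2) ** 2) + 0.1 * rng.standard_normal(x.size)

    model = GaussianModel()
    params = model.guess(y, x=x)    # data-driven starting Parameter values
    result = model.fit(y, params, x=x)
    print(result.fit_report())      # best-fit values with uncertainties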

[submitted] loci: Smooth Cubic Multivariate Local Interpolations

loci is a shared library for interpolations in up to 4 dimensions. It is written in C and can be used with C/C++, Python, and other languages. To calculate the coefficients of the cubic polynomial, only local values are used: the data itself and all combinations of first-order derivatives, i.e., in 2D, f_x, f_y, and f_xy. This is in contrast to splines, whose coefficients are calculated not from derivatives but from non-local data, which can over-smooth the result.

[ascl:1608.018] LORENE: Spectral methods differential equations solver

LORENE (Langage Objet pour la RElativité NumériquE) solves various problems arising in numerical relativity, and more generally in computational astrophysics. It is a set of C++ classes and provides tools to solve partial differential equations by means of multi-domain spectral methods. LORENE classes implement basic structures such as arrays and matrices, but also abstract mathematical objects, such as tensors, and astrophysical objects, such as stars and black holes.

[ascl:1309.003] LOSP: Liège Orbital Solution Package

LOSP is a FORTRAN77 numerical package that computes the orbital parameters of spectroscopic binaries. The package deals with SB1 and SB2 systems and is able to adjust either circular or eccentric orbits through a weighted fit.

[ascl:1308.002] LOSSCONE: Capture rates of stars by a supermassive black hole

LOSSCONE computes the rates of capture of stars by supermassive black holes. It uses stationary and time-dependent solutions of the Fokker-Planck equation describing the evolution of the distribution function of stars due to two-body relaxation, and it works for arbitrary spherical and axisymmetric galactic models provided by the user in the form of M(r), the cumulative mass as a function of radius.

[ascl:1010.038] Low Resolution Spectral Templates For AGNs and Galaxies From 0.03 -- 30 microns

We present a set of low resolution empirical SED templates for AGNs and galaxies in the wavelength range from 0.03 to 30 microns, based on the multi-wavelength photometric observations of the NOAO Deep-Wide Field Survey Bootes field and the spectroscopic observations of the AGN and Galaxy Evolution Survey. Our training sample comprises 14448 galaxies in the redshift range 0<~z<~1 and 5347 likely AGNs in the range 0<~z<~5.58. We use our templates to determine photometric redshifts for galaxies and AGNs. While they are relatively accurate for galaxies, their accuracy for AGNs is a strong function of the luminosity ratio between the AGN and galaxy components. Somewhat surprisingly, the relative luminosities of the AGN and its host are well determined even when the photometric redshift is significantly in error. We also use our templates to study the mid-IR AGN selection criteria developed by Stern et al. (2005) and Lacy et al. (2004). We find that the Stern et al. (2005) criteria suffer from significant incompleteness when there is a strong host galaxy component and at z =~ 4.5, when the broad Halpha emission line is redshifted into the [3.6] band, but that they are little contaminated by low and intermediate redshift galaxies. The Lacy et al. (2004) criterion is not affected by incompleteness at z =~ 4.5 and is somewhat less affected by strong galaxy host components, but is heavily contaminated by low redshift star forming galaxies. Finally, we use our templates to predict the color-color distribution of sources in the upcoming WISE mission and define a color criterion to select AGNs analogous to those developed for IRAC photometry. We estimate that between 640,000 and 1,700,000 AGNs will be identified by these criteria, but they will have serious completeness problems for z >~ 3.4.

[ascl:1501.007] LP-VIcode: La Plata Variational Indicators Code

LP-VIcode computes variational chaos indicators (CIs) quickly and easily. The following CIs are included:

  • Lyapunov Indicators, also known as Lyapunov Characteristic Exponents, Lyapunov Characteristic Numbers or Finite Time Lyapunov Characteristic Numbers (LIs)
  • Mean Exponential Growth factor of Nearby Orbits (MEGNO)
  • Slope Estimation of the largest Lyapunov Characteristic Exponent (SElLCE)
  • Smaller ALignment Index (SALI)
  • Generalized ALignment Index (GALI)
  • Fast Lyapunov Indicator (FLI)
  • Orthogonal Fast Lyapunov Indicator (OFLI)
  • Spectral Distance (SD)
  • dynamical Spectra of Stretching Numbers (SSNs)
  • Relative Lyapunov Indicator (RLI)

[ascl:1902.002] LPNN: Limited Post-Newtonian N-body code for collisionless self-gravitating systems

The Limited Post-Newtonian N-body code (LPNN) simulates post-Newtonian interactions between a massive object and many low-mass objects. The interaction between the massive object and the low-mass objects is calculated by post-Newtonian approximation, and the interaction between low-mass objects is calculated by Newtonian gravity. This code is based on the sticky9 code, and can be accelerated with the use of a GPU in a CUDA (version 4.2 or earlier) environment.

[ascl:1306.012] LRG DR7 Likelihood Software

This software computes likelihoods for the Luminous Red Galaxies (LRG) data from the Sloan Digital Sky Survey (SDSS). It includes a patch to the existing CAMB software (the February 2009 release) to calculate the theoretical LRG halo power spectrum for various models. The code is written in Fortran 90 and has been tested with the Intel Fortran 90 and GFortran compilers.

[ascl:1602.005] LRGS: Linear Regression by Gibbs Sampling

LRGS (Linear Regression by Gibbs Sampling) implements a Gibbs sampler to solve the problem of multivariate linear regression with uncertainties in all measured quantities and intrinsic scatter. LRGS extends an algorithm by Kelly (2007) that used Gibbs sampling for performing linear regression in fairly general cases in two ways: generalizing the procedure for multiple response variables, and modeling the prior distribution of covariates using a Dirichlet process.

[ascl:1807.033] LSC: Supervised classification of time-series variable stars

LSC (LINEAR Supervised Classification) trains a number of classifiers, including random forest and K-nearest neighbor, to classify variable stars and compares the results to determine which classifier is most successful. Written in R, the package includes anomaly detection code for testing the application of the selected classifier to new data, thus enabling the creation of highly reliable data sets of classified variable stars.

[ascl:1209.003] LSD: Large Survey Database framework

The Large Survey Database (LSD) is a Python framework and DBMS for distributed storage, cross-matching and querying of large survey catalogs (>10^9 rows, >1 TB). The primary driver behind its development is the analysis of Pan-STARRS PS1 data. It is specifically optimized for fast queries and parallel sweeps of positionally and temporally indexed datasets. It transparently scales to more than 10^2 nodes, and can be made to function in "shared nothing" architectures.

[ascl:1612.002] LSDCat: Line Source Detection and Cataloguing Tool

LSDCat is a conceptually simple but robust and efficient detection package for emission lines in wide-field integral-field spectroscopic datacubes. The detection utilizes a 3D matched-filtering approach for compact single emission line objects. Furthermore, the software measures fluxes and extents of detected lines. LSDCat is implemented in Python, with a focus on fast processing of large data-volumes.
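
The 3D matched-filter step is straightforward to sketch: cross-correlate the cube with a template shaped like a compact emission line and look for peaks in the resulting signal-to-noise cube. The Python sketch below uses a separable Gaussian template and an injected fake source purely for illustration; LSDCat's actual filtering propagates the variance cube properly rather than estimating the noise from the filtered data:

    import numpy as np
    from scipy.ndimage import gaussian_filter

    rng = np.random.default_rng(3)
    cube = rng.standard_normal((100, 50, 50))   # (wavelength, y, x): pure noise
    cube[40:43, 24:27, 24:27] += 5.0            # injected compact line emitter

    # Cross-correlate with a 3D Gaussian: spectral sigma in layers, spatial
    # sigma in pixels, matched to the expected line width and seeing.
    filtered = gaussian_filter(cube, sigma=(1.5, 1.0, 1.0))
    snr = filtered / filtered.std()             # crude noise normalization
    print(np.unravel_index(np.argmax(snr), snr.shape))   # near (41, 25, 25)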

[ascl:1505.012] LSSGALPY: Visualization of the large-scale environment around galaxies on the 3D space

LSSGALPY provides visualization tools to compare the 3D positions of a sample (or samples) of isolated systems with the locations of the galaxies in the large-scale structure (LSS) of their local and/or large-scale environments. The interactive tools use different projections in the 3D space (right ascension, declination, and redshift) to study the relation of the galaxies to the LSS. The tools permit visualization of the locations of the galaxies for different redshift values and ranges; the relationship of isolated galaxies, isolated pairs, and isolated triplets to the galaxies in the LSS can be visualized for different declination values and ranges.

[ascl:1312.006] LTL: The Little Template Library

LTL provides dynamic arrays of up to 7-dimensions, subarrays and slicing, support for fixed-size vectors and matrices including basic linear algebra operations, expression templates-based evaluation, and I/O facilities for ascii and FITS format files. Utility classes for command-line processing and configuration-file processing are provided as well.

[ascl:1404.001] LTS_LINEFIT & LTS_PLANEFIT: LTS fit of lines or planes

LTS_LINEFIT and LTS_PLANEFIT are IDL programs to robustly fit lines and planes to data with intrinsic scatter. The code combines the Least Trimmed Squares (LTS) robust technique, proposed by Rousseeuw (1984) and optimized in Rousseeuw & Van Driessen (2006), with a least-squares fitting algorithm that allows for intrinsic scatter. This method makes the fit converge to the correct solution even in the presence of a large number of catastrophic outliers, where the much simpler σ-clipping approach can converge to the wrong solution.
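
The LTS idea itself (minimize the sum of the h smallest squared residuals via concentration steps) fits in a short Python sketch. This illustrates only the trimming logic, not the IDL programs' intrinsic-scatter treatment; all parameter choices here are arbitrary:

    import numpy as np

    def lts_line(x, y, frac=0.8, ntrials=50, seed=5):
        rng = np.random.default_rng(seed)
        h = int(frac * len(x))                  # number of points to trust
        best_score, best_coef = np.inf, None
        for _ in range(ntrials):
            start = rng.choice(len(x), size=2, replace=False)
            coef = np.polyfit(x[start], y[start], 1)   # elemental start
            for _ in range(10):                 # concentration steps
                resid2 = (y - np.polyval(coef, x)) ** 2
                keep = np.argsort(resid2)[:h]   # h best-fitting points
                coef = np.polyfit(x[keep], y[keep], 1)
            score = np.sort((y - np.polyval(coef, x)) ** 2)[:h].sum()
            if score < best_score:
                best_score, best_coef = score, coef
        return best_coef                        # (slope, intercept)

    rng = np.random.default_rng(5)
    x = rng.uniform(0.0, 10.0, 200)
    y = 2.0 * x + 1.0 + 0.3 * rng.standard_normal(200)
    y[:40] += rng.uniform(5.0, 20.0, 40)        # 20% catastrophic outliers
    print(lts_line(x, y))                       # close to (2.0, 1.0)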

[ascl:1201.016] LumFunc: Luminosity Function Modeling

LumFunc is a numerical code to model the luminosity function based on central galaxy luminosity-halo mass and total galaxy luminosity-halo mass relations. The code can handle rest b_J-band (2dFGRS), r'-band (SDSS), and K-band luminosities, and any redshift, with redshift dependences specified by the user. It separates the luminosity function (LF) into conditional luminosity functions, the LF as a function of halo mass, and galaxy types. By specifying a narrow mass range, the code will return the conditional luminosity functions. The code returns luminosity functions for galaxy types as well (broadly divided into early-type and late-type), and also models the cluster luminosity function, either mass-averaged or for individual clusters.

[ascl:1803.012] LWPC: Long Wavelength Propagation Capability

Long Wavelength Propagation Capability (LWPC), written as a collection of separate programs that perform unique actions, generates geographical maps of signal availability for coverage analysis. The program makes it easy to set up these displays by automating most of the required steps. The user specifies the transmitter location and frequency, the orientation of the transmitting and receiving antennae, and the boundaries of the operating area. The program automatically selects paths along geographic bearing angles to ensure that the operating area is fully covered. The diurnal conditions and other relevant geophysical parameters are then determined along each path. After the mode parameters along each path are determined, the signal strength along each path is computed. The signal strength along the paths is then interpolated onto a grid overlying the operating area. The final grid of signal strength values is used to display the signal-strength in a geographic display. The LWPC uses character strings to control programs and to specify options. The control strings have the same meaning and use among all the programs.

[ascl:1607.018] LZIFU: IDL emission line fitting pipeline for integral field spectroscopy data

LZIFU (LaZy-IFU) is an emission line fitting pipeline for integral field spectroscopy (IFS) data. Written in IDL, the pipeline turns IFS data into 2D emission line flux and kinematic maps for further analysis. LZIFU has been applied to and tested extensively on various IFS data, including the SAMI Galaxy Survey, the Wide-Field Spectrograph (WiFeS), the CALIFA survey, the S7 survey and the MUSE instrument on the VLT.

[ascl:1209.006] macula: Rotational modulations in the photometry of spotted stars

Photometric rotational modulations due to starspots remain the most common and accessible way to study stellar activity. Modelling rotational modulations allows one to invert the observations into several basic parameters, such as the rotation period, spot coverage, stellar inclination and differential rotation rate. The most widely used analytic model for this inversion comes from Budding (1977) and Dorren (1987), who considered circular, grey starspots for a linearly limb-darkened star. That model is extended to be more suitable for the analysis of high-precision photometry such as that by Kepler. macula, a Fortran 90 code, provides several improvements, such as non-linear limb darkening of the star and spot, a single-domain analytic function, partial derivatives for all input parameters, temporal partial derivatives, diluted light compensation, instrumental offset normalisations, differential rotation, starspot evolution, and predictions of transit depth variations due to unocculted spots. The inclusion of non-linear limb darkening means macula has a maximum photometric error an order of magnitude less than that of Dorren (1987) for Sun-like stars observed in the Kepler bandpass. The code executes three orders of magnitude faster than comparable numerical codes, making it well suited for inference problems.

[ascl:1306.010] MADCOW: Microwave Anisotropy Dataset Computational softWare

MADCOW is a set of parallelized programs written in ANSI C and Fortran 77 that perform a maximum likelihood analysis of visibility data from interferometers observing the cosmic microwave background (CMB) radiation. This software has been used to produce power spectra of the CMB with the Very Small Array (VSA) telescope.

[ascl:1712.012] MadDM: Computation of dark matter relic abundance

MadDM computes dark matter relic abundance and dark matter nucleus scattering rates in a generic model. The code is based on the existing MadGraph 5 architecture and as such is easily integrable into any MadGraph collider study. A simple Python interface offers a level of user-friendliness characteristic of MadGraph 5 without sacrificing functionality. MadDM is able to calculate the dark matter relic abundance in models which include a multi-component dark sector, resonance annihilation channels and co-annihilations. The direct detection module of MadDM calculates spin-independent/spin-dependent dark matter-nucleon cross sections and differential recoil rates as a function of recoil energy, angle and time. The code provides a simplified simulation of detector effects for a wide range of target materials and volumes.

[ascl:1110.018] MADmap: Fast Parallel Maximum Likelihood CMB Map Making Code

MADmap produces maximum-likelihood images of the sky from time-ordered data which include correlated noise, such as those gathered by Cosmic Microwave Background (CMB) experiments. It works efficiently on platforms ranging from small workstations to the most massively parallel supercomputers. Map-making is a critical step in the analysis of all CMB data sets, and the maximum-likelihood approach is the most accurate and widely applicable algorithm; however, it is a computationally challenging task. This challenge will only increase with the next generation of ground-based, balloon-borne and satellite CMB polarization experiments. The faintness of the B-mode signal that these experiments seek to measure requires them to gather enormous data sets. MADmap has the ability to address problems typically encountered in the analysis of realistic CMB data sets. The massively parallel and distributed implementation is detailed and scaling complexities are given for the resources required. MADmap is capable of analyzing the largest data sets now being collected on computing resources currently available.

[ascl:1010.044] MAESTRO: An Adaptive Low Mach Number Hydrodynamics Algorithm for Stellar Flows

Many astrophysical phenomena are highly subsonic, requiring specialized numerical methods suitable for long-time integration. In a series of earlier papers we described the development of MAESTRO, a low Mach number stellar hydrodynamics code that can be used to simulate long-time, low-speed flows that would be prohibitively expensive to model using traditional compressible codes. MAESTRO is based on an equation set derived using low Mach number asymptotics; this equation set does not explicitly track acoustic waves and thus allows a significant increase in the time step. MAESTRO is suitable for two- and three-dimensional local atmospheric flows as well as three-dimensional full-star flows. Here, we continue the development of MAESTRO by incorporating adaptive mesh refinement (AMR). The primary difference between MAESTRO and other structured grid AMR approaches for incompressible and low Mach number flows is the presence of the time-dependent base state, whose evolution is coupled to the evolution of the full solution. We also describe how to incorporate the expansion of the base state for full-star flows, which involves a novel mapping technique between the one-dimensional base state and the Cartesian grid, as well as a number of overall improvements to the algorithm. We examine the efficiency and accuracy of our adaptive code, and demonstrate that it is suitable for further study of our initial scientific application, the convective phase of Type Ia supernovae.

[ascl:1709.010] MagIC: Fluid dynamics in a spherical shell simulator

MagIC simulates fluid dynamics in a spherical shell. It solves the Navier-Stokes equation including the Coriolis force, optionally coupled with an induction equation for magnetohydrodynamics (MHD), a temperature (or entropy) equation, and an equation for chemical composition under both the anelastic and the Boussinesq approximations. MagIC uses either Chebyshev polynomials or finite differences in the radial direction and spherical harmonic decomposition in the azimuthal and latitudinal directions. The time-stepping scheme relies on a semi-implicit Crank-Nicolson scheme for the linear terms of the MHD equations and an Adams-Bashforth scheme for the non-linear terms and the Coriolis force.
