ASCL.net

Astrophysics Source Code Library

Making codes discoverable since 1999

Browsing Codes

Results 751-1000 of 1932 (1899 ASCL, 33 submitted)

[ascl:1106.015] OrbFit: Software to Determine Orbits of Asteroids

OrbFit is a software system that computes the orbits of asteroids starting from observations, propagates these orbits, and predicts future (and past) positions on the celestial sphere. It can be used to find a well-known asteroid, recover a lost one, attribute a small group of observations, identify two orbits with each other, study future (and/or past) close approaches to Earth and thus assess the risk of an impact, and more.

[ascl:1307.016] orbfit: Orbit fitting software

Orbfit determines positions and orbital elements, and associated uncertainties, of outer solar system planets. The orbit-fitting procedure is greatly streamlined compared with traditional methods because acceleration can be treated as a perturbation to the inertial motion of the body. Orbfit quickly and accurately calculates orbital elements and ephemerides and their associated uncertainties for targets ≳ 10 AU from the Sun and produces positional estimates and uncertainty ellipses even in the face of the substantial degeneracies of short-arc orbit fits; the sole a priori assumption is that the orbit should be bound or nearly so.

[ascl:1702.001] ORBE: Orbital integrator for educational purposes

ORBE performs numerical integration of an arbitrary planetary system composed of a central star and up to 100 planets and minor bodies. ORBE calculates the orbital evolution of a system of bodies by computing the time evolution of their orbital elements. It is easy to use and is suitable for educational use by undergraduate students in the classroom as a first approach to orbital integrators.

[ascl:1210.024] ORBADV: ORBital ADVection by interpolation

ORBADV adopts a ZEUS-like scheme to solve magnetohydrodynamic equations of motion in a shearing sheet. The magnetic field is discretized on a staggered mesh, and magnetic field variables represent fluxes through zone faces. The code uses orbital advection to ensure fast and accurate integration in a large shearing box.

[ascl:1310.001] ORAC-DR: Astronomy data reduction pipeline

ORAC-DR is a generic data reduction pipeline infrastructure; it includes specific data processing recipes for a number of instruments. It is used at the James Clerk Maxwell Telescope, United Kingdom Infrared Telescope, AAT, and LCOGT. This pipeline runs at the JCMT Science Archive hosted by CADC to generate near-publication quality data products; the code has been in use since 1998.

[ascl:1803.013] optBINS: Optimal Binning for histograms

optBINS (optimal binning) determines the optimal number of bins in a uniform bin-width histogram by deriving the posterior probability for the number of bins in a piecewise-constant density model after assigning a multinomial likelihood and a non-informative prior. The maximum of the posterior probability occurs at a point where the prior probability and the joint likelihood are balanced. The interplay between these opposing factors effectively implements Occam's razor by selecting the simplest model that best describes the data.
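
As a sketch of the posterior being maximized (following Knuth's derivation of this piecewise-constant model; optBINS is the reference implementation, and the code below is not it), the log posterior for M equal-width bins can be written directly in terms of the bin counts:

    import numpy as np
    from scipy.special import gammaln

    def log_posterior_nbins(data, m):
        """Knuth-style log posterior for m equal-width bins, up to a constant."""
        n = len(data)
        counts, _ = np.histogram(data, bins=m)
        return (n * np.log(m)
                + gammaln(m / 2.0) - m * gammaln(0.5)
                - gammaln(n + m / 2.0)
                + gammaln(counts + 0.5).sum())

    def optimal_nbins(data, m_max=200):
        """Bin count at the posterior maximum."""
        ms = np.arange(1, m_max + 1)
        return ms[np.argmax([log_posterior_nbins(data, m) for m in ms])]

    print(optimal_nbins(np.random.standard_normal(1000)))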

[submitted] Opik Collision Probability

The Opik method gives the mean probability of collision of a small body with a given planet. It is a statistical value valid for an orbit with given (a,e,i) and undefined argument of perihelion. In some cases, the planet can eject the small body from the solar system; in these cases, the program estimates the mean time to ejection. The Opik method does not take into account perturbers other than the planet considered, so it only provides an idea of the timescales involved.

[ascl:1411.004] OPERA: Open-source Pipeline for Espadons Reduction and Analysis

OPERA (Open-source Pipeline for Espadons Reduction and Analysis) is an open-source collaborative software reduction pipeline for ESPaDOnS data. ESPaDOnS is a bench-mounted high-resolution echelle spectrograph and spectro-polarimeter designed to obtain a complete optical spectrum (from 370 to 1,050 nm) in a single exposure with a mode-dependent resolving power between 68,000 and 81,000. OPERA is fully automated, calibrates on two-dimensional images and reduces data to produce one-dimensional intensity and polarimetric spectra. Spectra are extracted using an optimal extraction algorithm. Though designed for CFHT ESPaDOnS data, the pipeline is extensible to other echelle spectrographs.

[ascl:1509.009] OPERA: Objective Prism Enhanced Reduction Algorithms

OPERA (Objective Prism Enhanced Reduction Algorithms) automatically analyzes astronomical images using the objective-prism (OP) technique to register thousands of low resolution spectra in large areas. It detects objects in an image, extracts one-dimensional spectra, and identifies emission line features. The main advantages of this method are: 1) avoiding the subjectivity inherent in the visual inspection used in past studies; and 2) obtaining physical parameters without follow-up spectroscopy.

[ascl:1502.002] OpenOrb: Open-source asteroid orbit computation software

OpenOrb (OOrb) contains tools for rigorously estimating the uncertainties resulting from the inverse problem of computing orbital elements using scarce astrometry. It uses the least-squares method and also contains both Monte-Carlo (MC) and Markov-Chain MC versions of the statistical ranging method. Ranging obtains sampled, non-Gaussian orbital-element probability-density functions and is optimized for cases where the amount of astrometry is scarce or spans a relatively short time interval.

[ascl:1604.001] OpenMHD: Godunov-type code for ideal/resistive magnetohydrodynamics (MHD)

OpenMHD is a Godunov-type finite-volume code for ideal/resistive magnetohydrodynamics (MHD). It is written in Fortran 90 and is parallelized by using MPI-2 and OpenMP. The code was originally developed for studying magnetic reconnection problems and has been made publicly available in the hope that others may find it useful.

[ascl:1806.018] OMEGA: One-zone Model for the Evolution of GAlaxies

OMEGA (One-zone Model for the Evolution of GAlaxies) calculates the global chemical evolution trends of galaxies. From an input star formation history, it uses SYGMA to create, as a function of time, multiple simple stellar populations with different masses, ages, and initial compositions. OMEGA offers several prescriptions for modeling the star formation efficiency and the evolution of galactic inflows and outflows. OMEGA is part of the NuGrid (ascl:1610.015) chemical evolution package.
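
A minimal sketch of the underlying bookkeeping (with hypothetical function names, not OMEGA's API): a one-zone model convolves the input star formation history with per-unit-mass SSP ejecta, so each timestep forms an SSP whose contribution is evaluated at its age.

    import numpy as np

    def cumulative_ejecta(t_grid, sfr, ssp_cum_yield):
        """Sum SSP contributions over an input star formation history.

        Each timestep forms an SSP of mass SFR*dt; its cumulative ejecta of
        some species at age (t - t_form) comes from a per-unit-mass table
        ssp_cum_yield(age), here any callable (e.g., interpolated SYGMA output).
        """
        dt = np.diff(t_grid, prepend=0.0)
        total = np.zeros_like(t_grid)
        for i, t_form in enumerate(t_grid):
            age = np.maximum(t_grid - t_form, 0.0)
            total += np.where(t_grid >= t_form,
                              sfr[i] * dt[i] * ssp_cum_yield(age), 0.0)
        return total

    t = np.linspace(0.0, 1.0e10, 200)                          # yr
    sfr = np.full_like(t, 2.0)                                 # Msun/yr, constant SFH
    toy_yield = lambda age: 0.01 * (1.0 - np.exp(-age / 1e9))  # Msun per Msun formed (toy)
    print(cumulative_ejecta(t, sfr, toy_yield)[-1])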

[ascl:1601.004] Odyssey: Ray tracing and radiative transfer in Kerr spacetime

Odyssey is a GPU-based General Relativistic Radiative Transfer (GRRT) code for computing images and/or spectra in the Kerr metric describing the spacetime around a rotating black hole. Odyssey is implemented in CUDA C/C++. For flexibility, the namespace structure in C++ is used for different tasks; the two default tasks presented in the source code are the redshift of a Keplerian disk and the image of a Keplerian rotating shell at 340 GHz. Odyssey_Edu, an educational software package for visualizing the ray trajectories in the Kerr spacetime that uses Odyssey, is also available.

[ascl:1810.010] ODTBX: Orbit Determination Toolbox

ODTBX (Orbit Determination Toolbox) provides orbit determination analysis, advanced mission simulation, and analysis for concept exploration, proposal, early design phase, and/or rapid design center environments. The core ODTBX functionality is realized through a set of estimation commands that incorporate Monte Carlo data simulation, linear covariance analysis, and measurement processing at a generic level; its functions and utilities are combined in a flexible architecture to allow modular development of navigation algorithms and simulations. ODTBX is written in Matlab and Java.

[ascl:1010.048] OCTGRAV: Sparse Octree Gravitational N-body Code on Graphics Processing Units

Octgrav is a very fast tree-code which runs on massively parallel Graphical Processing Units (GPUs) with the NVIDIA CUDA architecture. The algorithms are based on parallel-scan and sort methods. The tree construction and calculation of multipole moments are carried out on the host CPU, while the force calculation, which consists of tree walks and evaluation of interaction lists, is carried out on the GPU. In this way, a sustained performance of about 100 GFLOP/s and data transfer rates of about 50 GB/s are achieved. It takes about a second to compute forces on a million particles with an opening angle of θ ≈ 0.5.

To test the performance and feasibility, we implemented the algorithms in CUDA in the form of a gravitational tree-code which runs completely on the GPU. The tree construction and traverse algorithms are portable to many-core devices which have support for the CUDA or OpenCL programming languages. The gravitational tree-code outperforms tuned CPU code during tree construction and shows an overall performance improvement of more than a factor of 20, resulting in a processing rate of more than 2.8 million particles per second.

The code has a convenient user interface and is freely available for use.

[ascl:1812.018] OctApps: Octave functions for continuous gravitational-wave data analysis

The OctApps library provides various functions, written in Octave, for performing searches for the weak signatures of continuous gravitational waves from rapidly-rotating neutron stars amidst the instrumental noise of the LIGO and Virgo detectors.

[ascl:1901.002] OCFit: Python package for fitting of O-C diagrams

OCFit fits and analyzes O-C diagrams using genetic algorithms and Markov chain Monte Carlo methods. The MCMC method is used to obtain robust estimates of the errors of the fitted parameters. Unlike some other fitting routines, OCFit does not need initial values of the fitted parameters. An intuitive graphic user interface is provided for ease of fitting, and nine common models of periodic O-C changes are included.

[submitted] OCD: O'Connell Effect Detector using Push-Pull Learning

OCD (O'Connell Effect Detector using Push-Pull Learning) detects eclipsing binaries that demonstrate the O'Connell Effect. This time-domain signature extraction methodology uses a supporting supervised pattern detection algorithm. The methodology maps stellar variable observations (time-domain data) to a new representation known as Distribution Fields (DF), whose properties enable efficient handling of issues such as irregular sampling and multiple values per time instance. Using this representation, the code applies a metric learning technique directly on the DF space capable of specifically identifying the stars of interest; the metric is tuned on a set of labeled eclipsing binary data from the Kepler survey, targeting particular systems exhibiting the O'Connell Effect. This code is useful for large-scale data volumes such as those expected from next-generation telescopes such as LSST.

[ascl:1011.017] Occultation and Microlensing

Occultation and microlensing are different limits of the same phenomenon of one body passing in front of another body. We derive a general exact analytic expression which describes both microlensing and occultation in the case of spherical bodies with a source of uniform brightness and a non-relativistic foreground body. We also compute numerically the case of a source with quadratic limb-darkening. In the limit that the gravitational deflection angle is comparable to the angular size of the foreground body, both microlensing and occultation occur as the objects align. Such events may be used to constrain the size ratio of the lens and source stars, the limb-darkening coefficients of the source star, and the surface gravity of the lens star (if the lens and source distances are known). Applications of these results to microlensing during transits in binaries and to giant-star microlensing are discussed. These results unify the microlensing and occultation limits and should be useful for rapid model fitting of microlensing, eclipse, and "microccultation" events.

[ascl:1307.008] Obit: Radio Astronomy Data Handling

Obit is a group of software packages for handling radio astronomy data, especially interferometric and single dish OTF imaging. Obit is primarily an environment in which new data processing algorithms can be developed and tested, but it can also be used for production processing of a certain range of scientific problems. The package supports both prepackaged, compiled tasks and a python interface to the major class functionality to allow rapid prototyping using python scripts; it allows access to multiple disk-resident data formats, in particular to either AIPS disk data or FITS files. Obit applications are interoperable with Classic AIPS, and the ObitTalk python interface gives access to AIPS tasks as well as Obit libraries and tasks.

[ascl:1608.012] OBERON: OBliquity and Energy balance Run on N-body systems

OBERON (OBliquity and Energy balance Run on N-body systems) models the climate of Earthlike planets under the effects of an arbitrary number and arrangement of other bodies, such as stars, planets and moons. The code, written in C++, simultaneously computes N body motions using a 4th order Hermite integrator, simulates climates using a 1D latitudinal energy balance model, and evolves the orbital spin of bodies using the equations of Laskar (1986a,b).

[ascl:1408.019] O2scl: Object-oriented scientific computing library

O2scl is an object-oriented library for scientific computing in C++ useful for solving, minimizing, differentiating, integrating, interpolating, optimizing, approximating, analyzing, fitting, and more. Many classes operate on generic function and vector types; it includes classes based on GSL and CERNLIB. O2scl also contains code for computing the basic thermodynamic integrals for fermions and bosons, for generating almost all of the most common equations of state of nuclear and neutron star matter, and for solving the TOV equations. O2scl can be used on Linux, Mac and Windows (Cygwin) platforms and has extensive documentation.

[ascl:1712.006] Nyx: Adaptive mesh, massively-parallel, cosmological simulation code

The Nyx code solves equations of compressible hydrodynamics on an adaptive grid hierarchy coupled with an N-body treatment of dark matter. The gas dynamics in Nyx use a finite volume methodology on an adaptive set of 3-D Eulerian grids; dark matter is represented as discrete particles moving under the influence of gravity. Particles are evolved via a particle-mesh method, using a Cloud-in-Cell deposition/interpolation scheme. Both baryonic and dark matter contribute to the gravitational field. In addition, Nyx includes physics for accurately modeling the intergalactic medium; in the optically thin limit and assuming ionization equilibrium, the code calculates heating and cooling processes of the primordial-composition gas in an ionizing ultraviolet background radiation field.

[ascl:1610.015] NuPyCEE: NuGrid Python Chemical Evolution Environment

The NuGrid Python Chemical Evolution Environment (NuPyCEE) simulates the chemical enrichment and stellar feedback of stellar populations. It contains three modules. The Stellar Yields for Galactic Modeling Applications module (SYGMA) models the enrichment and feedback of simple stellar populations which can be included in hydrodynamic simulations and semi-analytic models of galaxies. It is the basic building block of the One-zone Model for the Evolution of GAlaxies (OMEGA, ascl:1806.018) module which models the chemical evolution of galaxies such as the Milky Way and its dwarf satellites. The STELLAB (STELLar ABundances) module provides a library of observed stellar abundances useful for comparing predictions of SYGMA and OMEGA.

[ascl:1408.013] NumCosmo: Numerical Cosmology

NumCosmo is a free software C library whose main purposes are to test cosmological models using observational data and to provide a set of tools to perform cosmological calculations. The software implements three different probes: cosmic microwave background (CMB), supernovae type Ia (SNeIa) and large scale structure (LSS) information, such as baryonic acoustic oscillations (BAO) and galaxy cluster abundance. The code supports a joint analysis of these data and the parameter space can include cosmological and phenomenological parameters. The NumCosmo matter power spectrum and CMB codes were written independently of other implementations such as CMBFAST (ascl:9909.004), CAMB (ascl:1102.026), etc.

The library structure simplifies the inclusion of non-standard cosmological models. Besides the functions related to cosmological quantities, this library also implements mathematical and statistical tools. The former were developed to enable the inclusion of other probes and/or theoretical models and to optimize the codes. The statistical framework comprises algorithms which define likelihood functions, minimization, Monte Carlo, Fisher Matrix and profile likelihood methods.

[ascl:1601.014] Nulike: Neutrino telescope likelihood tools

Nulike is software for including full event-level information in likelihood calculations for neutrino telescope searches for dark matter annihilation. It includes both angular and spectral information about neutrino events as well as their total number, and can be used for single models without reference to the rest of a parameter space.

[ascl:1602.008] NuCraft: Oscillation probabilities for atmospheric neutrinos calculator

NuCraft calculates oscillation probabilities for atmospheric neutrinos, taking into account matter effects and the Earth's atmosphere, and supports an arbitrary number of sterile neutrino flavors with easily configurable continuous Earth models. Modeling the Earth continuously (rather than with the often-used approximation of four constant-density layers) and accounting for the smearing of baseline lengths due to the variable neutrino production heights in Earth's atmosphere each lead to deviations of 10% or more for conventional neutrinos between 1 and 10 GeV.
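
For orientation only, here is the two-flavor vacuum limit that such calculations reduce to when matter effects are switched off; NuCraft's point is precisely the corrections beyond this, and the parameter values below are illustrative, not NuCraft defaults.

    import numpy as np

    def p_osc_vacuum(e_gev, l_km, sin2_2theta=0.95, dm2_ev2=2.5e-3):
        """Two-flavor vacuum oscillation probability P(nu_a -> nu_b).

        The factor 1.267 converts dm2 [eV^2] * L [km] / E [GeV] into radians.
        """
        return sin2_2theta * np.sin(1.267 * dm2_ev2 * l_km / e_gev) ** 2

    # Upward-going atmospheric neutrinos crossing the Earth's diameter:
    print(p_osc_vacuum(np.array([1.0, 5.0, 10.0]), l_km=12742.0))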

[ascl:1609.009] NSCool: Neutron star cooling code

NSCool is a 1D (i.e., spherically symmetric) neutron star cooling code written in Fortran 77. The package also contains a series of EOSs (equations of state) to build stars, a series of pre-built stars, and a TOV (Tolman-Oppenheimer-Volkoff) integrator to build stars from an EOS. It can also handle “strange stars” that have a huge density discontinuity between the quark matter and the covering thin baryonic crust. NSCool solves the heat transport and energy balance equations in full general relativity, resulting in a time sequence of temperature profiles (and, in particular, a Teff-age curve). Several heating processes are included, and more can easily be incorporated. In particular it can evolve a star undergoing accretion with the resulting deep crustal heating, under a steady or time-variable accretion rate. NSCool is robust, very fast, and highly modular, making it easy to add new subroutines for new processes.
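
The TOV step can be sketched in a few lines. The toy below integrates the TOV structure equations for a simple polytrope in geometrized units (G = c = 1); it is not NSCool's Fortran integrator or its tabulated EOSs, just the same equations on an illustrative EOS.

    import numpy as np
    from scipy.integrate import solve_ivp

    K, GAMMA = 100.0, 2.0                      # toy polytropic EOS, G = c = 1

    def eps_of_p(p):
        """Toy energy density for P = K * eps**GAMMA (internal energy ignored)."""
        return (max(p, 0.0) / K) ** (1.0 / GAMMA)

    def tov_rhs(r, y):
        p, m = y
        eps = eps_of_p(p)
        dpdr = -(eps + p) * (m + 4.0 * np.pi * r**3 * p) / (r * (r - 2.0 * m))
        dmdr = 4.0 * np.pi * r**2 * eps
        return [dpdr, dmdr]

    def surface(r, y):                         # stop where pressure vanishes
        return y[0] - 1.0e-12
    surface.terminal = True

    sol = solve_ivp(tov_rhs, [1.0e-6, 100.0], [1.0e-3, 0.0],
                    events=surface, rtol=1e-8, atol=1e-14)
    print("R =", sol.t[-1], " M =", sol.y[1, -1], "(geometrized units)")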

[ascl:1807.025] NRPy+: Code generator for Numerical Relativity

NRPy+ (Python-based Code generation for Numerical Relativity and Beyond) generates highly-optimized C code from complex tensorial expressions input in Einstein-like notation. NRPy+ uses SymPy as its computer algebra system backend. It is part of the NRPy+/SENR numerical relativity code package for solving Einstein's equations of general relativity to model compact objects at about 1/100 the cost in memory of more traditional, AMR-based numerical relativity codes, thus allowing desktop computers to be used for gravitational wave astrophysics.

[ascl:1804.015] NR-code: Nonlinear reconstruction code

NR-code applies nonlinear reconstruction to the dark matter density field in redshift space and solves for the nonlinear mapping from the initial Lagrangian positions to the final redshift space positions; this reverses the large-scale bulk flows and improves the precision measurement of the baryon acoustic oscillations (BAO) scale.

[ascl:1705.014] NPTFit: Non-Poissonian Template Fitting

NPTFit is a specialized Python/Cython package that implements Non-Poissonian Template Fitting (NPTF), originally developed for characterizing populations of unresolved point sources. It offers fast evaluation of likelihoods for NPTF analyses and has an easy-to-use interface for performing non-Poissonian (as well as standard Poissonian) template fits using MultiNest (ascl:1109.006) or other inference tools. It allows inclusion of an arbitrary number of point source templates, with an arbitrary number of degrees of freedom in the modeled flux distribution, and has modules for analyzing and plotting the results of an NPTF.

[ascl:1202.003] NOVAS: Naval Observatory Vector Astrometry Software

NOVAS is an integrated package of subroutines and functions for computing various commonly needed quantities in positional astronomy. The package can provide, in one or two subroutine or function calls, the instantaneous coordinates of any star or planet in a variety of coordinate systems. At a lower level, NOVAS also supplies astrometric utility transformations, such as those for precession, nutation, aberration, parallax, and the gravitational deflection of light. The computations are accurate to better than one milliarcsecond. The NOVAS package is an easy-to-use facility that can be incorporated into data reduction programs, telescope control systems, and simulations. The U.S. parts of The Astronomical Almanac are prepared using NOVAS. Three editions of NOVAS are available: Fortran, C, and Python.

[ascl:1011.016] Non-LTE Models and Theoretical Spectra of Accretion Disks in Active Galactic Nuclei. III. Integrated Spectra for Hydrogen-Helium Disks

We have constructed a grid of non-LTE disk models for a wide range of black hole mass and mass accretion rate, for several values of viscosity parameter alpha, and for two extreme values of the black hole spin: the maximum-rotation Kerr black hole, and the Schwarzschild (non-rotating) black hole. Our procedure calculates self-consistently the vertical structure of all disk annuli together with the radiation field, without any approximations imposed on the optical thickness of the disk, and without any ad hoc approximations to the behavior of the radiation intensity. The total spectrum of a disk is computed by summing the spectra of the individual annuli, taking into account the general relativistic transfer function. The grid covers nine values of the black hole mass between M = 1/8 and 32 billion solar masses with a two-fold increase of mass for each subsequent value; and eleven values of the mass accretion rate, each a power of 2 times 1 solar mass/year. The highest value of the accretion rate corresponds to 0.3 Eddington. We show the vertical structure of individual annuli within the set of accretion disk models, along with their local emergent flux, and discuss the internal physical self-consistency of the models. We then present the full disk-integrated spectra, and discuss a number of observationally interesting properties of the models, such as optical/ultraviolet colors, the behavior of the hydrogen Lyman limit region, polarization, and number of ionizing photons. Our calculations are far from definitive in terms of the input physics, but generally we find that our models exhibit rather red optical/UV colors. Flux discontinuities in the region of the hydrogen Lyman limit are only present in cool, low luminosity models, while hotter models exhibit blueshifted changes in spectral slope.

[ascl:1305.013] Non-Gaussian Realisations

Non-Gaussian Realisations provides code based on a spectral distortion/quantile transformation that generates a realization of a field on a cubic grid that has a specified probability distribution function and a specified power spectrum.
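
A sketch of the two ingredients follows: a Gaussian random field with a chosen power spectrum, then a rank-order (quantile) transform onto a target PDF. The actual code additionally distorts the input spectrum so that the specified power spectrum survives the transformation; the lognormal target below is an arbitrary example.

    import numpy as np
    from scipy import stats

    def gaussian_field(n, power_index=-2.0, seed=0):
        """Gaussian random field on an n^3 grid with P(k) ~ k**power_index."""
        rng = np.random.default_rng(seed)
        k = np.fft.fftfreq(n)
        kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
        kk = np.sqrt(kx**2 + ky**2 + kz**2)
        kk[0, 0, 0] = 1.0                      # avoid dividing by zero at k = 0
        amp = kk ** (power_index / 2.0)
        amp[0, 0, 0] = 0.0                     # remove the mean mode
        noise = rng.standard_normal((n, n, n))
        return np.fft.ifftn(np.fft.fftn(noise) * amp).real

    def quantile_transform(field, target=stats.lognorm(s=0.5)):
        """Rank-order map of the field onto a target one-point PDF."""
        ranks = stats.rankdata(field.ravel()) / (field.size + 1.0)
        return target.ppf(ranks).reshape(field.shape)

    realization = quantile_transform(gaussian_field(32))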

[ascl:1711.024] NOD3: Single dish reduction software

NOD3 processes and analyzes maps from single-dish observations affected by scanning effects from clouds, receiver instabilities, or radio-frequency interference. Its “basket-weaving” tool combines orthogonally scanned maps into a final map that is almost free of scanning effects. A restoration tool for dual-beam observations reduces the noise by a factor of about two compared to the NOD2 version. Combining single-dish with interferometer data in the map plane ensures the full recovery of the total flux density.

[ascl:1101.006] NIRVANA: A Numerical Tool for Astrophysical Gas Dynamics

The NIRVANA code is capable of simulating multi-scale self-gravitational magnetohydrodynamics problems in three space dimensions employing the technique of adaptive mesh refinement. The building blocks of NIRVANA are (i) a fully conservative, divergence-free Godunov-type central scheme for the solution of the equations of magnetohydrodynamics; (ii) a block-structured mesh refinement algorithm which automatically adds and removes elementary grid blocks whenever necessary to achieve adequate resolution; and (iii) an adaptive mesh Poisson solver based on multigrid philosophy which incorporates the so-called elliptic matching condition to keep the gradient of the gravitational potential continuous at fine/coarse mesh interfaces.

[ascl:1501.002] NIGO: Numerical Integrator of Galactic Orbits

NIGO (Numerical Integrator of Galactic Orbits) predicts the orbital evolution of test particles moving within a fully-analytical gravitational potential generated by a multi-component galaxy. The code can simulate the orbits of stars in elliptical and disc galaxies, including non-axisymmetric components represented by a spiral pattern and/or rotating bar(s).

[ascl:1106.016] Nightfall: Animated Views of Eclipsing Binary Stars

Nightfall is an astronomy application for fun, education, and science. It can produce animated views of eclipsing binary stars, calculate synthetic lightcurves and radial velocity curves, and eventually determine the best-fit model for a given set of observational data of an eclipsing binary star system.

Nightfall comes with a user guide and a set of observational data for several eclipsing binary star systems.

[ascl:1903.008] NIFTy5: Numerical Information Field Theory v5

NIFTy (Numerical Information Field Theory) facilitates the construction of Bayesian field reconstruction algorithms for fields defined over multidimensional domains. A NIFTy algorithm can be developed for 1D field inference and then be used in 2D or 3D, on the sphere, or on product spaces thereof. NIFTy5 is a complete redesign of the previous framework (ascl:1302.013), and requires only the specification of a probabilistic generative model for all involved fields and the data in order to recover the former from the latter. This is achieved via Metric Gaussian Variational Inference, which also provides posterior samples for all unknown quantities jointly.

[ascl:1302.013] NIFTY: A versatile Python library for signal inference

NIFTY (Numerical Information Field TheorY) is a versatile library that enables the development of signal inference algorithms operating regardless of the underlying spatial grid and its resolution. Its object-oriented framework is written in Python, although it accesses libraries written in Cython, C++, and C for efficiency. NIFTY offers a toolkit that abstracts discretized representations of continuous spaces, fields in these spaces, and operators acting on fields into classes. Thereby, the correct normalization of operations on fields is taken care of automatically. This allows for an abstract formulation and programming of inference algorithms, including those derived within information field theory. Thus, NIFTY permits rapid prototyping of algorithms in 1D and the subsequent application of the developed code in higher-dimensional settings of real-world problems. NIFTY operates on point sets, n-dimensional regular grids, spherical spaces, their harmonic counterparts, and product spaces constructed as combinations of those.

[ascl:1508.002] NICOLE: NLTE Stokes Synthesis/Inversion Code

NICOLE, written in Fortran 90, seeks the model atmosphere that provides the best fit to the Stokes profiles (in a least-squares sense) of an arbitrary number of simultaneously-observed spectral lines from solar/stellar atmospheres. The inversion core used for the development of NICOLE is the LORIEN engine (the Lovely Reusable Inversion ENgine), which combines the SVD technique with the Levenberg-Marquardt minimization method to solve the inverse problem.

[ascl:1608.016] NICIL: Non-Ideal magnetohydrodynamics Coefficients and Ionisation Library

NICIL (Non-Ideal magnetohydrodynamics Coefficients and Ionisation Library) calculates the ionization values and the coefficients of the non-ideal magnetohydrodynamics terms of Ohmic resistivity, the Hall effect, and ambipolar diffusion. Written as a standalone Fortran90 module that can be implemented in existing codes, NICIL is fully parameterizable, allowing the user to choose which processes to include and decide the values of the free parameters. The module includes both cosmic ray and thermal ionization; the former includes two ion species and three species of dust grains (positively charged, negatively charged and neutral), and the latter includes five elements which can be doubly ionized.

[ascl:1508.008] NGMIX: Gaussian mixture models for 2D images

NGMIX implements Gaussian mixture models for 2D images. Both the PSF profile and the galaxy are modeled using mixtures of Gaussians. Convolutions are thus performed analytically, resulting in fast model generation as compared to methods that perform the convolution in Fourier space. For the galaxy model, NGMIX supports exponential disks and de Vaucouleurs and Sérsic profiles; these are implemented approximately as a sum of Gaussians using the fits from Hogg & Lang (2013). Additionally, any number of Gaussians can be fit, either completely free or constrained to be cocentric and co-elliptical.
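
The identity that makes this fast is that the convolution of two Gaussians is a Gaussian with summed means and covariances, so mixtures convolve component-by-component. A sketch of that identity (not ngmix's API):

    import numpy as np

    def convolve_mixtures(galaxy, psf):
        """Analytic convolution of two 2D Gaussian mixtures.

        Each mixture is a list of (weight, mean(2,), cov(2,2)); convolving
        Gaussians sums means and covariances, so the result is the pairwise
        combination of components with multiplied weights.
        """
        return [(wg * wp, mg + mp, cg + cp)
                for wg, mg, cg in galaxy
                for wp, mp, cp in psf]

    galaxy = [(1.0, np.zeros(2), np.diag([4.0, 2.0]))]       # 1-component toy galaxy
    psf = [(0.8, np.zeros(2), np.eye(2)),                    # 2-component toy PSF
           (0.2, np.zeros(2), 4.0 * np.eye(2))]
    print(len(convolve_mixtures(galaxy, psf)), "convolved components")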

[ascl:1903.013] NFWdist: Density, distribution function, quantile function and random generation for the 3D NFW profile

Available in R and Python, the simple analytic scheme NFWdist performs highly efficient and exact sampling of the Navarro, Frenk & White (NFW) profile as a true probability distribution function, with the only variable being the concentration.
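
The idea in sketch form: the normalized NFW mass profile is an analytic CDF in x = r/R_vir, so radii can be drawn by inverting it. NFWdist inverts the CDF analytically; the root find below is the slower numerical equivalent, shown only to make the method concrete.

    import numpy as np
    from scipy.optimize import brentq

    def mu(y):
        """NFW mass integral: mu(y) = ln(1 + y) - y / (1 + y)."""
        return np.log1p(y) - y / (1.0 + y)

    def sample_nfw_radii(n, c=5.0, seed=0):
        """Draw x = r/R_vir from CDF(x) = mu(c x)/mu(c) by numerical inversion."""
        rng = np.random.default_rng(seed)
        norm = mu(c)
        return np.array([brentq(lambda x: mu(c * x) / norm - u, 0.0, 1.0)
                         for u in rng.uniform(size=n)])

    print(sample_nfw_radii(10000, c=5.0).mean())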

[ascl:1807.011] nfield: Stochastic tool for QFT on inflationary backgrounds

nfield uses a stochastic formalism to compute the IR correlation functions of quantum fields during cosmic inflation in n-field dimensions. This is a necessary 1-loop resummation of the correlation functions to render them finite. The code supports an arbitrary number of coupled test fields (energetically sub-dominant) as well as non-test fields.

[ascl:1010.085] Network Tools for Astronomical Data Retrieval

The first step in a science project is the acquisition and understanding of the relevant data. The tools range from simple data transfer methods to more complex browser-emulating scripts. When integrated with a defined sample or catalog, these scripts provide seamless techniques to retrieve and store data of varying types. These tools can be used to leapfrog from website to website to acquire multi-wavelength datasets. This project demonstrates the capability to use multiple data websites, in conjunction, to perform the type of calculations once reserved for on-site datasets.

[ascl:1809.012] nestcheck: Nested sampling calculations analysis

Nestcheck analyzes nested sampling runs and estimates numerical uncertainties on calculations using them. The package can load results from a number of nested sampling software packages, including MultiNest (ascl:1109.006), PolyChord (ascl:1502.011), dynesty (ascl:1809.013) and perfectns (ascl:1809.005), and offers the flexibility to add input functions for other nested sampling software packages. Nestcheck utilities include error analysis, diagnostic tests, and plots for nested sampling calculations.

[ascl:1307.017] NEST: Noble Element Simulation Technique

NEST (Noble Element Simulation Technique) offers comprehensive, accurate, and precise simulation of the excitation, ionization, and corresponding scintillation and electroluminescence processes in liquid noble elements, useful for direct dark matter detectors, double beta decay searches, PET scans, and general radiation detection technology. Written in C++, NEST is an add-on module for the Geant4 simulation package that incorporates more detailed physics than is currently available into the simulation of scintillation. NEST is of particular use for low-energy nuclear recoils. All available liquid xenon data on nuclear recoils and electron recoils to date have been taken into consideration in arriving at the current models. NEST also handles the magnitude of the light and charge yields of nuclear recoils, including their electric field dependence, thereby shedding light on the possibility of detection or exclusion of a low-mass dark matter WIMP by liquid xenon detectors.

[ascl:1010.051] NEMO: A Stellar Dynamics Toolbox

NEMO is an extendible Stellar Dynamics Toolbox, following an Open-Source Software model. It has various programs to create, integrate, analyze and visualize N-body and SPH-like systems, following the pipe and filter architecture. In addition there are various tools to operate on images, tables and orbits, including FITS files to export/import to/from other astronomical data reduction packages. A large and growing fraction of NEMO has been contributed by a growing list of authors. The source code consists of a little over 4000 files and a little under 1,000,000 lines of code and documentation, mostly C, with some C++ and Fortran. NEMO development was started in 1986 in Princeton (USA) by Barnes, Hut and Teuben. See also ZENO (ascl:1102.027) for the version that Barnes maintains.

[ascl:1010.004] Needatool: A Needlet Analysis Tool for Cosmological Data Processing

NeedATool (Needlet Analysis Tool) performs data analysis based on needlets, a wavelet rendition that is powerful for the analysis of fields defined on a sphere. Needlets have been applied successfully to the treatment of astrophysical and cosmological observations, particularly to the analysis of cosmic microwave background (CMB) data. Wavelets have emerged as a useful tool for CMB data analysis, as they combine most of the advantages of both pixel space, where it is easier to deal with partial sky coverage and experimental noise, and the harmonic domain, in which beam treatment and comparison with theoretical predictions are more effective due in large part to their sharp localization.

[ascl:1608.019] NEBULAR: Spectrum synthesis for mixed hydrogen-helium gas in ionization equilibrium

NEBULAR synthesizes the spectrum of a mixed hydrogen-helium gas in collisional ionization equilibrium. It is not a spectral fitting code, but it can be used to resample a model spectrum onto the wavelength grid of a real observation. It supports a wide range of temperatures and densities. NEBULAR includes free-free, free-bound, two-photon and line emission from HI, HeI and HeII. The code will either return the composite model spectrum or, if desired, the unrescaled atomic emission coefficients. It is written in C++ and depends on the GNU Scientific Library (GSL).

[ascl:1809.009] NEBULA: Radiative transfer code of ionized nebulae at radio wavelengths

NEBULA performs the radiative transfer of the 3He+ hyperfine transition, radio recombination lines (RRLs), and free-free continuum emission through a model nebula. The model nebula is composed of only H and He within a three-dimensional Cartesian grid with arbitrary density, temperature, and ionization structure. The 3He+ line is assumed to be in local thermodynamic equilibrium (LTE), but non-LTE effects and pressure broadening from electron impacts can be included for the RRLs. All spectra are broadened by thermal and microturbulent motions.

[ascl:1411.013] NEAT: Nebular Empirical Analysis Tool

NEAT is a fully automated code which carries out a complete analysis of lists of emission lines to estimate the amount of interstellar extinction, calculate representative temperatures and densities, compute ionic abundances from both collisionally excited lines and recombination lines, and finally to estimate total elemental abundances using an ionization correction scheme. NEAT uses a Monte Carlo technique to robustly propagate uncertainties from line flux measurements through to the derived abundances.

[ascl:1101.002] NDSPMHD Smoothed Particle Magnetohydrodynamics Code

This paper presents an overview and introduction to Smoothed Particle Hydrodynamics and Magnetohydrodynamics in theory and in practice. Firstly, we give a basic grounding in the fundamentals of SPH, showing how the equations of motion and energy can be self-consistently derived from the density estimate. We then show how to interpret these equations using the basic SPH interpolation formulae and highlight the subtle difference in approach between SPH and other particle methods. In doing so, we also critique several `urban myths' regarding SPH, in particular the idea that one can simply increase the `neighbour number' more slowly than the total number of particles in order to obtain convergence. We also discuss the origin of numerical instabilities such as the pairing and tensile instabilities. Finally, we give practical advice on how to resolve three of the main issues with SPMHD: removing the tensile instability, formulating dissipative terms for MHD shocks and enforcing the divergence constraint on the particles, and we give the current status of developments in this area. Accompanying the paper is the first public release of the NDSPMHD SPH code, a 1, 2 and 3 dimensional code designed as a testbed for SPH/SPMHD algorithms that can be used to test many of the ideas and used to run all of the numerical examples contained in the paper.
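
The density estimate the paper builds everything on can be shown in a 1D sketch (cubic-spline kernel, direct summation; illustrative only, not the NDSPMHD implementation):

    import numpy as np

    def w_cubic_1d(r, h):
        """M4 cubic-spline SPH kernel in 1D (normalization 2/(3h))."""
        q = np.abs(r) / h
        w = np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
            np.where(q < 2.0, 0.25 * (2.0 - q)**3, 0.0))
        return (2.0 / (3.0 * h)) * w

    def sph_density(x, m, h):
        """Summation density estimate rho_i = sum_j m_j W(x_i - x_j, h)."""
        dx = x[:, None] - x[None, :]
        return (m[None, :] * w_cubic_1d(dx, h)).sum(axis=1)

    # Evenly spaced particles on a unit-density line: estimate should be ~1
    x = np.linspace(0.0, 1.0, 101)
    m = np.full_like(x, 0.01)
    print(sph_density(x, m, h=0.012)[50])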

[ascl:1411.023] NDF: Extensible N-dimensional Data Format Library

The Extensible N-Dimensional Data Format (NDF) stores bulk data in the form of N-dimensional arrays of numbers. It is typically used for storing spectra, images and similar datasets with higher dimensionality. The NDF format is based on the Hierarchical Data System (HDS) and is extensible; not only does it provide a comprehensive set of standard ancillary items to describe the data, it can also be extended indefinitely to handle additional user-defined information of any type. The NDF library is used to read and write files in the NDF format. It is distributed with the Starlink software (ascl:1110.012).

[ascl:1010.019] NBSymple: A Double Parallel, Symplectic N-body Code Running on Graphic Processing Units

NBSymple is a numerical code which integrates the equations of motion of N 'particles' interacting via Newtonian gravitation and moving in an external smooth galactic field. The force evaluation on every particle is done by direct summation of the contributions of all the other particles in the system, avoiding truncation error. The time integration is done with second-order and sixth-order symplectic schemes. NBSymple has been parallelized twice: the all-pair force evaluation is made as fast as possible with the Compute Unified Device Architecture on high-performance NVIDIA TESLA C1060 Graphic Processing Units, while the O(N) computations are distributed over various CPUs by means of the OpenMP Application Program Interface. The code works in either single or double precision floating point arithmetic. The use of single precision exploits the GPU performance best but, of course, limits the precision of the simulation in some critical situations. We find a good compromise in using a software reconstruction of double precision for those variables that are most critical for the overall precision of the code.
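
A sketch of the second-order (kick-drift-kick) symplectic step with direct-summation forces, i.e., the structure described above minus the external galactic field, the sixth-order option, and the GPU/OpenMP parallelism:

    import numpy as np

    def accelerations(pos, mass, eps=1.0e-3):
        """Direct-summation softened Newtonian accelerations (G = 1)."""
        d = pos[None, :, :] - pos[:, None, :]            # d[i, j] = r_j - r_i
        r2 = (d**2).sum(-1) + eps**2
        np.fill_diagonal(r2, np.inf)                     # exclude self-force
        return (mass[None, :, None] * d / r2[..., None]**1.5).sum(axis=1)

    def leapfrog_step(pos, vel, mass, dt):
        """One kick-drift-kick step of the second-order symplectic scheme."""
        vel = vel + 0.5 * dt * accelerations(pos, mass)  # half kick
        pos = pos + dt * vel                             # drift
        vel = vel + 0.5 * dt * accelerations(pos, mass)  # half kick
        return pos, vel

    rng = np.random.default_rng(1)
    pos, vel = rng.standard_normal((2, 64, 3))
    mass = np.full(64, 1.0 / 64)
    for _ in range(100):
        pos, vel = leapfrog_step(pos, vel, mass, dt=1.0e-3)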

[ascl:1502.010] nbody6tt: Tidal tensors in N-body simulations

nbody6tt, based on Aarseth's nbody6 (ascl:1102.006) code, includes the treatment of complex galactic tides in a direct N-body simulation of a star cluster through the use of tidal tensors (tt) and offers two complementary methods. The first allows consideration of any kind of galaxy and orbit, thus offering versatility; this method cannot be used to study tidal debris, as it relies on the tidal approximation (linearization of the tidal force). The second method is not limited by this and does not require a galaxy simulation; the user defines a numerical function which takes position and time as arguments, and the galactic potential is returned. The space and time derivatives of the potential are used to (i) integrate the motion of the cluster on its orbit in the galaxy (starting from user-defined initial position and velocity vector), and (ii) compute the tidal acceleration on the stars.

[ascl:1102.006] NBODY Codes: Numerical Simulations of Many-body (N-body) Gravitational Interactions

I review the development of direct N-body codes at Cambridge over nearly 40 years, highlighting the main stepping stones. The first code (NBODY1) was based on the simple concepts of a force polynomial combined with individual time steps, where numerical problems due to close encounters were avoided by a softened potential. Fortuitously, the elegant Kustaanheimo-Stiefel two-body regularization soon permitted small star clusters to be studied (NBODY3). Subsequent extensions to unperturbed three-body and four-body regularization proved beneficial in dealing with multiple interactions. Investigations of larger systems became possible with the Ahmad-Cohen neighbor scheme which was used more than 20 years ago for expanding universe models of 4000 galaxies (NBODY2). Combining the neighbor scheme with the regularization procedures enabled more realistic star clusters to be considered (NBODY5). After a period of simulations with no apparent technical progress, chain regularization replaced the treatment of compact subsystems (NBODY3, NBODY5). More recently, the Hermite integration method provided a major advance and has been implemented on the special-purpose HARP computers (NBODY4) together with an alternative version for workstations and supercomputers (NBODY6). These codes also include a variety of algorithms for stellar evolution based on fast lookup functions. The treatment of primordial binaries contains efficient procedures for chaotic two-body motion as well as tidal circularization, and special attention is paid to hierarchical systems and their stability. This family of N-body codes constitutes a powerful tool for dynamical simulations which is freely available to the astronomical community, and the massive effort owes much to collaborators.

[ascl:1803.004] nanopipe: Calibration and data reduction pipeline for pulsar timing

nanopipe is a data reduction pipeline for calibration, RFI removal, and pulse time-of-arrival measurement from radio pulsar data. It was developed primarily for use by the NANOGrav project. nanopipe is written in Python, and depends on the PSRCHIVE (ascl:1105.014) library.

[ascl:1708.022] Naima: Derivation of non-thermal particle distributions through MCMC spectral fitting

Naima computes non-thermal radiation from relativistic particle populations. It includes tools to perform MCMC fitting of radiative models to X-ray, GeV, and TeV spectra using emcee (ascl:1303.002), an affine-invariant ensemble sampler for Markov Chain Monte Carlo. Naima is an Astropy (ascl:1304.002) affiliated package.

[ascl:1409.009] Nahoon: Time-dependent gas-phase chemical model

Nahoon is a gas-phase chemical model that computes the chemical evolution in a 1D temperature and density structure. It uses chemical networks downloaded from the KInetic Database for Astrochemistry (KIDA), but the model can be adapted to any network. The program is written in Fortran 90 and uses the DLSODES (double precision) solver from the ODEPACK package to solve the coupled stiff differential equations. The solver computes the chemical evolution of gas-phase species at a fixed temperature and density and can be used in one dimension (1D) if a grid of temperature, density, and visual extinction is provided. Grains, both neutral and negatively charged, and electrons are considered as chemical species and their concentrations are computed at the same time as those of the other species. Nahoon contains a test to check the temperature range of validity of the rate coefficients and avoid extrapolation outside this range. A test is also included to check for duplication of chemical reactions, defined over complementary ranges of temperature.
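
The solver pattern in miniature: a two-reaction toy network integrated with an implicit stiff method, standing in for DLSODES over a full KIDA network (rate values below are arbitrary):

    import numpy as np
    from scipy.integrate import solve_ivp

    K1, K2 = 1.0e-9, 1.0e-4        # toy rates: cm^3 s^-1 and s^-1

    def rhs(t, n):
        """Toy network  A + A -> B  (K1)  and  B -> A + A  (K2)."""
        a, b = n
        form = K1 * a * a
        dest = K2 * b
        return [-2.0 * form + 2.0 * dest, form - dest]

    sol = solve_ivp(rhs, (1.0, 1.0e13), [1.0e4, 0.0], method="BDF",
                    rtol=1e-8, atol=1e-20, t_eval=np.logspace(0, 13, 50))
    print(sol.y[:, -1])             # approaches the K1*a**2 = K2*b steady state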

[ascl:1411.014] NAFE: Noise Adaptive Fuzzy Equalization

NAFE (Noise Adaptive Fuzzy Equalization) is an image processing method allowing for visualization of fine structures in SDO AIA high dynamic range images. It produces artifact-free images and gives significantly better results than methods based on convolution or Fourier transform.

[ascl:1102.001] N-MODY: A Code for Collisionless N-body Simulations in Modified Newtonian Dynamics

N-MODY is a parallel particle-mesh code for collisionless N-body simulations in modified Newtonian dynamics (MOND). N-MODY is based on a numerical potential solver in spherical coordinates that solves the non-linear MOND field equation, and is ideally suited to simulate isolated stellar systems. N-MODY can be used also to compute the MOND potential of arbitrary static density distributions. A few applications of N-MODY indicate that some astrophysically relevant dynamical processes are profoundly different in MOND and in Newtonian gravity with dark matter.

[ascl:1502.003] N-GenIC: Cosmological structure initial conditions

N-GenIC is an initial conditions code for cosmological structure formation that can be used to set up random N-body realizations of Gaussian random fields with a prescribed power spectrum in a homogeneously sampled periodic box. The code creates cosmological initial conditions based on the Zeldovich approximation, in a format directly compatible with GADGET or AREPO.

[ascl:1203.009] MYRIAD: N-body code for simulations of star clusters

MYRIAD is a C++ code for collisional N-body simulations of star clusters. The code uses the Hermite fourth-order scheme with block time steps for advancing the particles in time, while the forces and neighboring particles are computed using the GRAPE-6 board. Special treatment is used for close encounters and for binary and multiple sub-systems that either form dynamically or exist in the initial configuration. The structure of the code is modular and allows the appropriate treatment of more physical phenomena, such as stellar and binary evolution, stellar collisions and evolution of close black-hole binaries. Moreover, it can easily be modified so that the part of the code that uses GRAPE-6 is replaced by another module that uses other accelerating hardware such as Graphics Processing Units (GPUs). Appropriate choice of the free parameters gives good accuracy and speed for simulations of star clusters up to and beyond core collapse. The code accuracy becomes comparable to, and even better than, the accuracy of existing codes when a number of close binary systems is dynamically created in a simulation; this is due to the high accuracy of the method used for close binary and multiple sub-systems. The code can be used for evolving star clusters containing equal-mass stars or star clusters with an initial mass function (IMF), containing an intermediate mass black hole (IMBH) at the center and/or a fraction of primordial binaries, which are systems of particular astrophysical interest.

[ascl:1311.011] MUSIC: MUlti-Scale Initial Conditions

MUSIC generates multi-scale initial conditions with multiple levels of refinements for cosmological ‘zoom-in’ simulations. The code uses an adaptive convolution of Gaussian white noise with a real-space transfer function kernel together with an adaptive multi-grid Poisson solver to generate displacements and velocities following first- (1LPT) or second-order Lagrangian perturbation theory (2LPT). MUSIC achieves rms relative errors of the order of 10⁻⁴ for displacements and velocities in the refinement region and thus improves in terms of errors by about two orders of magnitude over previous approaches. In addition, errors are localized at coarse-fine boundaries and do not suffer from Fourier space-induced interference ringing.

[ascl:1610.004] MUSE-DRP: MUSE Data Reduction Pipeline

The MUSE pipeline turns the complex raw data of the MUSE integral field spectrograph into a ready-to-use datacube for scientific analysis.

[ascl:1605.007] MUSCLE: MUltiscale Spherical-ColLapse Evolution

MUSCLE (MUltiscale Spherical ColLapse Evolution) produces low-redshift approximate N-body realizations accurate to few-Megaparsec scales. It applies a spherical-collapse prescription on multiple Gaussian-smoothed scales. It achieves higher accuracy than perturbative schemes (Zel'dovich and second-order Lagrangian perturbation theory - 2LPT), and by including the void-in-cloud process (voids in large-scale collapsing regions), solves problems with a single-scale spherical-collapse scheme.

[ascl:1402.006] Munipack: General astronomical image processing software

Munipack provides easy-to-use tools for astronomical astrometry and photometry, access to the Virtual Observatory, FITS file operations, and a simple user interface along with a powerful processing engine. Its many features include a FITS image viewer that allows for basic (astronomical) operations with frames, an advanced image processor supporting an infinite dynamic range and advanced color management, and astrometric calibration of images. The astrometry module uses robust statistical estimators and algorithms. The photometry module provides the classical method of star detection and implements aperture photometry, calibrated on the basis of photon statistics, and allows for the automatic detection and aperture photometry of stars; calibration on absolute fluxes is possible. The software also provides a standard way to correct for bias, dark and flat-field frames, and many other features.

[ascl:1704.014] Multipoles: Potential gain for binary lens estimation

Multipoles, written in Python, calculates the quadrupole and hexadecapole approximations of the finite-source magnification, both as functions of (Wk, rho, Gamma). The code is efficient and faster than previously available methods, and could be generalized for use on large portions of the light curves.

[ascl:1109.008] Multipole Vectors: Decomposing Functions on a Sphere

We propose a novel representation of cosmic microwave anisotropy maps, where each multipole order l is represented by l unit vectors pointing in directions on the sky and an overall magnitude. These "multipole vectors and scalars" transform as vectors under rotations. Like the usual spherical harmonics, multipole vectors form an irreducible representation of the proper rotation group SO(3). However, they are related to the familiar spherical harmonic coefficients, a_lm, in a nonlinear way, and are therefore sensitive to different aspects of the CMB anisotropy. Nevertheless, it is straightforward to determine the multipole vectors for a given CMB map and we present an algorithm to compute them. Using the WMAP full-sky maps, we perform several tests of the hypothesis that the CMB anisotropy is statistically isotropic and Gaussian random. We find that the result from comparing the oriented area of planes defined by these vectors between multipole pairs 2 ≤ l1 ≠ l2 ≤ 8 is inconsistent with the isotropic Gaussian hypothesis at the 99.4% level for the ILC map and at the 98.9% level for the cleaned map of Tegmark et al. A particular correlation is suggested between the l=3 and l=8 multipoles, as well as several other pairs. This effect is entirely different from the now familiar planarity and alignment of the quadrupole and octupole: while the aforementioned is fairly unlikely, the multipole vectors indicate correlations not expected in Gaussian random skies that make them unusually likely. The result persists after accounting for pixel noise and after assuming a residual 10% dust contamination in the cleaned WMAP map. While the definitive analysis of these results will require more work, we hope that multipole vectors will become a valuable tool for various cosmological tests, in particular those of cosmic isotropy.

[ascl:1109.006] MultiNest: Efficient and Robust Bayesian Inference

We present further development and the first public release of our multimodal nested sampling algorithm, called MultiNest. This Bayesian inference tool calculates the evidence, with an associated error estimate, and produces posterior samples from distributions that may contain multiple modes and pronounced (curving) degeneracies in high dimensions. The developments presented here lead to further substantial improvements in sampling efficiency and robustness, as compared to the original algorithm presented in Feroz & Hobson (2008), which itself significantly outperformed existing MCMC techniques in a wide range of astrophysical inference problems. The accuracy and economy of the MultiNest algorithm are demonstrated by application to two toy problems and to a cosmological inference problem focusing on the extension of the vanilla ΛCDM model to include spatial curvature and a varying equation of state for dark energy. The MultiNest software is fully parallelized using MPI and includes an interface to CosmoMC (ascl:1106.025). It will also be released as part of the SuperBayeS package (ascl:1109.007) for the analysis of supersymmetric theories of particle physics.
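
Stripped to its skeleton, and deliberately omitting everything that makes MultiNest efficient (the ellipsoidal decomposition that handles multimodal and degenerate posteriors), nested sampling accumulates the evidence like this:

    import numpy as np

    def nested_sampling(loglike, ndim, nlive=200, niter=1400, seed=0):
        """Textbook nested sampling on a unit-hypercube prior; returns ln(Z).

        The worst live point is replaced by a prior draw with higher
        likelihood (naive rejection here); the prior volume shrinks by
        ~exp(-1/nlive) per iteration.
        """
        rng = np.random.default_rng(seed)
        live = rng.uniform(size=(nlive, ndim))
        logl = np.array([loglike(p) for p in live])
        logz = -np.inf
        logw0 = np.log(1.0 - np.exp(-1.0 / nlive))       # first shell width
        for i in range(niter):
            worst = np.argmin(logl)
            logz = np.logaddexp(logz, logw0 - i / nlive + logl[worst])
            while True:                                  # draw above L_worst
                p = rng.uniform(size=ndim)
                if loglike(p) > logl[worst]:
                    break
            live[worst], logl[worst] = p, loglike(p)
        # remaining live points share the final prior volume
        return np.logaddexp(logz, -niter / nlive
                            + np.logaddexp.reduce(logl) - np.log(nlive))

    lnl = lambda p: -0.5 * ((p - 0.5) ** 2).sum() / 0.1 ** 2
    print(nested_sampling(lnl, ndim=2))                  # ~ln(2*pi*0.1**2)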

[ascl:1506.004] multiband_LS: Multiband Lomb-Scargle Periodograms

The multiband periodogram is a general extension of the well-known Lomb-Scargle approach for detecting periodic signals in time-domain data. In addition to advantages of the Lomb-Scargle method such as treatment of non-uniform sampling and heteroscedastic errors, the multiband periodogram significantly improves period finding for randomly sampled multiband light curves (e.g., Pan-STARRS, DES and LSST). The light curves in each band are modeled as arbitrary truncated Fourier series, with the period and phase shared across all bands.
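
A single-band sketch of the underlying machinery: a least-squares periodogram built from a truncated Fourier series at each trial period. The multiband method fits this model jointly across all bands with the period and phase shared; the version below shows only the single-band building block.

    import numpy as np

    def fourier_periodogram(t, y, dy, periods, nterms=2):
        """Fraction of weighted variance explained by a truncated Fourier
        series  y ~ a0 + sum_k [a_k cos(k w t) + b_k sin(k w t)]  per period."""
        ybar = np.average(y, weights=dy**-2)
        chi2_0 = (((y - ybar) / dy) ** 2).sum()
        power = np.empty(len(periods))
        for i, p in enumerate(periods):
            w = 2.0 * np.pi / p
            cols = [np.ones_like(t)]
            for k in range(1, nterms + 1):
                cols += [np.cos(k * w * t), np.sin(k * w * t)]
            x = np.array(cols).T / dy[:, None]           # weighted design matrix
            beta, *_ = np.linalg.lstsq(x, y / dy, rcond=None)
            power[i] = 1.0 - ((y / dy - x @ beta) ** 2).sum() / chi2_0
        return power

    rng = np.random.default_rng(0)
    t = np.sort(rng.uniform(0.0, 100.0, 200))
    dy = np.full(200, 0.1)
    y = np.sin(2 * np.pi * t / 2.5) + dy * rng.standard_normal(200)
    periods = np.linspace(2.0, 3.0, 2000)
    print(periods[np.argmax(fourier_periodogram(t, y, dy, periods))])  # ~2.5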

[ascl:1803.006] MulensModel: Microlensing light curves modeling

MulensModel calculates light curves of microlensing events. Both single and binary lens events are modeled and various higher-order effects can be included: extended source (with limb-darkening), annual microlensing parallax, and satellite microlensing parallax. The code is object-oriented and written in Python3, and requires AstroPy (ascl:1304.002).
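
The zeroth-order model underneath all of this is the Paczynski point-source point-lens light curve, which fits in a few lines; the higher-order effects listed above are where MulensModel earns its keep.

    import numpy as np

    def pspl_magnification(t, t0, u0, tE):
        """Paczynski point-source point-lens magnification.

        u is the lens-source separation in Einstein radii;
        A(u) = (u^2 + 2) / (u * sqrt(u^2 + 4)).
        """
        u = np.hypot(u0, (t - t0) / tE)
        return (u**2 + 2.0) / (u * np.sqrt(u**2 + 4.0))

    t = np.linspace(-50.0, 50.0, 1001)                            # days
    print(pspl_magnification(t, t0=0.0, u0=0.1, tE=20.0).max())   # ~10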

[ascl:1811.012] muLAn: gravitational MICROlensing Analysis Software

muLAn analyzes and fits light curves of gravitational microlensing events. The code includes all classical microlensing models (for example, single and binary microlenses, ground- and space-based parallax effects, orbital motion, finite-source effects, and limb-darkening); these can be combined into several time intervals of the analyzed light curve. Minimization methods include an Affine-Invariant Ensemble Sampler to generate a multivariate proposal function while running several Markov Chain Monte Carlo (MCMC) chains, for the set of parameters chosen to be fit; non-fitting parameters can be either kept fixed or set on a grid defined by the user. Furthermore, the software offers a model-free option to align all data sets together and allow inspection of the light curve before any modeling work. It also comes with many useful routines (export of publication-quality figures, data formatting and cleaning) and state-of-the-art statistical tools.

Modeling results can be interpreted using an interactive html page which contains all information about the light curve model, caustics, source trajectory, best-fit parameters and chi-square. Parameters uncertainties and statistical properties (such as multi-modal features of the posterior density) can be assessed from correlation plots. The code is modular, allowing the addition of other computation or minimization routines by directly adding their Python files without modifying the main code. The software has been designed to be easy to use even for the newcomer in microlensing, with external, synthetic and self-explanatory setup files containing all important commands and option settings. The user may choose to launch the code through command line instructions, or to import muLAn within another Python project like any standard Python package.

[ascl:1710.011] mTransport: Two-point-correlation function calculator

mTransport computes the 2-point-correlation function of the curvature and tensor perturbations in multifield models of inflation in the presence of a curved field space. It is a Mathematica implementation of the transport method which encompasses scenarios with violations of slow-roll conditions and turns of the trajectory in field space. It can be used for an arbitrary mass spectrum, including massive modes, particle production and models with quasi-single-field dynamics.

[ascl:1701.006] MSWAVEF: Momentum-Space Wavefunctions

MSWAVEF calculates hydrogenic and non-hydrogenic momentum-space electronic wavefunctions. Such wavefunctions are often required to calculate various collision processes, such as excitation and line broadening cross sections. The hydrogenic functions are calculated using the standard analytical expressions. The non-hydrogenic functions are calculated within quantum defect theory according to the method of Hoang Binh and van Regemorter (1997). Required Hankel transforms have been determined analytically for angular momentum quantum numbers ranging from zero to 13 using Mathematica. Calculations for higher angular momentum quantum numbers are possible, but slow, since they are performed numerically. The code is written in IDL.

[ascl:1709.007] MSSC: Multi-Source Self-Calibration

Multi-Source Self-Calibration (MSSC) provides direction-dependent calibration to standard phase referencing. The code combines multiple faint sources detected within the primary beam to derive phase corrections. Each source has its CLEAN model divided into the visibilities, which results in multiple point sources that are stacked in the uv plane to increase the S/N, thus permitting self-calibration. This process applies only to wide-field VLBI data sets that detect and image multiple sources within one epoch.
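
The divide-and-stack step described above can be sketched in a few lines of numpy; the array shapes and the weighting scheme are illustrative assumptions, not the released code's data model:

    import numpy as np

    def stack_sources(vis, models, weights=None):
        """Divide visibilities by each source's CLEAN-model visibilities
        and stack the results in the uv plane.

        vis    : observed complex visibilities, shape (n_vis,)
        models : per-source model visibilities, shape (n_src, n_vis)

        Each division turns one source into an effective point source at
        the phase center; averaging over sources boosts the S/N of the
        stacked "point source" that is then used for self-calibration.
        """
        if weights is None:
            weights = np.abs(models) ** 2   # downweight faint model points
        ratio = vis[None, :] / models       # shape (n_src, n_vis)
        return np.average(ratio, axis=0, weights=weights)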

[ascl:1112.010] MRS3D: 3D Spherical Wavelet Transform on the Sphere

Future cosmological surveys will provide 3D large scale structure maps with large sky coverage, for which a 3D Spherical Fourier-Bessel (SFB) analysis is natural. Wavelets are particularly well-suited to the analysis and denoising of cosmological data, but a spherical 3D isotropic wavelet transform does not currently exist to analyse spherical 3D data. We present a new fast Discrete Spherical Fourier-Bessel Transform (DSFBT) based on both a discrete Bessel Transform and the HEALPix angular pixelisation scheme. We tested the 3D wavelet transform and, as a toy application, applied a denoising algorithm in wavelet space to the Virgo large box cosmological simulations and found we can successfully remove noise without much loss to the large scale structure. The new spherical 3D isotropic wavelet transform, called MRS3D, is ideally suited to analysing and denoising future 3D spherical cosmological surveys. MRS3D is based on IDL and HEALPix and can be used only if both have been installed.
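
For orientation, one common convention for the continuous spherical Fourier-Bessel transform pair that the discrete transform approximates is (normalization conventions vary between authors):

    f_{\ell m}(k) = \sqrt{\tfrac{2}{\pi}} \int f(\mathbf{r})\, j_\ell(kr)\, Y^{*}_{\ell m}(\theta,\varphi)\, d^{3}\mathbf{r},
    \qquad
    f(\mathbf{r}) = \sqrt{\tfrac{2}{\pi}} \sum_{\ell,m} \int_{0}^{\infty} f_{\ell m}(k)\, j_\ell(kr)\, Y_{\ell m}(\theta,\varphi)\, k^{2}\, dk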

[ascl:1504.016] MRrelation: Posterior predictive mass distribution

MRrelation calculates the posterior predictive mass distribution for an individual planet. The probabilistic mass-radius relationship (M-R relation) is evaluated within a Bayesian framework, which quantifies both the intrinsic dispersion of the relation and the uncertainties on the M-R relation parameters.
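
As an illustration of the general idea (with generic symbols, not necessarily the exact parameterization adopted by the code), such a probabilistic M-R relation is typically a power law with intrinsic Gaussian scatter,

    M \mid R \;\sim\; \mathcal{N}\!\left( C\,(R/R_{0})^{\gamma},\; \sigma_{M}^{2} \right),

and the posterior predictive mass distribution for a planet with a measured radius follows by marginalizing this density over the posterior samples of (C, γ, σ_M).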

[ascl:1802.015] mrpy: Renormalized generalized gamma distribution for HMF and galaxy ensemble properties comparisons

mrpy calculates the MRP parameterization of the halo mass function. It calculates basic statistics of the truncated generalized gamma distribution (TGGD) with the TGGD class, including mean, mode, variance, skewness, pdf, and cdf. It generates MRP quantities with the MRP class, such as differential number counts and cumulative number counts, and offers various methods for generating normalizations. It can generate the MRP-based halo mass function as a function of physical parameters via the mrp_b13 function, and fit MRP parameters to data in the form of arbitrary curves and in the form of a sample of variates with the SimFit class. mrpy also calculates analytic Hessians and Jacobians at any point, and allows the user to switch between alternative parameterizations of the same form via the reparameterize module.
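
The MRP form itself is compact enough to sketch directly. The following assumes the generalized gamma shape dn/dm ∝ (m/H_s)^α exp[-(m/H_s)^β], with scale H_s, power-law slope α, and cutoff sharpness β; the function below is a hypothetical standalone illustration, not mrpy's API:

    import numpy as np

    def mrp_dndm(m, hs, alpha, beta, norm=1.0):
        """MRP halo mass function: a generalized gamma form with
        power-law slope alpha and exponential cutoff sharpness beta."""
        x = m / hs
        return norm * x ** alpha * np.exp(-(x ** beta))

    # Normalize numerically above a truncation mass to obtain a TGGD
    # probability density (illustrative parameter values).
    m = np.logspace(10.0, 16.0, 2048)          # truncation at m[0]
    dndm = mrp_dndm(m, hs=10 ** 14.5, alpha=-1.9, beta=0.75)
    areas = 0.5 * (dndm[1:] + dndm[:-1]) * np.diff(m)   # trapezoid rule
    pdf = dndm / np.sum(areas)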

[ascl:1809.015] MrMoose: Multi-Resolution Multi-Object/Origin Spectral Energy distribution fitting procedure

MrMoose (Multi-Resolution Multi-Object/Origin Spectral Energy distribution fitting) fits user-defined models onto a set of multi-wavelength data using a Bayesian framework. The code can handle blended sources, large variations in resolution, and even upper limits consistently. It also generates a series of outputs allowing for a quick interpretation of the results. The code uses emcee (ascl:1303.002), and saves the emcee sampler object, thus allowing users to transfer the output to a personal graphical interface.
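
Because the sampling is driven by emcee, the overall shape of such a fit is easy to illustrate. The sketch below samples a toy two-parameter model with emcee's EnsembleSampler; the data file, model, and prior bounds are placeholders, not MrMoose's configuration:

    import numpy as np
    import emcee

    def log_prob(theta, x, y, yerr):
        """Gaussian likelihood with flat prior bounds for a toy model."""
        slope, intercept = theta
        if not (-10 < slope < 10 and -10 < intercept < 10):
            return -np.inf                      # outside the flat prior
        model = slope * x + intercept
        return -0.5 * np.sum(((y - model) / yerr) ** 2)

    x, y, yerr = np.loadtxt("data.txt", unpack=True)   # placeholder input
    ndim, nwalkers = 2, 32
    p0 = np.random.randn(nwalkers, ndim) * 0.1         # initial walker ball
    sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob,
                                    args=(x, y, yerr))
    sampler.run_mcmc(p0, 2000, progress=True)
    samples = sampler.get_chain(discard=500, flat=True)  # burn-in removed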

[ascl:1102.005] MRLENS: Multi-Resolution methods for gravitational LENSing

The MRLENS package offers a new method for the reconstruction of weak lensing mass maps. It uses the multiscale entropy concept, which is based on wavelets, and the False Discovery Rate which allows us to derive robust detection levels in wavelet space. We show that this new restoration approach outperforms several standard techniques currently used for weak shear mass reconstruction. This method can also be used to separate E and B modes in the shear field, and thus test for the presence of residual systematic effects. We concentrate on large blind cosmic shear surveys, and illustrate our results using simulated shear maps derived from N-Body Lambda-CDM simulations with added noise corresponding to both ground-based and space-based observations.

[ascl:1212.003] MPWide: Light-weight communication library for distributed computing

MPWide is a light-weight communication library for distributed computing. It is specifically developed to allow message passing over long-distance networks using path-specific optimizations. An early version of MPWide was used in the Gravitational Billion Body Project to allow simulations across multiple supercomputers.

[ascl:1106.022] MPI-Defrost: Extension of Defrost to MPI-based Cluster Environment

MPI-Defrost extends Frolov’s Defrost to an MPI-based cluster environment. This version has been restricted to a single field. Restoring two-field support should be straightforward, but will require some code changes. Some output options may also not be fully supported under MPI.

This code was produced to support our own work, and has been made available for the benefit of anyone interested in either oscillon simulations or an MPI capable version of Defrost, and it is provided on an "as-is" basis. Andrei Frolov is the primary developer of Defrost and we thank him for placing his work under the GPL (GNU General Public License), and thus allowing us to distribute this modified version.

[ascl:1208.014] MPI-AMRVAC: MPI-Adaptive Mesh Refinement-Versatile Advection Code

MPI-AMRVAC is an MPI-parallelized Adaptive Mesh Refinement code, with some heritage (in the solver part) to the Versatile Advection Code or VAC, initiated by Gábor Tóth at the Astronomical Institute at Utrecht in November 1994, with help from Rony Keppens since 1996. Previous incarnations of the Adaptive Mesh Refinement version of VAC were of restricted use only, and have been used for basic research in AMR strategies, or for well-targeted applications. This MPI version uses a full octree block-based approach, and allows for general orthogonal coordinate systems. MPI-AMRVAC aims to advance any system of (primarily hyperbolic) partial differential equations by a number of different numerical schemes. The emphasis is on (near) conservation laws, with shock-dominated problems as a main research target. The actual equations are stored in separate modules, can be added if needed, and they can be selected by a simple configuration of the VACPP preprocessor. The dimensionality of the problem is also set through VACPP. The numerical schemes are able to handle discontinuities and smooth flows as well.

[ascl:1712.002] MPI_XSTAR: MPI-based parallelization of XSTAR program

MPI_XSTAR parallelizes execution of multiple XSTAR runs using Message Passing Interface (MPI). XSTAR (ascl:9910.008), part of the HEASARC's HEAsoft (ascl:1408.004) package, calculates the physical conditions and emission spectra of ionized gases. MPI_XSTAR invokes XSTINITABLE from HEASoft to generate a job list of XSTAR commands for given physical parameters. The job list is used to make directories in ascending order, where each individual XSTAR is spawned on each processor and outputs are saved. HEASoft's XSTAR2TABLE program is invoked upon the contents of each directory in order to produce table model FITS files for spectroscopy analysis tools.

[ascl:1304.014] MPgrafic: A parallel MPI version of Grafic-1

MPgrafic is a parallel MPI version of Grafic-1 which can produce large cosmological initial conditions on a cluster without requiring shared memory. The real Fourier transforms are carried out in place using FFTW, minimizing the amount of memory used (at the expense of performance) in the spirit of Grafic-1. The output file is also written in parallel. In addition to the technical parallelization, it provides three extensions over Grafic-1:

  • it can produce power spectra with baryon wiggles (D.J. Eisenstein & W. Hu, ApJ, 496);
  • it has the optional ability to load a lower resolution noise map corresponding to the low frequency component which will fix the larger scale modes of the simulation (extra flag 0/1 at the end of the input process) in the spirit of Grafic-2;
  • it can be used in conjunction with constrfield, which generates initial-condition phases from a list of local constraints on density, tidal field, density gradient, and velocity.

[ascl:1208.019] MPFIT: Robust non-linear least squares curve fitting

These IDL routines provide a robust and relatively fast way to perform least-squares curve and surface fitting. The algorithms are translated from MINPACK-1, which is a rugged minimization routine found on Netlib, and distributed with permission. This algorithm is more desirable than CURVEFIT because it is generally more stable and less likely to crash than the brute-force approach taken by CURVEFIT, which is based upon Numerical Recipes.
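
MPFIT itself is IDL, but the same Levenberg-Marquardt least-squares approach is available in Python through SciPy; the following is a rough functional analogue using scipy.optimize.least_squares (a swapped-in SciPy routine, not a port of MPFIT):

    import numpy as np
    from scipy.optimize import least_squares

    def residuals(p, x, y, yerr):
        """Weighted residuals of a Gaussian-plus-baseline model."""
        amp, center, sigma, base = p
        model = base + amp * np.exp(-0.5 * ((x - center) / sigma) ** 2)
        return (y - model) / yerr

    x, y, yerr = np.loadtxt("spectrum.txt", unpack=True)  # placeholder input
    p0 = [1.0, 0.0, 1.0, 0.0]                             # initial guess
    fit = least_squares(residuals, p0, args=(x, y, yerr), method="lm")
    # fit.x holds the best-fit parameters; fit.jac can be used to
    # estimate the parameter covariance matrix.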

[ascl:1611.003] MPDAF: MUSE Python Data Analysis Framework

MPDAF, the MUSE Python Data Analysis Framework, provides tools to work with MUSE-specific data (for example, raw data and pixel tables), and with more general data such as spectra, images, and data cubes. Originally written to work with MUSE data, it can also be used for other data, such as that from the Hubble Space Telescope. MPDAF also provides MUSELET, a SExtractor-based tool to detect emission lines in a data cube, and a format to gather all the information on a source in one FITS file. MPDAF was developed and is maintained by CRAL (Centre de Recherche Astrophysique de Lyon).

[ascl:1710.006] MOSFiT: Modular Open-Source Fitter for Transients

MOSFiT (Modular Open-Source Fitter for Transients) downloads transient datasets from open online catalogs (e.g., the Open Supernova Catalog), generates Monte Carlo ensembles of semi-analytical light curve fits to those datasets and their associated Bayesian parameter posteriors, and optionally delivers the fitting results back to those same catalogs to make them available to the rest of the community. MOSFiT helps bridge the gap between observations and theory in time-domain astronomy; in addition to making the application of existing models and creation of new models as simple as possible, MOSFiT yields statistically robust predictions for transient characteristics, with a standard output format that includes all the setup information necessary to reproduce a given result.

[ascl:1303.011] MOPSIC: Extended Version of MOPSI

MOPSIC was created to analyze bolometer data but can be used for many other tasks. It is an extension of MOPSI, which had been merged with the command interpreter of GILDAS. For data reduction, MOPSIC uses a special method to calculate the chopped signal; this gives much better results than the straight difference of the signals obtained at the two chopper positions. In addition, there are scripts to reduce pointings and skydips and to calculate the RCPs (Receiver Channel Parameters) from calibration maps. MOPSIC offers a much broader range of applications, including advanced planning functions for mapping and on-off observations, post-reduction data analysis and processing, and even reduction of non-bolometer data (optical, IR, spectroscopy).

[ascl:1111.006] MOPEX: MOsaicker and Point source EXtractor

MOPEX (MOsaicker and Point source EXtractor) is a package for reducing and analyzing imaging data, as well as MIPS SED data. MOPEX includes the point source extraction package, APEX.
MOPEX is designed to allow the user to:

  • perform sophisticated background matching of individual data frames
  • mosaic the individual frames downloaded from the Spitzer archive
  • perform both temporal and spatial outlier rejection during mosaicking
  • apply offline pointing refinement for MIPS data (refinement is already applied to IRAC data)
  • perform source detection on the mosaics using APEX
  • compute aperture photometry or PRF-fitting photometry for point sources
  • perform interpolation, coaddition, and spectrum extraction of MIPS SED images.
MOPEX comes in two different interfaces (GUI and command-line), both of which come packaged together. We recommend that all new users start with the GUI, which is more user-friendly than the command-line interface.

[ascl:1308.018] MoogStokes: Zeeman polarized radiative transfer

MOOGStokes is a version of the MOOG one-dimensional local thermodynamic equilibrium radiative transfer code that incorporates a Stokes vector treatment of polarized radiation through a magnetic medium. It consists of three complementary programs that together can synthesize the disk-averaged emergent spectrum of a star with a magnetic field. The MOOGStokes package synthesizes emergent spectra of stars with magnetic fields in a familiar computational framework and produces disk-averaged spectra for all Stokes vectors (I, Q, U, V), normalized by the continuum.

[ascl:1202.009] MOOG: LTE line analysis and spectrum synthesis

MOOG performs a variety of LTE line analysis and spectrum synthesis tasks. The typical use of MOOG is to assist in the determination of the chemical composition of a star. The basic equations of LTE stellar line analysis are followed. The coding is in various subroutines that are called from a few driver routines; these routines are written in standard FORTRAN. The standard MOOG version has been developed on Unix, Linux, and Macintosh computers.

One of the chief assets of MOOG is its ability to do on-line graphics. The plotting commands are given within the FORTRAN code. MOOG uses the graphics package SM, chosen for its ease of implementation in FORTRAN codes. Plotting calls are concentrated in just a few routines, and it should be possible for users of other graphics packages to substitute other appropriate FORTRAN commands.

[ascl:1805.027] MontePython 3: Parameter inference code for cosmology

MontePython 3 provides numerous ways to explore parameter space using Monte Carlo Markov Chain (MCMC) sampling, including Metropolis-Hastings, Nested Sampling, Cosmo Hammer, and a Fisher sampling method. This improved version of the Monte Python (ascl:1307.002) parameter inference code for cosmology offers new ingredients that improve the performance of Metropolis-Hastings sampling, speeding up convergence and offering significant time improvement in difficult runs. Additional likelihoods and plotting options are available, as are post-processing algorithms such as importance sampling and the addition of derived parameters.

[ascl:1307.002] Monte Python: Monte Carlo code for CLASS in Python

Monte Python is a parameter inference code which combines the flexibility of the Python language and the robustness of the cosmological code CLASS into a simple and easy to manipulate Monte Carlo Markov Chain code.

This version has been archived and replaced by MontePython 3 (ascl:1805.027).

[ascl:1502.006] Montblanc: GPU accelerated Radio Interferometer Measurement Equations in support of Bayesian Inference for Radio Observations

Montblanc, written in Python, is a GPU implementation of the Radio Interferometer Measurement Equation (RIME) in support of the Bayesian inference for radio observations (BIRO) technique. The parameter space that BIRO explores results in tens of thousands of computationally expensive RIME evaluations before reduction to a single χ² value. The RIME is calculated over four dimensions (time, baseline, channel and source) and the values in this 4D space can be independently calculated; therefore, the RIME is particularly amenable to a parallel implementation accelerated by Graphics Processing Units (GPUs). Montblanc is implemented for NVIDIA's CUDA architecture and outperforms MeqTrees (ascl:1209.010) and OSKAR.

[ascl:1010.036] Montage: An Astronomical Image Mosaicking Toolkit

Montage is an open source code toolkit for assembling Flexible Image Transport System (FITS) images into custom mosaics. It runs on all common Linux/Unix platforms, on desktops, clusters and computational grids, and supports all World Coordinate System (WCS) projections and common coordinate systems. Montage preserves spatial and calibration fidelity of input images, processes 40 million pixels in up to 32 minutes on 128 nodes on a Linux cluster, and provides independent engines for analyzing the geometry of images on the sky, re-projecting images, rectifying background emission to a common level, and co-adding images. It offers convenient tools for managing and manipulating large image files.

[ascl:1206.004] MOLSCAT: MOLecular SCATtering

MOLSCAT is a FORTRAN code for quantum mechanical (coupled channel) solution of the nonreactive molecular scattering problem and was developed to obtain collision rates for molecules in the interstellar gas which are needed to understand microwave and infrared astronomical observations. The code is implemented for various types of collision partners. In addition to the essentially exact close coupling method several approximate methods, including the Coupled States and Infinite Order Sudden approximations, are provided.

[ascl:1212.004] MOLIERE-5: Forward and inversion model for sub-mm wavelengths

MOLIERE-5 (Microwave Observation LIne Estimation and REtrieval) is a versatile forward and inversion model for the millimeter and submillimeter wavelength range. The forward model includes modules for the calculation of absorption coefficients, radiative transfer, and instrumental characteristics. The radiative transfer model is supplemented by a sensitivity module that estimates the contribution of each catalog line to the spectrum at its center frequency, enabling the model to filter effectively for small spectral lines. The instrument model consists of several independent modules, including the calculation of the convolution of spectra and weighting functions with the spectrometer response functions; it also provides several options for modeling frequency-switched observations. The inversion model performs linear Optimal Estimation, a least-squares retrieval method which uses statistical a priori knowledge of the retrieved parameters to regularize ill-posed inversion problems, and computes diagnostics such as the measurement and smoothing error covariance matrices along with contribution and averaging kernel functions.

[ascl:1501.013] Molecfit: Telluric absorption correction tool

Molecfit corrects astronomical observations for atmospheric absorption features based on fitting synthetic transmission spectra to the astronomical data, which saves a significant amount of valuable telescope time and increases the instrumental efficiency. Molecfit can also estimate molecular abundances, especially the water vapor content of the Earth’s atmosphere. The tool can be run from a command-line or more conveniently through a GUI.

[ascl:1109.023] MOKA: A New Tool for Strong Lensing Studies

MOKA simulates the gravitational lensing signal from cluster-sized haloes. This algorithm implements recent results from numerical simulations to create realistic lenses with properties independent of numerical resolution and can be used for studies of the strong lensing cross section in dependence of halo structure.

[ascl:1010.009] ModeCode: Bayesian Parameter Estimation for Inflation

ModeCode is a publicly available code that computes the primordial scalar and tensor power spectra for single field inflationary models. ModeCode solves the inflationary mode equations numerically, avoiding the slow roll approximation. It provides an efficient and robust numerical evaluation of the inflationary perturbation spectrum, and allows the free parameters in the inflationary potential to be estimated within an MCMC computation. ModeCode also allows the estimation of reheating uncertainties once a potential has been specified. It is interfaced with CAMB and CosmoMC to compute cosmic microwave background angular power spectra and perform likelihood analysis and parameter estimation. It can be run as a standalone code as well. Errors in the results from ModeCode contribute negligibly to the error budget for analyses of data from Planck or other next generation experiments.

[ascl:1110.010] MOCASSIN: MOnte CArlo SimulationS of Ionized Nebulae

MOCASSIN is a fully 3D or 2D photoionisation and dust radiative transfer code which employs a Monte Carlo approach to the transfer of radiation through media of arbitrary geometry and density distribution. Written in Fortran, it was originally developed for the modelling of photoionised regions like HII regions and planetary nebulae and has since been expanded and applied to a variety of astrophysical problems, including modelling clumpy dusty supernova envelopes, star forming galaxies, protoplanetary disks and inner-shell fluorescence emission in the photospheres of stars and disk atmospheres. The code can deal with arbitrary Cartesian grids of variable resolution, has successfully been used to model complex density fields from SPH calculations, and can deal with ionising radiation extending from the Lyman edge to the X-ray regime. The dust and gas microphysics is fully coupled both in the radiation transfer and in the thermal balance.

[ascl:1412.010] MMAS: Make Me A Star

Make Me A Star (MMAS) quickly generates stellar collision remnants and can be used in combination with realistic dynamical simulations of star clusters that include stellar collisions. The code approximates the merger process (including shock heating, hydrodynamic mixing, mass ejection, and angular momentum transfer) with simple algorithms based on conservation laws and a basic qualitative understanding of the hydrodynamics. These simple models agree very well with those from SPH (smoothed particle hydrodynamics) calculations of stellar collisions, and the subsequent stellar evolution of these models also matches closely that of the more accurate hydrodynamic models.

[ascl:1403.003] MLZ: Machine Learning for photo-Z

The parallel Python framework MLZ (Machine Learning and photo-Z) computes fast and robust photometric redshift PDFs using machine learning algorithms. It uses either a supervised technique with prediction trees and random forests (TPZ), applicable to regression or classification problems, or an unsupervised method with self-organizing maps and random atlases (SOMz). These machine learning implementations can be efficiently combined into a more powerful one, resulting in robust and accurate probability distributions for photometric redshifts.

[ascl:0104.001] MLAPM: Simulating Structure Formation from Collisionless Matter

MLAPM simulates structure formation from collisionless matter. The code, written in C, is purely grid-based and uses a recursively refined Cartesian grid to solve Poisson's equation for the potential, rather than obtaining the potential from a Green's function. Refinements can have arbitrary shapes and in practice closely follow the complex morphology of the density field that evolves. The timestep shortens by a factor two with each successive refinement. It is argued that an appropriate choice of softening length is of great importance and that the softening should be at all points an appropriate multiple of the local inter-particle separation. Unlike tree and P3M codes, multigrid codes automatically satisfy this requirement.

[ascl:1206.010] mkj_libs: Helper routines for plane-fitting & analysis tools

mkj_libs provides a set of helper routines (vector operations, astrometry, statistical analysis of spherical data) for the main plane-fitting and analysis tools.

[ascl:1409.001] mixT: single-temperature fit for a multi-component thermal plasma

mixT accurately predicts T derived from a single-temperature fit for a multi-component thermal plasma. It can be applied in the deprojection analysis of objects with temperature and metallicity gradients, for correction of the PSF effects, for consistent comparison of numerical simulations of galaxy clusters and groups with the X-ray observations, and for estimating how emission from undetected components can bias the global X-ray spectral analysis.

[ascl:1505.011] missForest: Nonparametric missing value imputation using random forest

missForest imputes missing values particularly in the case of mixed-type data. It uses a random forest trained on the observed values of a data matrix to predict the missing values. It can be used to impute continuous and/or categorical data including complex interactions and non-linear relations. It yields an out-of-bag (OOB) imputation error estimate without the need of a test set or elaborate cross-validation and can be run in parallel to save computation time. missForest has been used to, among other things, impute variable star colors in an All-Sky Automated Survey (ASAS) dataset of variable stars with no NOMAD match.
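
missForest is an R package, but its core idea, iteratively imputing each variable with a random forest trained on the observed values of the others, can be approximated for continuous data in Python with scikit-learn's IterativeImputer; a sketch under that substitution (not the R package's interface):

    import numpy as np
    from sklearn.experimental import enable_iterative_imputer  # noqa: F401
    from sklearn.impute import IterativeImputer
    from sklearn.ensemble import RandomForestRegressor

    # Toy matrix with missing entries (NaN), e.g. variable-star colors.
    X = np.array([[1.0, 2.0, np.nan],
                  [2.0, np.nan, 6.1],
                  [np.nan, 4.1, 8.9],
                  [4.0, 8.2, 12.0]])

    # Each column with NaNs is modeled as a function of the others,
    # cycling until the imputations stabilize or max_iter is reached.
    imputer = IterativeImputer(
        estimator=RandomForestRegressor(n_estimators=100, random_state=0),
        max_iter=10, random_state=0)
    X_filled = imputer.fit_transform(X)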

[ascl:1010.062] MissFITS: Basic Maintenance and Packaging Tasks on FITS Files

MissFITS is a program that performs basic maintenance and packaging tasks on FITS files using an optimized FITS library. MissFITS can:

  • add, edit, and remove FITS header keywords;
  • split and join Multi-Extension-FITS (MEF) files;
  • unpile and pile FITS data-cubes; and,
  • create, check, and update FITS checksums, using R. Seaman’s protocol.

[ascl:1110.025] MIS: A Miriad Interferometry Singledish Toolkit

MIS is a pipeline toolkit using the package MIRIAD to combine interferometric and single-dish data. This was prompted by our observations made with the Combined Array For Research in Millimeter-wave Astronomy (CARMA) interferometer of the star-forming region NGC 1333, a large survey highlighting the new 23-element and single-dish observing modes. The project consists of 20 CARMA datasets each containing interferometric as well as simultaneously obtained single dish data, for 3 molecular spectral lines and continuum, in 527 different pointings, covering an area of about 8 by 11 arcminutes. A small group of collaborators then shared this toolkit and their parameters via CVS, and scripts were developed to ensure uniform data reduction across the group. The pipeline was run end-to-end each night that new observations were obtained, producing maps that contained all the data to date. This approach could serve as a model for repeated calibration and mapping of large mixed-mode correlation datasets from ALMA.

[ascl:1106.007] MIRIAD: Multi-channel Image Reconstruction, Image Analysis, and Display

MIRIAD is a radio interferometry data-reduction package, designed for taking raw visibility data through calibration to the image analysis stage. It has been designed to handle any interferometric array, with working examples for BIMA, CARMA, SMA, WSRT, and ATCA. A separate version for ATCA is available, which differs in a few minor ways from the CARMA version.

[ascl:1302.006] Minerva: Cylindrical coordinate extension for Athena

Minerva is a cylindrical coordinate extension of the Athena astrophysical MHD code of Stone, Gardiner, Teuben, and Hawley. The extension follows the approach of Athena's original developers and has been designed to alter the existing Cartesian-coordinates code as minimally and transparently as possible. The numerical equations in cylindrical coordinates are formulated to maintain consistency with constrained transport (CT), a central feature of the Athena algorithm, while making use of previously implemented code modules such as the Riemann solvers. Angular momentum transport, which is critical in astrophysical disk systems dominated by rotation, is treated carefully.

[ascl:0101.001] MILLISEARCH: A Search for Millilensing in BATSE GRB Data

The millisearch.for code was used to generate a new search for the gravitational lens effects of a significant cosmological density of supermassive compact objects (SCOs) on gamma-ray bursts. No signal attributable to millilensing was found. We inspected the timing data of 774 BATSE-triggered GRBs for evidence of millilensing: repeated peaks similar in light-curve shape and spectra. Our null detection leads us to conclude that, in all candidate universes simulated, Ω_SCO < 0.1 is favored for 10^5 < M_SCO/M⊙ < 10^9, while in some universes and mass ranges the density limits are as much as 10 times lower. Therefore, a cosmologically significant population of SCOs near globular cluster mass neither came out of the primordial universe, nor condensed at recombination.

[submitted] millennium-tap-query: A Python Tool to Query the Millennium Simulation UWS/TAP client

millennium-tap-query is a simple wrapper for the Python package requests to deal with connections to the Millennium TAP Web Client. With this tool you can perform basic or advanced queries to the Millennium Simulation database and download the data products. millennium-tap-query is similar to the TAP query tool in the German Astrophysical Virtual Observatory (GAVO) VOtables package.
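
Under the hood this is the standard TAP protocol over HTTP; a minimal synchronous ADQL query with requests might look as follows (the endpoint URL, credentials, and table name are placeholders, and the real tool adds proper authentication and job handling):

    import requests

    TAP_SYNC = "https://example.org/millennium/tap/sync"  # placeholder endpoint

    params = {
        "REQUEST": "doQuery",
        "LANG": "ADQL",
        "FORMAT": "csv",
        "QUERY": "SELECT TOP 10 * FROM snapshots",  # illustrative ADQL
    }
    # The Millennium database requires registration; auth is a placeholder.
    response = requests.get(TAP_SYNC, params=params, auth=("user", "password"))
    response.raise_for_status()
    print(response.text)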

[ascl:1811.010] MillCgs: Searching for Compact Groups in the Millennium Simulation

MillCgs clusters galaxies from the semi-analytic models run on top of the Millennium Simulation to identify Compact Groups. MillCgs uses a machine learning clustering algorithm to find the groups and then runs analytics to filter out the groups that do not fit the user specified criteria. The package downloads the data, processes it, and then creates graphs of the data.

[ascl:1511.012] milkywayproject_triggering: Correlation functions for two catalog datasets

This triggering code calculates the correlation function between two astrophysical data catalogs using the Landy-Szalay approximator generalized for heterogeneous datasets (Landy & Szalay, 1993; Bradshaw et al., 2011) or the auto-correlation function of one dataset. It assumes that one catalog has positional information as well as an object size (effective radius), and the other only positional information.
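
For reference, the Landy-Szalay estimator in its auto-correlation form, together with its generalization to two heterogeneous catalogs, reads (D and R denote normalized data and random pair counts):

    \xi(r) = \frac{DD - 2\,DR + RR}{RR},
    \qquad
    \xi_{12}(r) = \frac{D_1 D_2 - D_1 R_2 - D_2 R_1 + R_1 R_2}{R_1 R_2}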

[ascl:1810.019] MIEX: Mie scattering code for large grains

Miex calculates Mie scattering coefficients and efficiency factors for broad grain size distributions and a very wide wavelength range (λ ≈ 10^-10 to 10^-2 m) of the interacting radiation and incorporates standard solutions of the scattering amplitude functions. The code handles arbitrary size parameters, and single scattering by particle ensembles is calculated by proper averaging of the respective parameters.

[ascl:1807.016] MIDLL: Markwardt IDL Library

The Markwardt IDL Library contains routines for curve fitting and function minimization, including MPFIT (ascl:1208.019), statistical tests, and non-linear optimization (TNMIN); graphics programs, including plotting three-dimensional data as a cube and fixed- or variable-width histograms; adaptive numerical integration (Quadpack), Chebyshev approximation and interpolation, and other mathematical tools; many ephemeris and timing routines; array and set operations, such as computing the fast product of a large array, efficiently inserting or deleting elements in an array, and performing set operations on numbers and strings; and many other useful and varied routines.

[ascl:1010.008] midIR_sensitivity: Mid-infrared astronomy with METIS

midIR_sensitivity is IDL code that calculates the sensitivity of a ground-based mid-infrared instrument for astronomy. The code was written for the Phase A study of the instrument METIS (http://www.strw.leidenuniv.nl/metis), the Mid-Infrared E-ELT Imager and Spectrograph, for the 42-m European Extremely Large Telescope. The model uses a detailed set of input parameters for site characteristics and atmospheric profiles, optical design, and thermal background. The code and all input parameters are highly tailored for the particular design parameters of the E-ELT and METIS; however, the program is structured in such a way that the parameters can easily be adjusted for a different system, or alternative input files used.

[ascl:1303.007] micrOMEGAs: Calculation of dark matter properties

micrOMEGAs calculates the properties of cold dark matter in a generic model of particle physics. First developed to compute the relic density of dark matter, the code also computes the rates for dark matter direct and indirect detection. The code provides the mass spectrum, cross-sections, relic density and exotic fluxes of gamma rays, positrons and antiprotons. The propagation of charged particles in the Galactic halo is handled with a module that allows the propagation parameters to be modified easily. The cross-sections for both spin-dependent and spin-independent interactions of WIMPs on protons are computed automatically, as are the rates for WIMP scattering on nuclei in a large detector and the annihilation cross-sections of the dark matter candidate at zero velocity, relevant for indirect detection of dark matter.

[ascl:1205.003] MIA+EWS: MIDI data reduction tool

MIA+EWS is a package of two data reduction tools for MIDI data which uses power-spectrum analysis or the information contained in the spectrally-dispersed fringe measurements to estimate the correlated flux and the visibility as a function of wavelength in the N-band. MIA, which stands for MIDI Interactive Analysis, uses a Fast Fourier Transformation to calculate the Fourier amplitudes of the fringe packets, from which the correlated flux and visibility are derived. EWS stands for Expert Work-Station, a collection of IDL tools that apply coherent visibility analysis to reduce MIDI data. The EWS package allows the user to control and examine almost every aspect of MIDI data and its reduction. The usual data products are the correlated fluxes, total fluxes and differential phase.

[ascl:1511.007] MHF: MLAPM Halo Finder

MHF is a Dark Matter halo finder that is based on the refinement grids of MLAPM. The grid structure of MLAPM adaptively refines around high-density regions with an automated refinement algorithm, thus naturally "surrounding" the Dark Matter halos, as they are simply manifestations of over-densities within (and exterior to) the underlying host halo. Using this grid structure, MHF restructures the hierarchy of nested isolated MLAPM grids into a "grid tree". The densest cell at the end of a tree branch marks the center of a prospective Dark Matter halo. All gravitationally bound particles about this center are collected to obtain the final halo catalog. MHF automatically finds halos within halos within halos.

[ascl:1402.035] MGHalofit: Modified Gravity extension of Halofit

MGHalofit is a modified gravity extension of the fitting formula for the matter power spectrum of HALOFIT and its improvement by Takahashi et al. MGHalofit is implemented in MGCAMB, which is based on CAMB. MGHalofit calculates the nonlinear matter power spectrum P(k) for the Hu-Sawicki model. A comparison of MGHalofit predictions at various redshifts (z <= 1) to f(R) simulations shows the accuracy on P(k) to be 6% at k < 1 h/Mpc and 12% at 1 < k < 10 h/Mpc.

[ascl:1010.081] MGGPOD: A Monte Carlo Suite for Gamma-Ray Astronomy

We have developed MGGPOD, a user-friendly suite of Monte Carlo codes built around the widely used GEANT (Version 3.21) package. The MGGPOD Monte Carlo suite and documentation are publicly available for download. MGGPOD is an ideal tool for supporting the various stages of gamma-ray astronomy missions, ranging from the design, development, and performance prediction through calibration and response generation to data reduction. In particular, MGGPOD is capable of simulating ab initio the physical processes relevant for the production of instrumental backgrounds. These include the build-up and delayed decay of radioactive isotopes as well as the prompt de-excitation of excited nuclei, both of which give rise to a plethora of instrumental gamma-ray background lines in addition to continuum backgrounds.

[ascl:1403.017] MGE_FIT_SECTORS: Multi-Gaussian Expansion fits to galaxy images

MGE_FIT_SECTORS performs Multi-Gaussian Expansion (MGE) fits to galaxy images. The MGE parameterizations are useful in the construction of realistic dynamical models of galaxies, PSF deconvolution of images, the correction and estimation of dust absorption effects, and galaxy photometry. The algorithm is well suited for use with multiple-resolution images (e.g. Hubble Space Telescope (HST) and ground-based images).

[ascl:1106.013] MGCAMB: Modification of Growth with CAMB

CAMB is a public Fortran 90 code written by Antony Lewis and Anthony Challinor for evaluating cosmological observables. MGCAMB is a modified version of CAMB in which the linearized Einstein equations of General Relativity (GR) are modified. MGCAMB can also be used in CosmoMC to fit different modified-gravity (MG) models to data.

[ascl:1205.010] Meudon PDR: Atomic & molecular structure of interstellar clouds

The Meudon PDR code computes the atomic and molecular structure of interstellar clouds. It can be used to study the physics and chemistry of diffuse clouds, photodissociation regions (PDRs), dark clouds, or circumstellar regions. The model computes the thermal balance of a stationary plane-parallel slab of gas and dust illuminated by a radiation field and takes into account heating processes such as the photoelectric effect on dust, chemistry, cosmic rays, etc. and cooling resulting from infrared and millimeter emission of the abundant species. Chemistry is solved for any number of species and reactions. Once abundances of atoms and molecules and level excitation of the most important species have been computed at each point, line intensities and column densities can be deduced.

[ascl:1111.009] MESS: Multi-purpose Exoplanet Simulation System

MESS is a Monte Carlo simulation IDL code which uses either the results of the statistical analysis of the properties of discovered planets, or the results of planet formation theories, to build synthetic planet populations fully described in terms of frequency, orbital elements and physical properties. These populations can then be used either to test the consistency of their properties with the observed population of planets given different detection techniques, or to actually predict the expected number of planets for future surveys. The code can be used to probe the physical and orbital properties of a putative companion within the circumstellar disk of a given star and to constrain the orbital distribution properties of a potential planet population around the members of the TW Hydrae association. Finally, in its predictive mode, MESS has been used to investigate the synergy of future space- and ground-based telescope instrumentation and to identify the mass-period parameter space that will be probed in future surveys for giant and rocky planets.

[ascl:1612.012] Meso-NH: Non-hydrostatic mesoscale atmospheric model

Meso-NH is the non-hydrostatic mesoscale atmospheric model of the French research community jointly developed by the Laboratoire d'Aérologie (UMR 5560 UPS/CNRS) and by CNRM (UMR 3589 CNRS/Météo-France). Meso-NH incorporates a non-hydrostatic system of equations for dealing with scales ranging from large (synoptic) to small (large eddy) scales while calculating budgets and has a complete set of physical parameterizations for the representation of clouds and precipitation. It is coupled to the surface model SURFEX for representation of surface atmosphere interactions by considering different surface types (vegetation, city, ocean, lake) and allows a multi-scale approach through a grid-nesting technique. Meso-NH is versatile, vectorized, parallelized, and operates in 1D, 2D or 3D; it is coupled with a chemistry module (including gas-phase, aerosol, and aqua-phase components) and a lightning module, and has observation operators that compare model output directly with satellite observations, radar, lidar and GPS.

[ascl:1709.003] MeshLab: 3D triangular meshes processing and editing

MeshLab processes and edits 3D triangular meshes. It includes tools for editing, cleaning, healing, inspecting, rendering, texturing and converting meshes, and offers features for processing raw data produced by 3D digitization tools and devices and for preparing models for 3D printing.

[ascl:1010.083] MESA: Modules for Experiments in Stellar Astrophysics

Stellar physics and evolution calculations enable a broad range of research in astrophysics. Modules for Experiments in Stellar Astrophysics (MESA) is a suite of open source libraries for a wide range of applications in computational stellar astrophysics. A newly designed 1-D stellar evolution module, MESA star, combines many of the numerical and physics modules for simulations of a wide range of stellar evolution scenarios ranging from very-low mass to massive stars, including advanced evolutionary phases. MESA star solves the fully coupled structure and composition equations simultaneously. It uses adaptive mesh refinement and sophisticated timestep controls, and supports shared memory parallelism based on OpenMP. Independently usable modules provide equation of state, opacity, nuclear reaction rates, and atmosphere boundary conditions. Each module is constructed as a separate Fortran 95 library with its own public interface. Examples include comparisons to other codes and show evolutionary tracks of very low mass stars, brown dwarfs, and gas giant planets; the complete evolution of a 1 Msun star from the pre-main sequence to a cooling white dwarf; the Solar sound speed profile; the evolution of intermediate mass stars through the thermal pulses on the He-shell burning AGB phase; the interior structure of slowly pulsating B Stars and Beta Cepheids; evolutionary tracks of massive stars from the pre-main sequence to the onset of core collapse; stars undergoing Roche lobe overflow; and accretion onto a neutron star.

[ascl:1305.015] Merger Trees: Formation history of dark matter haloes

Merger Trees uses a Monte Carlo algorithm to generate merger trees describing the formation history of dark matter haloes; the algorithm is implemented in Fortran. The algorithm is a modification of the algorithm of Cole et al. used in the GALFORM semi-analytic galaxy formation model (ascl:1510.005) based on the Extended Press–Schechter theory. It should be applicable to hierarchical models with a wide range of power spectra and cosmological models. It is tuned to be in accurate agreement with the conditional mass functions found in the analysis of merger trees extracted from the Λ cold dark matter Millennium N-body simulation. The code should be a useful tool for semi-analytic models of galaxy formation and for modelling hierarchical structure formation in general.

[ascl:1201.008] Mercury: A software package for orbital dynamics

Mercury is a new general-purpose software package for carrying out orbital integrations for problems in solar-system dynamics. Suitable applications include studying the long-term stability of the planetary system, investigating the orbital evolution of comets, asteroids or meteoroids, and simulating planetary accretion. Mercury is designed to be versatile and easy to use, accepting initial conditions in either Cartesian coordinates or Keplerian elements in "cometary" or "asteroidal" format, with different epochs of osculation for different objects. Output from an integration consists of osculating elements, written in a machine-independent compressed format, which allows the results of a calculation performed on one platform to be transferred (e.g. via FTP) and decoded on another.

During an integration, Mercury monitors and records details of close encounters, sungrazing events, ejections and collisions between objects. The effects of non-gravitational forces on comets can also be modeled. The package supports integrations using a mixed-variable symplectic routine, the Bulirsch-Stoer method, and a hybrid code for planetary accretion calculations.

[ascl:1511.020] Mercury-T: Tidally evolving multi-planet systems code

Mercury-T calculates the evolution of semi-major axis, eccentricity, inclination, rotation period and obliquity of the planets as well as the rotation period evolution of the host body; it is based on the N-body code Mercury (Chambers 1999, ascl:1201.008). It is flexible, allowing computation of the tidal evolution of systems orbiting any non-evolving object (if its mass, radius, dissipation factor and rotation period are known), but also evolving brown dwarfs (BDs) of mass between 0.01 and 0.08 M⊙, an evolving M-dwarf of 0.1 M⊙, an evolving Sun-like star, and an evolving Jupiter.

[ascl:1209.010] MeqTrees: Software package for implementing Measurement Equations

MeqTrees is a software package for implementing Measurement Equations. This makes it uniquely suited for simulation and calibration of radioastronomical data, especially that involving new radiotelescopes and observational regimes. MeqTrees is implemented as a Python-based front-end called the meqbrowser, and an efficient (C++-based) computational back-end called the meqserver. Numerical models are defined on the front-end via a Python-based Tree Definition Language (TDL), then rapidly executed on the back-end. The use of TDL facilitates an extremely short turn-around time for experimentation with new ideas. This is also helped by unprecedented visualization capabilities for all final and intermediate results. A flexible data model and a number of important optimizations in the back-end ensures that the numerical performance is comparable to that of hand-written code.

MeqTrees includes a highly capable FITS viewer and sky model manager called Tigger, which can also work as a standalone tool.

[ascl:1410.002] MEPSA: Multiple Excess Peak Search Algorithm

MEPSA (Multiple Excess Peak Search Algorithm) identifies peaks within a uniformly sampled time series affected by uncorrelated Gaussian noise. MEPSA scans the time series at different timescales by comparing a given peak candidate with a variable number of adjacent bins. While originally conceived for the analysis of gamma-ray burst (GRB) light curves, its usage can be readily extended to other astrophysical transient phenomena whose activity is recorded through different surveys. MEPSA's high flexibility permits the mask of excess patterns it uses to be tailored and optimized without modifying the code.
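
The multi-timescale scan can be illustrated schematically in numpy. The sketch below flags n-sigma excesses over a local background at several rebinning factors; it conveys the general idea but not MEPSA's actual optimized excess-pattern masks:

    import numpy as np

    def excess_peaks(counts, sigma, rebin_factors=(1, 2, 4, 8), nsig=5.0):
        """Flag bins exceeding the local background by nsig sigma at any
        of several rebinning timescales (schematic; edge bins skipped and
        the noise on the background estimate neglected for brevity)."""
        peaks = []
        for k in rebin_factors:
            n = len(counts) // k
            c = counts[: n * k].reshape(n, k).sum(axis=1)
            s = np.sqrt((sigma[: n * k] ** 2).reshape(n, k).sum(axis=1))
            bkg = 0.5 * (np.roll(c, 1) + np.roll(c, -1))  # adjacent bins
            snr = (c - bkg) / s
            for i in np.where(snr[1:-1] > nsig)[0] + 1:
                peaks.append((k, i, snr[i]))  # (rebin factor, bin, S/N)
        return peaks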

[ascl:1711.012] megaman: Manifold Learning for Millions of Points

megaman is a scalable manifold learning package implemented in Python. It has a front-end API designed to be familiar to scikit-learn users but harnesses the C++ Fast Library for Approximate Nearest Neighbors (FLANN) and the Locally Optimal Block Preconditioned Conjugate Gradient (LOBPCG) solver for sparse symmetric positive definite (SSPD) eigenproblems to scale manifold learning algorithms to large data sets. It is designed for researchers and as such caches intermediary steps and indices to allow for fast re-computation with new parameters.

[ascl:1203.008] MegaLUT: Correcting ellipticity measurements of galaxies

MegaLUT is a simple and fast method to correct ellipticity measurements of galaxies from the distortion by the instrumental and atmospheric point spread function (PSF), in view of weak lensing shear measurements. The method performs a classification of galaxies and associated PSFs according to measured shape parameters, and builds a lookup table of ellipticity corrections by supervised learning. This new method has been applied to the GREAT10 image analysis challenge, and demonstrates a refined solution that obtains the highly competitive quality factor of Q = 142, without any power spectrum denoising or training. Of particular interest is the efficiency of the method, with a processing time below 3 ms per galaxy on an ordinary CPU.

[ascl:1106.006] MECI: A Method for Eclipsing Component Identification

We describe an automated method for assigning the most probable physical parameters to the components of an eclipsing binary, using only its photometric light curve and combined colors. With traditional methods, one attempts to optimize a multi-parameter model over many iterations, so as to minimize the chi-squared value. We suggest an alternative method, where one selects pairs of coeval stars from a set of theoretical stellar models, and compares their simulated light curves and combined colors with the observations. This approach greatly reduces the parameter space over which one needs to search, and allows one to estimate the components' masses, radii and absolute magnitudes, without spectroscopic data. We have implemented this method in an automated program using published theoretical isochrones and limb-darkening coefficients. Since it is easy to automate, this method lends itself to systematic analyses of datasets consisting of photometric time series of large numbers of stars, such as those produced by OGLE, MACHO, TrES, HAT, and many other surveys.

[ascl:1205.001] Mechanic: Numerical MPI framework for dynamical astronomy

The Mechanic package is a numerical framework for dynamical astronomy, designed to help in massive numerical simulations by efficient task management and unified data storage. The code is built on top of the Message Passing Interface (MPI) and Hierarchical Data Format (HDF5) standards and uses the Task Farm approach to manage numerical tasks. It relies on the core-module approach. The numerical problem implemented in the user-supplied module is separated from the host code (core). The core is designed to handle basic setup, data storage and communication between nodes in a computing pool. It has been tested on large CPU-clusters, as well as desktop computers. The Mechanic may be used in computing dynamical maps, data optimization or numerical integration.

[ascl:1302.012] ME(SSY)**2: Monte Carlo Code for Star Cluster Simulations

ME(SSY)**2 stands for "Monte-carlo Experiments with Spherically SYmmetric Stellar SYstems." This code simulates the long-term evolution of spherical clusters of stars; it was devised specifically to treat dense galactic nuclei. It is based on the pioneering Monte Carlo scheme proposed by Hénon in the 1970s and includes all relevant physical ingredients (two-body relaxation, stellar mass spectrum, collisions, tidal disruption, etc.). It is basically a Monte Carlo resolution of the Fokker-Planck equation. It can cope with any stellar mass spectrum or velocity distribution. Being a particle-based method, it also allows one to take stellar collisions into account in a very realistic way. This unique code, featuring most important physical processes, allows million-particle simulations, spanning a Hubble time, in a few CPU days on standard personal computers and provides a wealth of data rivaled only by N-body simulations. The current version of the software requires the use of routines from the "Numerical Recipes in Fortran 77" (http://www.nrbook.com/a/bookfpdf.php).

[ascl:1504.008] MCSpearman: Monte Carlo error analyses of Spearman's rank test

Spearman’s rank correlation test is commonly used in astronomy to discern whether a set of two variables are correlated or not. Unlike most other quantities quoted in the astronomical literature, the Spearman’s rank correlation coefficient is generally quoted with no attempt to estimate the errors on its value. This code implements a number of Monte Carlo-based methods to estimate the uncertainty on the Spearman’s rank correlation coefficient.
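
The two Monte Carlo strategies in question, bootstrap resampling of the data pairs and perturbing each point within its measurement errors, are compact enough to sketch with scipy; the function below is a schematic re-implementation combining both, not the released code itself:

    import numpy as np
    from scipy.stats import spearmanr

    def spearman_mc(x, y, xerr, yerr, n_iter=1000, seed=0):
        """Monte Carlo uncertainty on Spearman's rho via bootstrap
        resampling combined with perturbation within the errors."""
        rng = np.random.default_rng(seed)
        n = len(x)
        rho = np.empty(n_iter)
        for i in range(n_iter):
            idx = rng.integers(0, n, n)             # bootstrap resample
            xp = x[idx] + rng.normal(0.0, xerr[idx])  # perturb within errors
            yp = y[idx] + rng.normal(0.0, yerr[idx])
            rho[i] = spearmanr(xp, yp).correlation
        return rho.mean(), rho.std()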

[ascl:1201.001] McScatter: Three-Body Scattering with Stellar Evolution

McScatter illustrates a method of combining stellar dynamics with stellar evolution. The method is intended for elaborate applications, especially the dynamical evolution of rich star clusters. The dynamics is based on binary scattering in a multi-mass field of stars with uniform density and velocity dispersion, using the scattering cross section of Giersz (MNRAS, 2001, 324, 218-30).

[ascl:1210.017] McPHAC: McGill Planar Hydrogen Atmosphere Code

The McGill Planar Hydrogen Atmosphere Code (McPHAC) v1.1 calculates the hydrostatic equilibrium structure and emergent spectrum of an unmagnetized hydrogen atmosphere in the plane-parallel approximation at surface gravities appropriate for neutron stars. McPHAC incorporates several improvements over previous codes for which tabulated model spectra are available: (1) Thomson scattering is treated anisotropically, which is shown to result in a 0.2%-3% correction in the emergent spectral flux across the 0.1-5 keV passband; (2) the McPHAC source code is made available to the community, allowing it to be scrutinized and modified by other researchers wishing to study or extend its capabilities; and (3) the numerical uncertainty resulting from the discrete and iterative solution is studied as a function of photon energy, indicating that McPHAC is capable of producing spectra with numerical uncertainties <0.01%. The accuracy of the spectra may at present be limited to ~1%, but McPHAC enables researchers to study the impact of uncertain inputs and additional physical effects, thereby supporting future efforts to reduce those inaccuracies. Comparison of McPHAC results with spectra from one of the previous model atmosphere codes (NSA) shows agreement to ≲1% near the peaks of the emergent spectra. However, in the Wien tail a significant deficit of flux in the spectra of the previous model is revealed, determined to be due to the previous work not considering large enough optical depths at the highest photon frequencies. The deficit is most significant for spectra with T_eff < 10^5.6 K, though even there it may not be of much practical importance for most observations.

[ascl:1407.004] MCMAC: Monte Carlo Merger Analysis Code

Monte Carlo Merger Analysis Code (MCMAC) aids in the study of merging clusters. It takes observed priors on each subcluster's mass, radial velocity, and projected separation, draws randomly from those priors, and uses them in an analytic model to get posterior PDFs for merger dynamic properties of interest (e.g., collision velocity, time since collision).

[ascl:1107.015] McLuster: A Tool to Make a Star Cluster

The tool McLuster is an open source code that can be used to either set up initial conditions for N-body computations or, alternatively, to generate artificial star clusters for direct investigation. There are two different versions of the code, one basic version for generating all kinds of unevolved clusters (in the following called mcluster) and one for setting up evolved stellar populations at a given age. The former is completely contained in the C file main.c. The latter (dubbed mcluster_sse) is more complex and requires additional FORTRAN routines, namely the Single-Star Evolution (SSE) routines by Hurley, Pols & Tout (ascl:1303.015) that are provided with the McLuster code.

[ascl:1511.008] MCAL: M dwarf metallicity and temperature calculator

MCAL calculates high precision metallicities and effective temperatures for M dwarfs; the method behaves properly down to R = 40 000 and S/N = 25, and results were validated against a sample of stars in common with SOPHIE high resolution spectra.

[ascl:1204.005] MC3D: Monte-Carlo 3D Radiative Transfer Code

MC3D is a 3D continuum radiative transfer code; it is based on the Monte-Carlo method and solves the radiative transfer problem self-consistently. It is designed for the simulation of dust temperatures in arbitrary geometric configurations and the resulting observables: spectral energy distributions, wavelength-dependent images, and polarization maps. The main objective is the investigation of "dust-dominated" astrophysical systems such as young stellar objects surrounded by an optically thick circumstellar disk and an optically thin(ner) envelope, debris disks around more evolved stars, asymptotic giant branch stars, the dust component of the interstellar medium, and active galactic nuclei.

[ascl:1610.013] MC3: Multi-core Markov-chain Monte Carlo code

MC3 (Multi-core Markov-chain Monte Carlo) is a Bayesian statistics tool that can be executed from the shell prompt or interactively through the Python interpreter with single- or multiple-CPU parallel computing. It offers Markov-chain Monte Carlo (MCMC) posterior-distribution sampling for several algorithms, Levenberg-Marquardt least-squares optimization, and uniform non-informative, Jeffreys non-informative, or Gaussian-informative priors. MC3 can share a single value among multiple parameters and hold parameters fixed at constant values, and offers Gelman-Rubin convergence testing and correlated-noise estimation with time-averaging or wavelet-based likelihood estimation methods.
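
The Gelman-Rubin convergence test mentioned above is the standard potential scale reduction factor computed from several independent chains; the following is a generic sketch of that diagnostic, not MC3's internal implementation.

    import numpy as np

    def gelman_rubin(chains):
        """Potential scale reduction factor R-hat for one parameter.

        chains: array of shape (n_chains, n_samples)."""
        m, n = chains.shape
        chain_means = chains.mean(axis=1)
        B = n * chain_means.var(ddof=1)          # between-chain variance
        W = chains.var(axis=1, ddof=1).mean()    # mean within-chain variance
        var_plus = (n - 1) / n * W + B / n       # pooled variance estimate
        return np.sqrt(var_plus / W)

    rng = np.random.default_rng(0)
    chains = rng.normal(0.0, 1.0, size=(4, 5000))  # four well-mixed chains
    print(gelman_rubin(chains))  # values near 1.0 indicate convergence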

[ascl:1703.014] MC-SPAM: Monte-Carlo Synthetic-Photometry/Atmosphere-Model

MC-SPAM (Monte-Carlo Synthetic-Photometry/Atmosphere-Model) generates limb-darkening coefficients from models that are comparable to transit photometry; it extends the original SPAM algorithm by Howarth (2011) by taking into consideration the uncertainty on the stellar and transit parameters of the system under analysis.

[ascl:1705.008] MBProj2: Multi-Band x-ray surface brightness PROJector 2

MBProj2 obtains thermodynamic profiles of galaxy clusters. It forward-models cluster X-ray surface brightness profiles in multiple bands, optionally assuming hydrostatic equilibrium. The code is a set of Python classes the user can use or extend. When modelling a cluster assuming hydrostatic equilibrium, the user chooses a form for the density profile (e.g. binning or a beta model), the metallicity profile, and the dark matter profile (e.g. NFW). If hydrostatic equilibrium is not assumed, a temperature profile model is used instead of the dark matter profile. The code uses the emcee Markov Chain Monte Carlo code (ascl:1303.002) to sample the model parameters, using these to produce chains of thermodynamic profiles.

[ascl:1602.020] mbb_emcee: Modified Blackbody MCMC

Mbb_emcee fits modified blackbodies to photometry data using an affine-invariant MCMC. It has a large number of options which, for example, allow computation of the IR luminosity or dust mass as part of the fit. Carrying out a fit produces an HDF5 output file containing the results, which can either be read directly or read back into an mbb_results object for analysis. Upper and lower limits can be imposed, as well as Gaussian priors on the model parameters. These additions are useful for analyzing poorly constrained data. In addition to the standard Python packages scipy, numpy, and cython, mbb_emcee requires emcee (ascl:1303.002), Astropy (ascl:1304.002), h5py, and, for unit tests, nose.
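
The model being fit is the standard optically thin modified (grey) blackbody, S_ν ∝ ν^β B_ν(T). A minimal sketch of that function, independent of mbb_emcee's API:

    import numpy as np

    H = 6.626e-34   # Planck constant [J s]
    K = 1.381e-23   # Boltzmann constant [J/K]
    C = 2.998e8     # speed of light [m/s]

    def modified_blackbody(nu, T, beta, norm):
        """Optically thin greybody: S_nu = norm * nu^beta * B_nu(T)."""
        b_nu = 2 * H * nu**3 / C**2 / np.expm1(H * nu / (K * T))
        return norm * nu**beta * b_nu

    # Evaluate at 250, 350, 500 micron (Herschel/SPIRE-like bands).
    wavelengths = np.array([250e-6, 350e-6, 500e-6])  # metres
    nu = C / wavelengths
    print(modified_blackbody(nu, T=35.0, beta=1.8, norm=1.0))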

[ascl:1205.008] Mayavi2: 3D Scientific Data Visualization and Plotting

Mayavi provides general-purpose 3D scientific visualizations. It offers easy interactive tools for data visualization that fit with the scientific user's workflow. Mayavi provides several entry points: a full-blown interactive application; a Python library with both a MATLAB-like interface focused on easy scripting and a feature-rich object hierarchy; widgets associated with these objects for assembling in a domain-specific application; and plugins that work with a general-purpose application-building framework.
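
A short example of the mlab scripting entry point (assumes Mayavi and a supported GUI toolkit are installed):

    import numpy as np
    from mayavi import mlab

    # Parametric curve rendered as a 3D tube, colored by a scalar.
    t = np.linspace(0, 4 * np.pi, 200)
    x, y, z = np.cos(t), np.sin(t), 0.1 * t
    mlab.plot3d(x, y, z, t, tube_radius=0.025)
    mlab.show()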

[ascl:1601.018] MATPHOT: Stellar photometry and astrometry with discrete point spread functions

A discrete Point Spread Function (PSF) is a sampled version of a continuous two-dimensional PSF. The shape information about the photon scattering pattern of a discrete PSF is typically encoded using a numerical table (matrix) or a FITS image file. MATPHOT shifts discrete PSFs within an observational model using a 21-pixel-wide damped sinc function, and position partial derivatives are computed using a five-point numerical differentiation formula. MATPHOT achieves accurate and precise stellar photometry and astrometry of undersampled CCD observations by using supersampled discrete PSFs that are sampled two, three, or more times more finely than the observational data.
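
The five-point differentiation formula mentioned above is the standard O(h^4) central-difference stencil; a generic sketch (not MATPHOT's own implementation):

    import numpy as np

    def five_point_derivative(f, x, h=1e-3):
        """Five-point central difference; truncation error is O(h^4)."""
        return (-f(x + 2*h) + 8*f(x + h) - 8*f(x - h) + f(x - 2*h)) / (12*h)

    print(five_point_derivative(np.sin, 0.0))  # ~1.0 = cos(0)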

[ascl:1407.005] MATLAB package for astronomy and astrophysics

The MATLAB package for astronomy and astrophysics is a collection of software tools and modular functions for astronomy and astrophysics, written in the MATLAB environment. It includes over 700 MATLAB functions and a few tens of data files and astronomical catalogs. The scripts cover a wide range of subjects including: astronomical image processing, ds9 control, astronomical spectra, optics and diffraction phenomena, catalog retrieval and searches, celestial maps and projections, Solar System ephemerides, planar and spherical geometry, time and coordinates conversion and manipulation, cosmology, gravitational lensing, function fitting, general utilities, plotting utilities, statistics, and time series analysis.

[ascl:1406.010] MATCH: A program for matching star lists

MATCH matches up items in two different lists, which can have two different systems of coordinates. The program allows the two sets of coordinates to be related by a linear, quadratic, or cubic transformation. MATCH was designed and written to work on lists of stars and other astronomical objects but can be applied to other types of data. In order to match two lists of N points, the main algorithm calls for O(N^6) operations; though not the most efficient choice, it does allow for arbitrary translation, rotation, and scaling.
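
Once a candidate pairing between the two lists is found, recovering the transformation itself reduces to a linear least-squares problem. The sketch below fits a linear (affine) transform to synthetic matched pairs; it illustrates only this final step, not MATCH's algorithm for finding the pairing.

    import numpy as np

    # Matched coordinate pairs: (x, y) in list A maps to (u, v) in list B.
    rng = np.random.default_rng(1)
    xy = rng.uniform(0, 1000, size=(20, 2))
    # True transform: rotation + scale + offset, plus small noise.
    theta, s, tx, ty = 0.1, 1.02, 50.0, -20.0
    R = s * np.array([[np.cos(theta), -np.sin(theta)],
                      [np.sin(theta),  np.cos(theta)]])
    uv = xy @ R.T + [tx, ty] + rng.normal(0, 0.05, size=xy.shape)

    # Solve [x y 1] @ M = [u v] for the 3x2 transform matrix M.
    A = np.column_stack([xy, np.ones(len(xy))])
    M, *_ = np.linalg.lstsq(A, uv, rcond=None)
    print(M)  # rows: x-coefficients, y-coefficients, offsets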

[ascl:1401.008] massconvert: Halo Mass Conversion

massconvert, written in Fortran, provides driver and fitting routines for converting halo mass definitions from one spherical overdensity to another assuming an NFW density profile. In surveys that probe ever lower cluster masses and temperatures, sample variance is generally comparable to or greater than shot noise and thus cannot be neglected in deriving precision cosmological constraints; massconvert offers an accurate fitting formula for the conversion between different definitions of halo mass.
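
The underlying conversion is well defined: for an NFW profile the enclosed mass scales as μ(x) = ln(1+x) − x/(1+x), so converting between overdensity definitions amounts to a one-dimensional root find. A sketch of that calculation in Python (massconvert itself is Fortran, and this is the exact solve rather than its fitting formula):

    import numpy as np
    from scipy.optimize import brentq

    def mu(x):
        """NFW enclosed-mass profile shape: ln(1+x) - x/(1+x)."""
        return np.log1p(x) - x / (1.0 + x)

    def convert_mass(m1, c1, delta1, delta2):
        """Convert M_delta1 (concentration c1) to M_delta2 for an NFW halo.

        Both overdensities must be w.r.t. the same reference density."""
        # Solve delta2 * x^3 / mu(x) = delta1 * c1^3 / mu(c1) for x = R2/rs.
        rhs = delta1 * c1**3 / mu(c1)
        x2 = brentq(lambda x: delta2 * x**3 / mu(x) - rhs, 1e-3, 100.0)
        return m1 * mu(x2) / mu(c1), x2

    m500, c500 = convert_mass(1e14, 5.0, 200.0, 500.0)  # Msun, c_200 = 5
    print(f"M500 = {m500:.3e} Msun, c500 = {c500:.2f}")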

[ascl:1104.004] MASSCLEAN: MASSive CLuster Evolution and ANalysis Package

MASSCLEAN is a sophisticated and robust stellar cluster image and photometry simulation package. This package is able to create color-magnitude diagrams and standard FITS images in any of the traditional optical and near-infrared bands based on cluster characteristics input by the user, including but not limited to distance, age, mass, radius and extinction. At the limit of very distant, unresolved clusters, we have checked the integrated colors created in MASSCLEAN against those from other simple stellar population (SSP) models with consistent results. Because the algorithm populates the cluster with a discrete number of tenable stars, it can be used as part of a Monte Carlo Method to derive the probabilistic range of characteristics (integrated colors, for example) consistent with a given cluster mass and age.

[ascl:1101.009] MasQU: Finite Differences on Masked Irregular Stokes Q,U Grids

The detection of B-mode polarization in the CMB is one of the most important outstanding tests of inflationary cosmology. One of the necessary steps for extracting polarization information in the CMB is reducing contamination from so-called "ambiguous modes" on a masked sky, which contain leakage from the larger E-mode signal. This can be achieved by utilising derivative operators on the real-space Stokes Q and U parameters. This paper presents an algorithm and a software package to perform this procedure on the nearly full sky, i.e., with projects such as the Planck Surveyor and future satellites in mind; in particular, the package can perform finite differences on masked, irregular grids and is applied to a semi-regular spherical pixellization, the HEALPix grid. The formalism reduces to the known finite-difference solutions in the case of a regular grid. We quantify full-sky improvements on the possible bounds on the CMB B-mode signal. We find that in the specific case of E and B-mode separation, there exists a "pole problem" in our formalism which produces signal contamination at very low multipoles l. Several solutions to the "pole problem" are presented; one proposed solution facilitates a calculation of a general Gaussian quadrature scheme, which finds application in calculating accurate harmonic coefficients on the HEALPix sphere. Nevertheless, on a masked sphere the software represents a considerable reduction in B-mode noise from limited sky coverage.

[ascl:1605.001] MARZ: Redshifting Program

MARZ analyzes objects and produces high quality spectroscopic redshift measurements. Spectra not matched correctly by the automatic algorithm can be redshifted manually by cycling automatic results, manual template comparison, or marking spectral features. The software has an intuitive interface and powerful automatic matching capabilities on spectra, and can be run interactively or from the command line, and runs as a Web application. MARZ can be run on a local server; it is also available for use on a public server.

[ascl:1711.020] MARXS: Multi-Architecture Raytrace Xray mission Simulator

MARXS (Multi-Architecture-Raytrace-Xraymission-Simulator) simulates X-ray observatories. Primarily designed to simulate X-ray instruments on astronomical X-ray satellites and sounding rocket payloads, it can also be used to ray-trace experiments in the laboratory. MARXS performs polarization Monte-Carlo ray-trace simulations from a source (astronomical or lab) through a collection of optical elements such as mirrors, baffles, and gratings to a detector.

[ascl:1302.001] MARX: Model of AXAF Response to X-rays

MARX (Model of AXAF Response to X-rays) is a suite of programs designed to enable the user to simulate the on-orbit performance of the Chandra satellite. MARX provides a detailed ray-trace simulation of how Chandra responds to a variety of astrophysical sources and can generate standard FITS events files and images as output. It contains models for the HRMA mirror system onboard Chandra as well as the HETG and LETG gratings and all focal plane detectors.

[ascl:1011.004] MARS: The MAGIC Analysis and Reconstruction Software

With the commissioning of the second MAGIC gamma-ray Cherenkov telescope situated close to MAGIC-I, the standard analysis package of the MAGIC collaboration, MARS, has been upgraded in order to perform the stereoscopic reconstruction of the detected atmospheric showers. MARS is a ROOT-based code written in C++, which includes all the necessary algorithms to transform the raw data recorded by the telescopes into information about the physics parameters of the observed targets. An overview of the methods for extracting the basic shower parameters is presented, together with a description of the tools used in the background discrimination and in the estimation of the gamma-ray source spectra.

[ascl:1807.005] MAPPINGS V: Astrophysical plasma modeling code

MAPPINGS V is an update of the MAPPINGS code (ascl:1306.008) and provides new cooling function computations for optically thin plasmas based on the greatly expanded atomic data of the CHIANTI 8 database. The number of cooling and recombination lines has been expanded from ~2000 to over 80,000, and temperature-dependent spline-based collisional data have been adopted for the majority of transitions. The expanded atomic data set provides improved modeling of both thermally ionized and photoionized plasmas; the code is now capable of predicting detailed X-ray spectra of nonequilibrium plasmas over the full nonrelativistic temperature range, increasing its utility in cosmological simulations, in modeling cooling flows, and in generating accurate models for the X-ray emission from shocks in supernova remnants.

[ascl:1306.008] MAPPINGS III: Modelling And Prediction in PhotoIonized Nebulae and Gasdynamical Shocks

MAPPINGS III is a general purpose astrophysical plasma modelling code. It is principally intended to predict emission line spectra of medium and low density plasmas subjected to different levels of photoionization and ionization by shockwaves. MAPPINGS III tracks up to 16 atomic species in all stages of ionization, over a useful range of 10^2 to 10^8 K. It treats spherical and plane parallel geometries in equilibrium and time-dependent models. MAPPINGS III is useful for computing models of HI and HII regions, planetary nebulae, novae, supernova remnants, Herbig-Haro shocks, active galaxies, the intergalactic medium and the interstellar medium in general. The present version of MAPPINGS III is a large FORTRAN program that runs with a simple TTY interface for historical and portability reasons.

[ascl:1308.003] MapCurvature: Map Projections

MapCurvature, written in IDL, can create map projections with Goldberg-Gott indicatrices. These indicatrices measure the flexion and skewness of a map, and are useful for determining whether features are faithfully reproduced on a particular projection.

[ascl:1305.012] MapCUMBA: Multi-grid map-making algorithm for CMB experiments

The MapCUMBA package applies a multigrid fast iterative Jacobi algorithm for map-making in the context of CMB experiments.

[ascl:1202.005] Mangle: Angular Mask Software

Mangle deals accurately and efficiently with complex angular masks, such as occur typically in galaxy surveys. Mangle performs the following tasks: converts masks between many handy formats (including HEALPix); rapidly finds the polygons containing a given point on the sphere; rapidly decomposes a set of polygons into disjoint parts; expands masks in spherical harmonics; generates random points with weights given by the mask; and implements computations for correlation function analysis. To mangle, a mask is an arbitrary union of arbitrarily weighted angular regions bounded by arbitrary numbers of edges. The restrictions on the mask are only (1) that each edge must be part of some circle on the sphere (but not necessarily a great circle), and (2) that the weight within each subregion of the mask must be constant. Mangle is complementary to and integrated with the HEALPix package (ascl:1107.018); mangle works with vector graphics whereas HEALPix works with pixels.
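
Since every mangle polygon is an intersection of spherical caps, the basic point-in-polygon test reduces to dot products on the unit sphere. A minimal sketch of that geometric test (illustrative only, not mangle's optimized code):

    import numpy as np

    def radec_to_vec(ra_deg, dec_deg):
        """Unit vector on the sphere from RA/Dec in degrees."""
        ra, dec = np.radians(ra_deg), np.radians(dec_deg)
        return np.array([np.cos(dec) * np.cos(ra),
                         np.cos(dec) * np.sin(ra),
                         np.sin(dec)])

    def in_cap(point, cap_axis, cap_radius_deg):
        """True if point lies within the (not necessarily great) circle
        of angular radius cap_radius_deg around cap_axis."""
        return point @ cap_axis >= np.cos(np.radians(cap_radius_deg))

    # A mangle-style polygon is an intersection of caps: test all of them.
    p = radec_to_vec(150.0, 2.0)
    caps = [(radec_to_vec(151.0, 2.5), 2.0), (radec_to_vec(149.5, 1.8), 1.5)]
    print(all(in_cap(p, axis, r) for axis, r in caps))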

[ascl:1502.021] MaLTPyNT: Quick look timing analysis for NuSTAR data

MaLTPyNT (Matteo's Libraries and Tools in Python for NuSTAR Timing) provides a quick-look timing analysis of NuSTAR data, properly treating orbital gaps and exploiting the presence of two independent detectors by using the cospectrum as a proxy for the power density spectrum. The output of the analysis is a cospectrum, or a power density spectrum, that can be fitted with XSPEC (ascl:9910.005) or ISIS (ascl:1302.002). The software also calculates time lags. Though written for NuSTAR data, MaLTPyNT can also perform standard spectral analysis on X-ray data from other satellites such as XMM-Newton and RXTE.

[ascl:1307.009] MAH: Minimum Atmospheric Height

MAH calculates the posterior distribution of the "minimum atmospheric height" (MAH) of an exoplanet by inputting the joint posterior distribution of the mass and radius. The code collapses the two dimensions of mass and radius into a one-dimensional term that most directly speaks to whether the planet has an atmosphere or not. The joint mass-radius posteriors derived from a fit of some exoplanet data (likely using MCMC) can be used by MAH to evaluate the posterior distribution of R_MAH, from which the significance of a non-zero R_MAH (i.e., that an atmosphere is present) is calculated.

[ascl:1106.010] MAGPHYS: Multi-wavelength Analysis of Galaxy Physical Properties

MAGPHYS is a self-contained, user-friendly model package to interpret observed spectral energy distributions of galaxies in terms of galaxy-wide physical parameters pertaining to the stars and the interstellar medium. MAGPHYS is optimized to derive statistical constraints of fundamental parameters related to star formation activity and dust content (e.g. star formation rate, stellar mass, dust attenuation, dust temperatures) of large samples of galaxies using a wide range of multi-wavelength observations. A Bayesian approach is used to interpret the SEDs all the way from the ultraviolet/optical to the far-infrared.

[ascl:1502.014] Magnetron: Fitting bursts from magnetars

Magnetron, written in Python, decomposes magnetar bursts into a superposition of small spike-like features with a simple functional form, where the number of model components is itself part of the inference problem. Markov Chain Monte Carlo (MCMC) sampling and reversible jumps between models with different numbers of parameters are used to characterize the posterior distributions of the model parameters and the number of components per burst.

[ascl:1010.054] MagnetiCS.c: Cosmic String Loop Evolution and Magnetogenesis

Large-scale coherent magnetic fields are observed in galaxies and clusters, but their ultimate origin remains a mystery. We reconsider the prospects for primordial magnetogenesis by a cosmic string network. We show that the magnetic flux produced by long strings has been overestimated in the past, and give improved estimates. We also compute the fields created by the loop population, and find that it gives the dominant contribution to the total magnetic field strength on present-day galactic scales. We present numerical results obtained by evolving semi-analytic models of string networks (including both one-scale and velocity-dependent one-scale models) in a Lambda-CDM cosmology, including the forces and torques on loops from Hubble redshifting, dynamical friction, and gravitational wave emission. Our predictions include the magnetic field strength as a function of correlation length, as well as the volume covered by magnetic fields. We conclude that string networks could account for magnetic fields on galactic scales, but only if coupled with an efficient dynamo amplification mechanism.

[ascl:1303.009] MAGIX: Modeling and Analysis Generic Interface for eXternal numerical codes

MAGIX provides an interface between existing codes and an iterating engine that minimizes deviations of the model results from available observational data; it constrains the values of the model parameters and provides corresponding error estimates. Many models (and, in principle, not only astrophysical models) can be plugged into MAGIX to explore their parameter space and find the set of parameter values that best fits observational/experimental data. MAGIX complies with the data structures and reduction tools of Atacama Large Millimeter Array (ALMA), but can be used with other astronomical and with non-astronomical data.

[ascl:1604.004] magicaxis: Pretty scientific plotting with minor-tick and log minor-tick support

The R suite magicaxis provides functions for base scientific plotting, with particular emphasis on pretty axis labelling in the many circumstances that arise in scientific figures. It also includes functions for generating images and contours that reflect the 2D quantile levels of the data, designed particularly for output of MCMC posteriors where visualizing the location of the 68% and 95% 2D quantiles for covariant parameters is a necessary part of post-MCMC analysis. In addition, the package can generate low and high error bars and allows clipping of values, rejection of bad values, and log stretching.

[ascl:1709.010] MagIC: Fluid dynamics in a spherical shell simulator

MagIC simulates fluid dynamics in a spherical shell. It solves for the Navier-Stokes equation including Coriolis force, optionally coupled with an induction equation for Magneto-Hydro Dynamics (MHD), a temperature (or entropy) equation and an equation for chemical composition under both the anelastic and the Boussinesq approximations. MagIC uses either Chebyshev polynomials or finite differences in the radial direction and spherical harmonic decomposition in the azimuthal and latitudinal directions. The time-stepping scheme relies on a semi-implicit Crank-Nicolson scheme for the linear terms of the MHD equations and an Adams-Bashforth scheme for the non-linear terms and the Coriolis force.

[ascl:1010.044] MAESTRO: An Adaptive Low Mach Number Hydrodynamics Algorithm for Stellar Flows

Many astrophysical phenomena are highly subsonic, requiring specialized numerical methods suitable for long-time integration. In a series of earlier papers we described the development of MAESTRO, a low Mach number stellar hydrodynamics code that can be used to simulate long-time, low-speed flows that would be prohibitively expensive to model using traditional compressible codes. MAESTRO is based on an equation set derived using low Mach number asymptotics; this equation set does not explicitly track acoustic waves and thus allows a significant increase in the time step. MAESTRO is suitable for two- and three-dimensional local atmospheric flows as well as three-dimensional full-star flows. Here, we continue the development of MAESTRO by incorporating adaptive mesh refinement (AMR). The primary difference between MAESTRO and other structured grid AMR approaches for incompressible and low Mach number flows is the presence of the time-dependent base state, whose evolution is coupled to the evolution of the full solution. We also describe how to incorporate the expansion of the base state for full-star flows, which involves a novel mapping technique between the one-dimensional base state and the Cartesian grid, as well as a number of overall improvements to the algorithm. We examine the efficiency and accuracy of our adaptive code, and demonstrate that it is suitable for further study of our initial scientific application, the convective phase of Type Ia supernovae.

[ascl:1110.018] MADmap: Fast Parallel Maximum Likelihood CMB Map Making Code

MADmap produces maximum-likelihood images of the sky from time-ordered data which include correlated noise, such as those gathered by Cosmic Microwave Background (CMB) experiments. It works efficiently on platforms ranging from small workstations to the most massively parallel supercomputers. Map-making is a critical step in the analysis of all CMB data sets, and the maximum-likelihood approach is the most accurate and widely applicable algorithm; however, it is a computationally challenging task. This challenge will only increase with the next generation of ground-based, balloon-borne and satellite CMB polarization experiments. The faintness of the B-mode signal that these experiments seek to measure requires them to gather enormous data sets. MADmap has the ability to address problems typically encountered in the analysis of realistic CMB data sets. The massively parallel and distributed implementation is detailed and scaling complexities are given for the resources required. MADmap is capable of analyzing the largest data sets now being collected on computing resources currently available.
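
The maximum-likelihood map referred to above is the generalized least-squares solution m̂ = (Aᵀ N⁻¹ A)⁻¹ Aᵀ N⁻¹ d, where A is the pointing matrix, N the noise covariance, and d the time-ordered data. A dense toy version for white noise follows; real codes like MADmap solve the same system iteratively on distributed machines rather than forming these matrices explicitly.

    import numpy as np

    rng = np.random.default_rng(3)
    n_samples, n_pix = 500, 8

    # Pointing matrix A: each time sample hits exactly one sky pixel.
    hits = rng.integers(0, n_pix, n_samples)
    A = np.zeros((n_samples, n_pix))
    A[np.arange(n_samples), hits] = 1.0

    sky = rng.normal(0.0, 1.0, n_pix)                  # true map
    d = A @ sky + rng.normal(0.0, 0.5, n_samples)      # noisy timeline

    # Maximum-likelihood map; with white noise this reduces to binning,
    # but the same expression holds for correlated N.
    Ninv = np.eye(n_samples) / 0.5**2
    m = np.linalg.solve(A.T @ Ninv @ A, A.T @ Ninv @ d)
    print(np.abs(m - sky).max())                       # small residual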

[ascl:1712.012] MadDM: Computation of dark matter relic abundance

MadDM computes dark matter relic abundance and dark matter nucleus scattering rates in a generic model. The code is based on the existing MadGraph 5 architecture and as such is easily integrable into any MadGraph collider study. A simple Python interface offers a level of user-friendliness characteristic of MadGraph 5 without sacrificing functionality. MadDM is able to calculate the dark matter relic abundance in models which include a multi-component dark sector, resonance annihilation channels and co-annihilations. The direct detection module of MadDM calculates spin-independent/spin-dependent dark matter-nucleon cross sections and differential recoil rates as a function of recoil energy, angle and time. The code provides a simplified simulation of detector effects for a wide range of target materials and volumes.

[ascl:1306.010] MADCOW: Microwave Anisotropy Dataset Computational softWare

MADCOW is a set of parallelized programs written in ANSI C and Fortran 77 that perform a maximum likelihood analysis of visibility data from interferometers observing the cosmic microwave background (CMB) radiation. This software has been used to produce power spectra of the CMB with the Very Small Array (VSA) telescope.

[ascl:1209.006] macula: Rotational modulations in the photometry of spotted stars

Photometric rotational modulations due to starspots remain the most common and accessible way to study stellar activity. Modelling rotational modulations allows one to invert the observations into several basic parameters, such as the rotation period, spot coverage, stellar inclination and differential rotation rate. The most widely used analytic model for this inversion comes from Budding (1977) and Dorren (1987), who considered circular, grey starspots for a linearly limb darkened star. That model is extended to be more suitable in the analysis of high precision photometry such as that by Kepler. Macula, a Fortran 90 code, provides several improvements, such as non-linear limb darkening of the star and spot, a single-domain analytic function, partial derivatives for all input parameters, temporal partial derivatives, diluted light compensation, instrumental offset normalisations, differential rotation, starspot evolution and predictions of transit depth variations due to unocculted spots. The inclusion of non-linear limb darkening means macula has a maximum photometric error an order of magnitude less than that of Dorren (1987) for Sun-like stars observed in the Kepler-bandpass. The code executes three orders of magnitude faster than comparable numerical codes, making it well-suited for inference problems.

[ascl:1607.018] LZIFU: IDL emission line fitting pipeline for integral field spectroscopy data

LZIFU (LaZy-IFU) is an emission line fitting pipeline for integral field spectroscopy (IFS) data. Written in IDL, the pipeline turns IFS data into 2D emission line flux and kinematic maps for further analysis. LZIFU has been applied to and tested extensively on various IFS data, including the SAMI Galaxy Survey, the Wide-Field Spectrograph (WiFeS), the CALIFA survey, the S7 survey and the MUSE instrument on the VLT.

[ascl:1803.012] LWPC: Long Wavelength Propagation Capability

Long Wavelength Propagation Capability (LWPC), written as a collection of separate programs that perform unique actions, generates geographical maps of signal availability for coverage analysis. The program makes it easy to set up these displays by automating most of the required steps. The user specifies the transmitter location and frequency, the orientation of the transmitting and receiving antennae, and the boundaries of the operating area. The program automatically selects paths along geographic bearing angles to ensure that the operating area is fully covered. The diurnal conditions and other relevant geophysical parameters are then determined along each path. After the mode parameters along each path are determined, the signal strength along each path is computed. The signal strength along the paths is then interpolated onto a grid overlying the operating area. The final grid of signal strength values is used to display the signal-strength in a geographic display. The LWPC uses character strings to control programs and to specify options. The control strings have the same meaning and use among all the programs.

[ascl:1201.016] LumFunc: Luminosity Function Modeling

LumFunc is a numerical code to model the Luminosity Function based on central galaxy luminosity-halo mass and total galaxy luminosity-halo mass relations. The code can handle rest b_J-band (2dFGRS), r'-band (SDSS), and K-band luminosities, and any redshift with redshift dependences specified by the user. It separates the luminosity function (LF) into conditional luminosity functions, LF as a function of halo mass, and also into galaxy types. By specifying a narrow mass range, the code will return the conditional luminosity functions. The code returns luminosity functions for galaxy types as well (broadly divided into early-type and late-type). The code also models the cluster luminosity function, either mass averaged or for individual clusters.

[ascl:1404.001] LTS_LINEFIT & LTS_PLANEFIT: LTS fit of lines or planes

LTS_LINEFIT and LTS_PLANEFIT are IDL programs to robustly fit lines and planes to data with intrinsic scatter. The code combines the Least Trimmed Squares (LTS) robust technique, proposed by Rousseeuw (1984) and optimized in Rousseeuw & Van Driessen (2006), into a least-squares fitting algorithm which allows for intrinsic scatter. This method makes the fit converge to the correct solution even in the presence of a large number of catastrophic outliers, where the much simpler σ-clipping approach can converge to the wrong solution.
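
The LTS idea is to minimize the sum of the h smallest squared residuals rather than all of them. The toy random-subset search below illustrates why this survives catastrophic outliers; it is a sketch of the principle only, not Cappellari's optimized implementation (which also models intrinsic scatter).

    import numpy as np

    def lts_line(x, y, h=None, n_trials=500, seed=0):
        """Least Trimmed Squares line fit: minimize the sum of the h
        smallest squared residuals (toy random-subset search)."""
        rng = np.random.default_rng(seed)
        n = len(x)
        h = h or (n // 2 + 1)
        best = (np.inf, None)
        for _ in range(n_trials):
            i = rng.choice(n, 2, replace=False)      # elemental 2-point fit
            if x[i[0]] == x[i[1]]:
                continue
            slope = (y[i[1]] - y[i[0]]) / (x[i[1]] - x[i[0]])
            icpt = y[i[0]] - slope * x[i[0]]
            r2 = np.sort((y - icpt - slope * x) ** 2)[:h]
            best = min(best, (r2.sum(), (slope, icpt)), key=lambda t: t[0])
        return best[1]

    rng = np.random.default_rng(1)
    x = rng.uniform(0, 10, 100)
    y = 2.0 * x + 1.0 + rng.normal(0, 0.2, 100)
    y[:30] += rng.uniform(5, 20, 30)                 # 30% catastrophic outliers
    print(lts_line(x, y))  # close to (2.0, 1.0) despite the outliers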

[ascl:1312.006] LTL: The Little Template Library

LTL provides dynamic arrays of up to 7 dimensions, subarrays and slicing, support for fixed-size vectors and matrices including basic linear algebra operations, expression-template-based evaluation, and I/O facilities for ASCII and FITS format files. Utility classes for command-line processing and configuration-file processing are provided as well.

[ascl:1505.012] LSSGALPY: Visualization of the large-scale environment around galaxies on the 3D space

LSSGALPY provides visualization tools to compare the 3D positions of a sample (or samples) of isolated systems with respect to the locations of the galaxies in the large-scale structure (LSS) of their local and/or large-scale environments. The interactive tools use different projections in the 3D space (right ascension, declination, and redshift) to study the relation of the galaxies to the LSS. The tools permit visualization of the locations of the galaxies for different values of redshifts and redshift ranges; the relationship of isolated galaxies, isolated pairs, and isolated triplets to the galaxies in the LSS can be visualized for different values of the declinations and declination ranges.

[ascl:1612.002] LSDCat: Line Source Detection and Cataloguing Tool

LSDCat is a conceptually simple but robust and efficient detection package for emission lines in wide-field integral-field spectroscopic datacubes. The detection utilizes a 3D matched-filtering approach for compact single emission line objects. Furthermore, the software measures fluxes and extents of detected lines. LSDCat is implemented in Python, with a focus on fast processing of large data-volumes.

[ascl:1209.003] LSD: Large Survey Database framework

The Large Survey Database (LSD) is a Python framework and DBMS for distributed storage, cross-matching and querying of large survey catalogs (>10^9 rows, >1 TB). The primary driver behind its development is the analysis of Pan-STARRS PS1 data. It is specifically optimized for fast queries and parallel sweeps of positionally and temporally indexed datasets. It transparently scales to more than >10^2 nodes, and can be made to function in "shared nothing" architectures.

[ascl:1807.033] LSC: Supervised classification of time-series variable stars

LSC (LINEAR Supervised Classification) trains a number of classifiers, including random forest and K-nearest neighbor, to classify variable stars and compares the results to determine which classifier is most successful. Written in R, the package includes anomaly detection code for testing the application of the selected classifier to new data, thus enabling the creation of highly reliable data sets of classified variable stars.

[ascl:1602.005] LRGS: Linear Regression by Gibbs Sampling

LRGS (Linear Regression by Gibbs Sampling) implements a Gibbs sampler to solve the problem of multivariate linear regression with uncertainties in all measured quantities and intrinsic scatter. LRGS extends an algorithm by Kelly (2007) that used Gibbs sampling for performing linear regression in fairly general cases in two ways: generalizing the procedure for multiple response variables, and modeling the prior distribution of covariates using a Dirichlet process.

[ascl:1306.012] LRG DR7 Likelihood Software

This software computes likelihoods for the Luminous Red Galaxies (LRG) data from the Sloan Digital Sky Survey (SDSS). It includes a patch to the existing CAMB software (the February 2009 release) to calculate the theoretical LRG halo power spectrum for various models. The code is written in Fortran 90 and has been tested with the Intel Fortran 90 and GFortran compilers.

[ascl:1902.002] LPNN: Limited Post-Newtonian N-body code for collisionless self-gravitating systems

The Limited Post-Newtonian N-body code (LPNN) simulates post-Newtonian interactions between a massive object and many low-mass objects. The interaction between one massive object and low-mass objects is calculated by post-Newtonian approximation, and the interaction between low-mass objects is calculated by Newtonian gravity. This code is based on the sticky9 code, and can be accelerated with the use of a GPU in a CUDA (version 4.2 or earlier) environment.

[ascl:1501.007] LP-VIcode: La Plata Variational Indicators Code

LP-VIcode computes variational chaos indicators (CIs) quickly and easily. The following CIs are included:

  • Lyapunov Indicators, also known as Lyapunov Characteristic Exponents, Lyapunov Characteristic Numbers or Finite Time Lyapunov Characteristic Numbers (LIs)
  • Mean Exponential Growth factor of Nearby Orbits (MEGNO)
  • Slope Estimation of the largest Lyapunov Characteristic Exponent (SElLCE)
  • Smaller ALignment Index (SALI)
  • Generalized ALignment Index (GALI)
  • Fast Lyapunov Indicator (FLI)
  • Orthogonal Fast Lyapunov Indicator (OFLI)
  • Spectral Distance (SD)
  • dynamical Spectra of Stretching Numbers (SSNs)
  • Relative Lyapunov Indicator (RLI)

[ascl:1010.038] Low Resolution Spectral Templates For AGNs and Galaxies From 0.03 -- 30 microns

We present a set of low resolution empirical SED templates for AGNs and galaxies in the wavelength range from 0.03 to 30 microns based on the multi-wavelength photometric observations of the NOAO Deep-Wide Field Survey Bootes field and the spectroscopic observations of the AGN and Galaxy Evolution Survey. Our training sample comprises 14448 galaxies in the redshift range 0 ≲ z ≲ 1 and 5347 likely AGNs in the range 0 ≲ z ≲ 5.58. We use our templates to determine photometric redshifts for galaxies and AGNs. While they are relatively accurate for galaxies, their accuracies for AGNs are a strong function of the luminosity ratio between the AGN and galaxy components. Somewhat surprisingly, the relative luminosities of the AGN and its host are well determined even when the photometric redshift is significantly in error. We also use our templates to study the mid-IR AGN selection criteria developed by Stern et al. (2005) and Lacy et al. (2004). We find that the Stern et al. (2005) criterion suffers from significant incompleteness when there is a strong host galaxy component and at z ≈ 4.5, when the broad Hα emission line is redshifted into the [3.6] band, but that it is little contaminated by low and intermediate redshift galaxies. The Lacy et al. (2004) criterion is not affected by incompleteness at z ≈ 4.5 and is somewhat less affected by strong galaxy host components, but is heavily contaminated by low redshift star forming galaxies. Finally, we use our templates to predict the color-color distribution of sources in the upcoming WISE mission and define a color criterion to select AGNs analogous to those developed for IRAC photometry. We estimate that between 640,000 and 1,700,000 AGNs will be identified by these criteria, but the selection will have serious completeness problems for z ≳ 3.4.

[ascl:1308.002] LOSSCONE: Capture rates of stars by a supermassive black hole

LOSSCONE computes the rates of capture of stars by supermassive black holes. It uses stationary and time-dependent solutions of the Fokker-Planck equation describing the evolution of the distribution function of stars due to two-body relaxation, and works for arbitrary spherical and axisymmetric galactic models that are provided by the user in the form of M(r), the cumulative mass as a function of radius.

[ascl:1309.003] LOSP: Liège Orbital Solution Package

LOSP is a FORTRAN77 numerical package that computes the orbital parameters of spectroscopic binaries. The package deals with SB1 and SB2 systems and is able to adjust either circular or eccentric orbits through a weighted fit.

[ascl:1608.018] LORENE: Spectral methods differential equations solver

LORENE (Langage Objet pour la RElativité NumériquE) solves various problems arising in numerical relativity, and more generally in computational astrophysics. It is a set of C++ classes and provides tools to solve partial differential equations by means of multi-domain spectral methods. LORENE classes implement basic structures such as arrays and matrices, but also abstract mathematical objects, such as tensors, and astrophysical objects, such as stars and black holes.

[submitted] loci: Smooth Cubic Multivariate Local Interpolations

loci is a shared library for interpolations in up to 4 dimensions. It is written in C and can be used with C/C++, Python and others. In order to calculate the coefficients of the cubic polynomial, only local values are used: the data itself and all combinations of first-order derivatives, i.e., in 2D f_x, f_y and f_xy. This is in contrast to splines, where the coefficients are calculated not from derivatives but from non-local data, which can lead to over-smoothing the result.
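
In one dimension the same local construction is cubic Hermite interpolation: the cubic on each interval is fixed entirely by the values and first derivatives at its two endpoints. A sketch of that 1D analogue (illustrative only, not the loci C API):

    import numpy as np

    def hermite_cubic(x0, x1, f0, f1, d0, d1, x):
        """Cubic through (x0,f0),(x1,f1) with slopes d0,d1 at the ends."""
        h = x1 - x0
        t = (x - x0) / h
        h00 = (1 + 2*t) * (1 - t)**2   # standard Hermite basis functions
        h10 = t * (1 - t)**2
        h01 = t**2 * (3 - 2*t)
        h11 = t**2 * (t - 1)
        return h00*f0 + h10*h*d0 + h01*f1 + h11*h*d1

    # Interpolate sin(x) on [0, 0.5] from endpoint values and derivatives.
    x = np.linspace(0.0, 0.5, 5)
    approx = hermite_cubic(0.0, 0.5, np.sin(0.0), np.sin(0.5),
                           np.cos(0.0), np.cos(0.5), x)
    print(np.abs(approx - np.sin(x)).max())  # max error ~1e-4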

[ascl:1606.014] Lmfit: Non-Linear Least-Square Minimization and Curve-Fitting for Python

Lmfit provides a high-level interface to non-linear optimization and curve fitting problems for Python. Lmfit builds on and extends many of the optimization algorithms of scipy.optimize, especially the Levenberg-Marquardt method from optimize.leastsq. Its enhancements to optimization and data fitting problems include using Parameter objects instead of plain floats as variables, the ability to easily change fitting algorithms, and improved estimation of confidence intervals and curve-fitting with the Model class. Lmfit includes many pre-built models for common lineshapes.
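
A brief sketch of the Model-class workflow with one of the pre-built lineshapes (assumes numpy and lmfit are installed):

    import numpy as np
    from lmfit.models import GaussianModel

    # Noisy Gaussian test data.
    rng = np.random.default_rng(7)
    x = np.linspace(-5, 5, 201)
    y = 3.0 * np.exp(-0.5 * ((x - 0.4) / 0.9) ** 2) + rng.normal(0, 0.05, x.size)

    # Parameter objects and the Model class replace bare float arrays.
    model = GaussianModel()
    params = model.guess(y, x=x)          # automatic initial estimates
    result = model.fit(y, params, x=x)    # Levenberg-Marquardt by default
    print(result.fit_report())            # best-fit values with uncertainties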

[ascl:1706.005] LMC: Logarithmantic Monte Carlo

LMC is a Markov Chain Monte Carlo engine in Python that implements adaptive Metropolis-Hastings and slice sampling, as well as the affine-invariant method of Goodman & Weare, in a flexible framework. It can be used for simple problems, but the main use case is problems where expensive likelihood evaluations are provided by less flexible third-party software, which benefit from parallelization across many nodes at the sampling level. The parallel/adaptive methods use communication through MPI, or alternatively by writing/reading files, and mostly follow the approaches pioneered by CosmoMC (ascl:1106.025).

[submitted] Lizard: an extensible Cyclomatic Complexity Analyzer

Lizard is an extensible Cyclomatic Complexity Analyzer for many imperative programming languages including C/C++.

[ascl:1902.005] LiveData: Data reduction pipeline

LiveData is a multibeam single-dish data reduction system for bandpass calibration and gridding. It is used for processing Parkes multibeam and Mopra data.

[ascl:1112.009] LISACode: A scientific simulator of LISA

LISACode is a simulator of the LISA mission. Its ambition is to achieve a new degree of sophistication, allowing it to map, as closely as possible, the impact of the different subsystems on the measurements. It is also a useful tool for generating realistic data, including several kinds of sources (massive black hole binaries, EMRIs, cosmic string cusps, stochastic background, etc.), and for preparing their analysis. It is fully integrated with the Mock LISA Data Challenge. LISACode is not a detailed simulator at the engineering level but rather a tool whose purpose is to bridge the gap between the basic principles of LISA and a future, sophisticated end-to-end simulator.

[ascl:1601.007] LIRA: Low-counts Image Reconstruction and Analysis

LIRA (Low-counts Image Reconstruction and Analysis) deconvolves any unknown sky components, provides a fully Poisson 'goodness-of-fit' for any best-fit model, and quantifies uncertainties on the existence and shape of unknown sky. It does this without resorting to χ2 or rebinning, which can lose high-resolution information. It is written in R and requires the FITSio package.

[ascl:1602.006] LIRA: LInear Regression in Astronomy

LIRA (LInear Regression in Astronomy) performs Bayesian linear regression that accounts for heteroscedastic errors in both the independent and the dependent variables, intrinsic scatters (in both variables), time evolution of slopes, normalization and scatters, Malmquist and Eddington bias, and break of linearity. The posterior distribution of the regression parameters is sampled with a Gibbs method exploiting the JAGS (ascl:1209.002) library.

[ascl:1504.019] LineProf: Line Profile Indicators

LineProf implements a series of line-profile analysis indicators and evaluates their correlation with RV data. It receives as input a list of Cross-Correlation Functions and an optional list of associated RVs. It evaluates the line profile according to the indicators and compares it with the computed RV if no associated RV is provided, or with the provided RV otherwise.

[ascl:1710.023] LIMEPY: Lowered Isothermal Model Explorer in PYthon

LIMEPY solves distribution function (DF) based lowered isothermal models. It solves Poisson's equation for the input parameters and offers fast solutions for isotropic/anisotropic, single/multi-mass models, normalized DF values, density and velocity moments, and projected properties, and generates discrete samples.

[ascl:1107.012] LIME: Flexible, Non-LTE Line Excitation and Radiation Transfer Method for Millimeter and Far-infrared Wavelengths

LIME solves the molecular and atomic excitation and radiation transfer problem in a molecular gas and predicts emergent spectra. The code works in arbitrary three-dimensional geometry using unstructured Delaunay lattices for the transport of photons. Various physical models can be used as input, ranging from analytical descriptions and tabulated models to SPH simulations. To generate the Delaunay grid, the input model is sampled randomly, with the sample probability weighted by the molecular density and other parameters, so that the average grid point separation scales with the local opacity. Slow convergence of opaque models becomes traceable; when convergence between the level populations, the radiation field, and the point separation has been obtained, the grid is ray-traced to produce images that can readily be compared to observations. LIME is particularly well suited for modeling ALMA data because of the high dynamic range in scales that can be resolved using this type of grid, and it can furthermore deal with overlapping lines of multiple molecular and atomic species.

[ascl:1711.009] Lightning: SED Fitting Package

Lightning is a spectral energy distribution (SED) fitting procedure that quickly and reliably recovers star formation history (SFH) and extinction parameters. The SFH is modeled as discrete steps in time. The code consists of a fully vectorized inversion algorithm to determine SFH step intensities and combines this with a grid-based approach to determine three extinction parameters.

[ascl:1812.013] Lightkurve: Kepler and TESS time series analysis in Python

Lightkurve analyzes astronomical flux time series data, in particular the pixels and light curves obtained by NASA’s Kepler, K2, and TESS exoplanet missions. This community-developed Python package is designed to be user friendly to lower the barrier for students, astronomers, and citizen scientists interested in analyzing data from these missions. Lightkurve provides easy tools to download, inspect, and analyze time series data and its documentation is supported by a large syllabus of tutorials.
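
A typical interactive session might look like the following (a sketch assuming Lightkurve 2.x, matplotlib, and network access to the MAST archive; the period is Kepler-10b's published ~0.84 d value, and the transit epoch is omitted for brevity):

    import lightkurve as lk

    # Download one quarter of Kepler data for a known planet host.
    lc = lk.search_lightcurve("Kepler-10", mission="Kepler",
                              quarter=3).download()

    # Clean, flatten, and phase-fold in a few chained calls.
    flat = lc.remove_nans().remove_outliers().flatten(window_length=401)
    folded = flat.fold(period=0.837495)   # orbital period in days
    folded.plot()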

[ascl:1408.012] LightcurveMC: An extensible lightcurve simulation program

LightcurveMC is a versatile and easily extended simulation suite for testing the performance of time series analysis tools under controlled conditions. It is designed to be highly modular, allowing new lightcurve types or new analysis tools to be introduced without excessive development overhead. The statistical tools are completely agnostic to how the lightcurve data is generated, and the lightcurve generators are completely agnostic to how the data will be analyzed. The use of fixed random seeds throughout guarantees that the program generates consistent results from run to run.

LightcurveMC can generate periodic light curves having a variety of shapes and stochastic light curves having a variety of correlation properties. It features two error models (Gaussian measurement and signal injection using a randomized sample of base light curves), testing of the C1 shape statistic, periodograms, ΔmΔt plots, autocorrelation function plots, peak-finding plots, and Gaussian process regression. The code is written in C++ and R.

[ascl:1403.004] Lightcone: Light-cone generating script

Lightcone works with simulated galaxy data stored in a relational database to rearrange the data in a shape of a light-cone; simulated galaxy data is expected to be in a box volume. The light-cone constructing script works with output from the SAGE semi-analytic model (ascl:1601.006), but will work with any other model that has galaxy positions (and other properties) saved per snapshots of the simulation volume distributed in time. The database configuration file is set up for PostgreSQL RDBMS, but can be modified for use with any other SQL database.

[ascl:1402.033] libsharp: Library for spherical harmonic transforms

Libsharp is a collection of algorithms for efficient conversion between maps on the sphere and their spherical harmonic coefficients. It supports a wide range of pixelisations (including HEALPix, GLESP, and ECP). This library is a successor of libpsht (ascl:1010.020); it adds MPI support for distributed memory systems and SHTs of fields with arbitrary spin, and also supports new developments in CPU instruction sets like the Advanced Vector Extensions (AVX) or fused multiply-accumulate (FMA) instructions. libsharp is written in portable C99; it provides an interface accessible to other programming languages such as C++, Fortran, and Python.

[ascl:1010.020] Libpsht: Algorithms for Efficient Spherical Harmonic Transforms

Libpsht (or "library for Performing Spherical Harmonic Transforms") is a collection of algorithms for efficient conversion between spatial-domain and spectral-domain representations of data defined on the sphere. The package supports transforms of scalars as well as spin-1 and spin-2 quantities, and can be used for a wide range of pixelisations (including HEALPix, GLESP and ECP). It will take advantage of hardware features like multiple processor cores and floating-point vector operations, if available. Even without this additional acceleration, the employed algorithms are among the most efficient (in terms of CPU time as well as memory consumption) currently being used in the astronomical community.

The library is written in strictly standard-conforming C90, ensuring portability to many different hard- and software platforms, and allowing straightforward integration with codes written in various programming languages like C, C++, Fortran, Python etc.

Libpsht is distributed under the terms of the GNU General Public License (GPL) version 2.

Development on this project has ended; its successor is libsharp (ascl:1402.033).

[ascl:1612.003] libprofit: Image creation from luminosity profiles

libprofit is a C++ library for image creation based on different luminosity profiles. It offers fast and accurate two-dimensional integration for a useful number of profiles, including Sersic, Core-Sersic, broken-exponential, Ferrer, Moffat, empirical King, point-source and sky, with a simple mechanism for adding new profiles. libprofit provides a utility to read the model and profile parameters from the command-line and generate the corresponding image. It can output the resulting image as text values, a binary stream, or as a simple FITS file. It also provides a shared library exposing an API that can be used by any third-party application. R and Python interfaces are available: ProFit (ascl:1612.004) and PyProfit (ascl:1612.005).

[ascl:1604.002] libpolycomp: Compression/decompression library

Libpolycomp compresses and decompresses one-dimensional streams of numbers by means of several algorithms. It is well-suited for time-ordered data acquired by astronomical instruments or simulations. One of the algorithms, called "polynomial compression", combines two widely-used ideas (namely, polynomial approximation and filtering of Fourier series) to achieve substantial compression ratios for datasets characterized by smoothness and lack of noise. Notable examples are the ephemerides of astronomical objects and the pointing information of astronomical telescopes. Other algorithms implemented in this C library are well known and already widely used, e.g., RLE, quantization, deflate (via libz) and Burrows-Wheeler transform (via libbzip2). Libpolycomp can compress the timelines acquired by the Planck/LFI instrument with an overall compression ratio of ~9, while other widely known programs (gzip, bzip2) reach compression ratios less than 1.5.
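
The "polynomial compression" idea can be sketched in a few lines: fit a low-degree polynomial to each chunk of the timeline and store only the coefficients when the fit is within tolerance. The Python toy below shows the principle; libpolycomp itself is a C library and additionally filters the residuals' Fourier series, which this sketch omits.

    import numpy as np

    def poly_compress(data, chunk=64, deg=3, tol=1e-6):
        """Store polynomial coefficients per chunk when the fit is within
        tol; otherwise fall back to the raw samples (sketch of the idea)."""
        out = []
        for i in range(0, len(data), chunk):
            seg = data[i:i + chunk]
            t = np.arange(len(seg), dtype=float)
            coeffs = np.polyfit(t, seg, deg)
            if np.abs(np.polyval(coeffs, t) - seg).max() <= tol:
                out.append(("poly", len(seg), coeffs))   # deg+1 numbers
            else:
                out.append(("raw", len(seg), seg))       # no compression
        return out

    # A smooth, noiseless pointing-like timeline compresses well:
    data = np.sin(np.linspace(0, 0.4, 4096))
    blocks = poly_compress(data, tol=1e-9)
    stored = sum(c.size if kind == "poly" else len(c)
                 for kind, _, c in blocks)
    print(f"{data.size} samples -> {stored} stored numbers")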

[ascl:1502.016] libnova: Celestial mechanics, astrometry and astrodynamics library

libnova is a general purpose, double precision, celestial mechanics, astrometry and astrodynamics library. Among many other calculations, it can calculate aberration, apparent position, proper motion, planetary positions, orbit velocities and lengths, angular separation of bodies, and hyperbolic motion of bodies.

[ascl:1206.009] Libimf

Libimf provides a collection of programming functions based on the general IMF-algorithm by Pflamm-Altenburg & Kroupa (2006).

[ascl:1408.002] LIA: LWS Interactive Analysis

The Long Wavelength Spectrometer (LWS) was one of two complementary spectrometers on the Infrared Space Observatory (ISO). LIA (LWS Interactive Analysis) is used for processing data from the LWS. It provides access to the different processing steps, including visualization of intermediate products and interactive manipulation of the data at each stage.

[ascl:1712.016] LgrbWorldModel: Long-duration Gamma-Ray Burst World Model

LgrbWorldModel is written in Fortran 90 and attempts to model the population distribution of the long-duration class of Gamma-Ray Bursts (LGRBs) as detected by NASA's now-defunct Burst And Transient Source Experiment (BATSE) onboard the Compton Gamma Ray Observatory (CGRO). It is assumed that the population distribution of LGRBs is well fit by a multivariate log-normal distribution. The best-fit parameters of the distribution are then found by maximizing the likelihood of the observed data by BATSE detectors via a native built-in Adaptive Metropolis-Hastings Markov-Chain Monte Carlo (AMH-MCMC) Sampler.

[ascl:1710.016] LGMCA: Local-Generalized Morphological Component Analysis

LGMCA (Local-Generalized Morphological Component Analysis) is an extension to GMCA (ascl:1710.015). Similarly to GMCA, it is a Blind Source Separation method which enforces sparsity. The novel aspect of LGMCA, however, is that the mixing matrix changes across pixels allowing LGMCA to deal with emissions sources which vary spatially. These IDL scripts compute the CMB map from WMAP and Planck data; running LGMCA on the WMAP9 temperature products requires the main script and a selection of mandatory files, algorithm parameters and map parameters.

[ascl:1804.023] LFsGRB: Binary neutron star merger rate via the luminosity function of short gamma-ray bursts

LFsGRB models the luminosity function (LF) of short Gamma Ray Bursts (sGRBs) by using the available catalog data of all short GRBs (sGRBs) detected till 2017 October, estimating the luminosities via pseudo-redshifts obtained from the Yonetoku correlation, and then assuming a standard delay distribution between the cosmic star formation rate and the production rate of their progenitors. The data are fit well both by exponential cutoff powerlaw and broken powerlaw models. Using the derived parameters of these models along with conservative values of the jet opening angles seen from afterglow observations, the true rate of short GRBs is derived. Assuming a short GRB is produced from each binary neutron star merger (BNSM), the rate of gravitational wave (GW) detections from these mergers is derived for the past, present and future configurations of the GW detector networks.

[ascl:1804.024] LFlGRB: Luminosity function of long gamma-ray bursts

LFlGRB models the luminosity function (LF) of long Gamma Ray Bursts (lGRBs) by using a sample of Swift and Fermi lGRBs to re-derive the parameters of the Yonetoku correlation and self-consistently estimate pseudo-redshifts of all the bursts with unknown redshifts. The GRB formation rate is modeled as the product of the cosmic star formation rate and a GRB formation efficiency for a given stellar mass.

[ascl:1711.018] LExTeS: Link Extraction and Testing Suite

LExTeS (Link Extraction and Testing Suite) extracts hyperlinks from PDF documents, tests the extracted links to see which are broken, and tabulates the results. Though written to support a particular set of PDF documents, the dataset and scripts can be edited for use on other documents.

[ascl:1108.009] LePHARE: Photometric Analysis for Redshift Estimate

LePHARE is a set of Fortran commands to compute photometric redshifts and to perform SED fitting. The latest version includes new features with FIR fitting and a more complete treatment of physical parameters and uncertainties based on PÉGASE and Bruzual & Charlot population synthesis models. The program is based on a simple χ2 fitting method between the theoretical and observed photometric catalogues. A simulation program is also available in order to generate realistic multi-colour catalogues taking into account observational effects.

[ascl:1307.005] LENSVIEW: Resolved gravitational lens images modeling

Lensview models resolved gravitational lens systems based on LensMEM but using the Skilling & Bryan MEM algorithm. Though its primary purpose is to find statistically acceptable lens models for lensed images and to reconstruct the surface brightness profile of the source, LENSVIEW can also be used for more simple tasks such as projecting a given source through a lens model to generate a “true” image by conserving surface brightness. The user can specify complicated lens models based on one or more components, such as softened isothermal ellipsoids, point masses, exponential discs, and external shears; LENSVIEW generates a best-fitting source matching the observed data for each specific combination of model parameters.

[ascl:1804.012] Lenstronomy: Multi-purpose gravitational lens modeling software package

Lenstronomy is a multi-purpose open-source gravitational lens modeling Python package. Lenstronomy reconstructs the lens mass and surface brightness distributions of strong lensing systems using forward modelling and supports a wide range of analytic lens and light models in arbitrary combination. The software is also able to reconstruct complex extended sources as well as point sources. Lenstronomy is flexible and numerically accurate, with a clear user interface that can be deployed across different platforms. Lenstronomy has been used to derive constraints on dark matter properties in strong lenses, measure the expansion history of the universe with time-delay cosmography, measure cosmic shear with Einstein rings, and decompose quasar and host galaxy light.

[ascl:1602.009] LensTools: Weak Lensing computing tools

LensTools implements a wide range of routines frequently used in Weak Gravitational Lensing, including tools for image analysis, statistical processing and numerical theory predictions. The package offers many useful features, including complete flexibility and easy customization of input/output formats; efficient measurements of power spectrum, PDF, Minkowski functionals and peak counts of convergence maps; survey masks; artificial noise generation engines; easy to compute parameter statistical inferences; ray tracing simulations; and many others. It requires standard numpy and scipy, and depending on tools used, may require Astropy (ascl:1304.002), emcee (ascl:1303.002), matplotlib, and mpi4py.

[ascl:1102.004] LENSTOOL: A Gravitational Lensing Software for Modeling Mass Distribution of Galaxies and Clusters (strong and weak regime)

We describe a procedure for modelling strong lensing galaxy clusters with parametric methods, and to rank models quantitatively using the Bayesian evidence. We use a publicly available Markov chain Monte-Carlo (MCMC) sampler ('Bayesys'), allowing us to avoid local minima in the likelihood functions. To illustrate the power of the MCMC technique, we simulate three clusters of galaxies, each composed of a cluster-scale halo and a set of perturbing galaxy-scale subhalos. We ray-trace three light beams through each model to produce a catalogue of multiple images, and then use the MCMC sampler to recover the model parameters in the three different lensing configurations. We find that, for typical Hubble Space Telescope (HST)-quality imaging data, the total mass in the Einstein radius is recovered with ~1-5% error according to the considered lensing configuration. However, we find that the mass of the galaxies is strongly degenerate with the cluster mass when no multiple images appear in the cluster centre. The mass of the galaxies is generally recovered with a 20% error, largely due to the poorly constrained cut-off radius. Finally, we describe how to rank models quantitatively using the Bayesian evidence. We confirm the ability of strong lensing to constrain the mass profile in the central region of galaxy clusters in this way. Ultimately, such a method applied to strong lensing clusters with a very large number of multiple images may provide unique geometrical constraints on cosmology.

[ascl:1705.009] LensPop: Galaxy-galaxy strong lensing population simulation

LensPop simulates observations of the galaxy-galaxy strong lensing population in the Dark Energy Survey (DES), Large Synoptic Survey Telescope (LSST), and Euclid surveys.

[ascl:1102.025] LensPix: Fast MPI full sky transforms for HEALPix

Modelling of the weak lensing of the CMB will be crucial for obtaining correct cosmological parameter constraints from forthcoming precision CMB anisotropy observations. The lensing affects the power spectrum and also induces non-Gaussianities. We discuss the simulation of full-sky CMB maps in the weak lensing approximation and describe a fast numerical code. The series expansion in the deflection angle cannot be used to simulate accurate CMB maps, so a pixel remapping must be used. For parameter estimation, accounting for the change in the power spectrum but assuming Gaussianity is sufficient to obtain accurate results up to Planck sensitivity using current tools; a fuller analysis may be required to obtain accurate error estimates and for more sensitive observations. We demonstrate a simple full-sky simulation and subsequent parameter estimation at Planck-like sensitivity.
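
Pixel remapping means evaluating the unlensed sky at the deflected position, T̃(n̂) = T(n̂ + α), instead of Taylor-expanding in the deflection angle. The flat-sky numpy/scipy sketch below shows only the remapping step; the map and deflection field are invented placeholders, and a real full-sky code must interpolate on the sphere.

```python
import numpy as np
from scipy.ndimage import map_coordinates

n = 256
temp = np.random.normal(size=(n, n))      # stand-in unlensed CMB patch

# Illustrative smooth deflection field in pixel units (assumed, not physical).
yy, xx = np.mgrid[0:n, 0:n]
alpha_x = 0.5 * np.sin(2 * np.pi * yy / n)
alpha_y = 0.5 * np.cos(2 * np.pi * xx / n)

# Lensed map: sample the unlensed map at the deflected positions with
# cubic interpolation; no series expansion in alpha is used.
coords = np.array([yy + alpha_y, xx + alpha_x])
lensed = map_coordinates(temp, coords, order=3, mode='wrap')
```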

[ascl:1010.050] LensPerfect: Gravitational Lens Massmap Reconstructions Yielding Exact Reproduction of All Multiple Images

LensPerfect is a novel approach to the massmap reconstruction of strong gravitational lenses. Conventional methods iterate over possible lens models that reproduce the observed multiple-image positions well but not exactly; LensPerfect produces only solutions that fit all of the data exactly. Magnifications and shears of the multiple images can also be perfectly constrained to match observations.

[ascl:9903.001] LENSKY: Galactic Microlensing Probability

Given a model for the Galaxy, this program computes the microlensing rate in any direction. Features include the ability to account for the brightness of the lens and to compute the probability of lens detection at any level of lensing amplification. The program limits itself to lensing of single sources by single stars, and is currently set up to accept input from the Galactic models of Bahcall and Soneira (1982, 1986).

Three files are needed for LENSKY: the Fortran source lensky.for and two input files, galmod.dsk (15 MB) and galmod.sph (22 MB). The zip file available below contains all three. The program writes its output to the file lensky.out and is otherwise self-explanatory.

[ascl:1308.004] LensEnt2: Maximum-entropy weak lens reconstruction

LensEnt2 is a maximum entropy reconstructor of weak lensing mass maps. The method takes each galaxy shape as an independent estimator of the reduced shear field and incorporates an intrinsic smoothness, determined by Bayesian methods, into the reconstruction. The uncertainties from both the intrinsic distribution of galaxy shapes and galaxy shape estimation are carried through to the final mass reconstruction, and the mass within arbitrarily shaped apertures is calculated with corresponding uncertainties. The input is a galaxy ellipticity catalog; each measured galaxy shape is treated as a noisy tracer of the reduced shear field, which is inferred on a fine pixel grid assuming positivity and smoothness on scales of w arcsec, where w is an input parameter. The ICF width w can be chosen by computing the evidence for it.

[ascl:1505.026] Lensed: Forward parametric modelling of strong lenses

Lensed performs forward parametric modelling of strong lenses. Using a provided model, Lensed renders the expected image of the lensing event for a large number of parameter settings, thereby exploring the space of possible realizations of the observation. It compares the expectation to the observed image by calculating the likelihood that the observation was indeed produced by the assumed model, thus reconstructing the probability distribution over the parameter space of the model. Written in C, the code uses a massively parallel ray-tracing kernel to perform the necessary calculations on a graphics processing unit (GPU), making the precise rendering of the background lensed sources fast and allowing the simultaneous optimization of tens of parameters for the selected model.
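
The comparison step amounts to evaluating, for each parameter setting, the likelihood of the observed image given the rendered model image; for pixelwise Gaussian noise this is a chi-squared sum. The short sketch below is a schematic of that likelihood only and does not reproduce Lensed's GPU rendering kernel.

```python
import numpy as np

def log_likelihood(observed, model, sigma):
    """Pixelwise Gaussian log-likelihood of an observed image given a
    rendered model image and per-pixel noise sigma (illustrative)."""
    resid = (observed - model) / sigma
    return (-0.5 * np.sum(resid**2)
            - np.sum(np.log(sigma * np.sqrt(2 * np.pi))))

# Toy usage: the "rendered" model differs from the data by noise only.
rng = np.random.default_rng(1)
model = rng.random((64, 64))
sigma = np.full_like(model, 0.05)
observed = model + sigma * rng.normal(size=model.shape)
print(log_likelihood(observed, model, sigma))
```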

[ascl:1809.001] LEMON: Differential photometry pipeline

LEMON is a differential-photometry pipeline, written in Python, that determines the changes in the brightness of astronomical objects over time and compiles their measurements into light curves. The code makes it possible to completely reduce thousands of FITS images of time series in only a few hours with minimal user interaction.

[ascl:1104.006] LECTOR: Line-strengths in One-dimensional ASCII Spectra

LECTOR is a Fortran 77 code that measures line-strengths in one-dimensional ASCII spectra. The code returns the values of the Lick indices as well as those of Vazdekis & Arimoto 1999, Vazdekis et al. 2001, Rose 1994, Jones & Worthey 1995, and Cenarro et al. 2001. It measures as many indices as requested, provided the limits of the two pseudocontinua (one on each side of the feature) and of the feature itself (i.e., a Lick-style index definition) are supplied. The Lick-style indices can be expressed either as pseudo-equivalent widths or in magnitudes. If requested, the program provides index error estimates based on photon statistics.
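
A Lick-style index interpolates a straight pseudocontinuum between the mean fluxes of the two side bands and integrates the flux deficit over the feature band. The numpy sketch below shows the pseudo-equivalent-width form; the toy spectrum and the (approximately Mg b-like) band limits are placeholders, and LECTOR's Fortran code treats pixel fractions and errors more carefully.

```python
import numpy as np

def lick_index_ew(wave, flux, blue, red, feature):
    """Pseudo-equivalent width (Angstrom) of a Lick-style index.
    blue/red/feature are (lo, hi) wavelength limits (illustrative)."""
    def band_mean(lo, hi):
        m = (wave >= lo) & (wave <= hi)
        return wave[m].mean(), flux[m].mean()
    (wb, fb), (wr, fr) = band_mean(*blue), band_mean(*red)
    # Straight-line pseudocontinuum through the two side-band means.
    m = (wave >= feature[0]) & (wave <= feature[1])
    cont = fb + (fr - fb) * (wave[m] - wb) / (wr - wb)
    # EW = integral of (1 - F/Fc) over the feature band.
    return np.trapz(1.0 - flux[m] / cont, wave[m])

# Toy spectrum with one absorption dip; Mg b-like limits as placeholders.
wave = np.linspace(5100.0, 5250.0, 1500)
flux = 1.0 - 0.4 * np.exp(-0.5 * ((wave - 5175.0) / 4.0)**2)
print(lick_index_ew(wave, flux, (5142.6, 5161.4), (5191.4, 5206.4),
                    (5160.1, 5192.6)))
```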

[ascl:1507.016] Least Asymmetry: Centering Method

Least Asymmetry finds the center of a distribution of light in an image using the least asymmetry method; the code also contains center-of-light and Gaussian-fitting routines. All functions in Least Asymmetry are designed to take optional weights.
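
The least asymmetry idea is to score each trial center by how much pixel values scatter within annuli of equal radius around it (a perfectly symmetric distribution scatters not at all) and to take the minimum. The grid-search sketch below is a coarse illustration of that statistic with assumed annulus counts and search box; the published method includes subpixel refinement and the optional weights mentioned above.

```python
import numpy as np

def asymmetry(img, cy, cx, n_annuli=10):
    """Sum over annuli of the variance of pixel values at (nearly) equal
    radius from the trial center; small for symmetric distributions."""
    yy, xx = np.indices(img.shape)
    r = np.hypot(yy - cy, xx - cx)
    edges = np.linspace(0, r.max(), n_annuli + 1)
    total = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        vals = img[(r >= lo) & (r < hi)]
        if vals.size > 1:
            total += vals.var() * vals.size   # weight by pixel count
    return total

def least_asymmetry_center(img, half_box=3):
    """Integer-pixel grid search around the brightest pixel (illustrative)."""
    y0, x0 = np.unravel_index(np.argmax(img), img.shape)
    trials = [(asymmetry(img, y, x), y, x)
              for y in range(y0 - half_box, y0 + half_box + 1)
              for x in range(x0 - half_box, x0 + half_box + 1)]
    _, cy, cx = min(trials)
    return cy, cx

# Toy PSF centered off the image midpoint.
yy, xx = np.indices((31, 31))
img = np.exp(-0.5 * ((yy - 15.0)**2 + (xx - 16.0)**2) / 2.5**2)
print(least_asymmetry_center(img))    # -> (15, 16)
```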

[ascl:1511.018] LDC3: Three-parameter limb darkening coefficient sampling

LDC3 samples physically permissible limb darkening coefficients for the Sing et al. (2009) three-parameter law. It defines the physically permissible intensity profile as being everywhere-positive, monotonically decreasing from center to limb and having a curl at the limb. The approximate sampling method is analytic and thus very fast, reproducing physically permissible samples in 97.3% of random draws (high validity) and encompassing 94.4% of the physically permissible parameter volume (high completeness).
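
LDC3's analytic sampler can be contrasted with brute-force rejection sampling: draw coefficients of the Sing et al. (2009) law, I(μ)/I(1) = 1 − c₂(1 − μ) − c₃(1 − μ^{3/2}) − c₄(1 − μ²), and keep only draws whose profile is everywhere positive and monotonically decreasing from center to limb. The sketch below implements that slower rejection approach; the coefficient draw ranges are arbitrary assumptions, and LDC3's analytic transformation is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(2)
mu = np.linspace(1e-4, 1.0, 500)    # mu = cos(angle from disk center)

def intensity(c2, c3, c4):
    """Sing et al. (2009) three-parameter limb darkening law."""
    return 1.0 - c2*(1 - mu) - c3*(1 - mu**1.5) - c4*(1 - mu**2)

def physical(c2, c3, c4):
    i = intensity(c2, c3, c4)
    positive = np.all(i > 0)
    decreasing = np.all(np.diff(i) >= 0)  # I rises with mu (limb -> center)
    return positive and decreasing

# Rejection sampling over an arbitrary box of coefficients (assumed ranges).
samples = []
while len(samples) < 1000:
    c = rng.uniform(-1.0, 2.0, size=3)
    if physical(*c):
        samples.append(c)
print(np.mean(samples, axis=0))
```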

[ascl:1805.003] lcps: Light curve pre-selection

lcps searches for transit-like features (i.e., dips) in photometric data. Its main purpose is to restrict large sets of light curves to those files that show interesting behavior, such as drops in flux. While lcps is adaptable to any time-series format, its I/O module is designed specifically for photometry from the Kepler spacecraft: it extracts the pre-conditioned PDCSAP data from light curve files created by the standard Kepler pipeline, and it can also handle csv-formatted ASCII files. lcps uses a sliding-window technique to compare a section of the flux time series with its surroundings; a dip is detected if the flux within the window is lower than a threshold fraction of the surrounding fluxes.
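
The core of the sliding-window test can be written in a few lines: move a window along the flux series and flag it when its median flux falls below a threshold fraction of the flux in the flanking regions. The sketch below is a schematic of that logic with assumed window size and threshold, not lcps's actual interface.

```python
import numpy as np

def find_dips(flux, window=20, flank=40, threshold=0.999):
    """Indices where the windowed median flux drops below `threshold`
    times the median of the flanking flux (illustrative parameters)."""
    hits = []
    for i in range(flank, len(flux) - window - flank):
        inside = np.median(flux[i:i + window])
        around = np.median(np.concatenate(
            [flux[i - flank:i], flux[i + window:i + window + flank]]))
        if inside < threshold * around:
            hits.append(i)
    return hits

# Toy light curve: flat with noise plus one box-shaped transit.
rng = np.random.default_rng(3)
flux = 1.0 + 1e-4 * rng.normal(size=2000)
flux[1000:1020] -= 0.005
print(find_dips(flux)[:3])   # windows overlapping the dip
```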

[ascl:1708.017] LCC: Light Curves Classifier

Light Curves Classifier uses data mining and machine learning to identify and classify objects of interest. Classification can be based on attributes of light curves or other time series, including shapes, histograms, or variograms, or on other available information about the inspected objects, such as color indices, temperatures, and abundances. After the features that describe the objects to be searched are specified, the software trains on a given training sample; it can then also be used for unsupervised clustering to visualize the natural separation of the sample. The package can also automatically tune the parameters of the methods used (for example, the number of hidden neurons or the binning ratio).

Trained classifiers can be used to filter the output of astronomical databases or data stored locally. The Light Curves Classifier can also be used simply to download light curves and all available information about queried stars. It connects natively to OgleII, OgleIII, ASAS, CoRoT, Kepler, Catalina, and MACHO, and new connectors or descriptors can be implemented. In addition to direct use of the package and its command-line UI, the program can be used through a web interface. Users can create jobs for training methods on given objects, querying databases, and filtering outputs with trained filters. Preimplemented descriptors, classifiers, and connectors can be selected with simple clicks, and their parameters can be tuned by specifying ranges of values; all combinations are then evaluated and the best one is used to create the filter. The natural separation of the data can be visualized by unsupervised clustering.
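
The train-then-filter workflow described above maps onto any feature-based classifier: extract descriptors from each light curve, fit a model on a labeled training sample, then keep only the objects the model accepts. The scikit-learn sketch below shows that pattern; the two toy features and the random-forest choice are assumptions for illustration, not LCC's preimplemented descriptors.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def features(flux):
    """Two toy descriptors of a light curve: scatter and a skewness proxy."""
    std = flux.std()
    skew = np.mean(((flux - flux.mean()) / (std + 1e-12))**3)
    return [std, skew]

rng = np.random.default_rng(4)
# Labeled training sample: quiet stars (0) vs. stars with dips (1).
quiet = [1 + 1e-3 * rng.normal(size=500) for _ in range(50)]
dippers = []
for _ in range(50):
    f = 1 + 1e-3 * rng.normal(size=500)
    f[200:220] -= 0.01
    dippers.append(f)

X = np.array([features(f) for f in quiet + dippers])
y = np.array([0] * 50 + [1] * 50)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Filtering step: keep only candidates the trained model accepts as dippers.
candidates = quiet[:3] + dippers[:3]
keep = clf.predict(np.array([features(f) for f in candidates]))
print(keep)    # e.g. [0 0 0 1 1 1]
```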

[ascl:1405.001] LBLRTM: Line-By-Line Radiative Transfer Model

LBLRTM (Line-By-Line Radiative Transfer Model) is an accurate line-by-line model that is efficient and highly flexible. It provides spectral radiance calculations with accuracies consistent with the measurements against which they are validated and with computational times that greatly facilitate the application of the line-by-line approach to current radiative transfer problems. LBLRTM has been extensively validated against atmospheric radiance spectra from the ultraviolet to the sub-millimeter.

LBLRTM's heritage is in FASCODE [Clough et al., 1981, 1992].
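
"Line-by-line" means the monochromatic optical depth is built by summing the profile of every spectral line on a fine wavenumber grid before computing transmittance, rather than working with band averages. The toy sketch below uses Lorentzian lines with invented positions, strengths, and widths (not HITRAN data); real codes add Voigt profiles, continua, and atmospheric layering.

```python
import numpy as np

# Fine wavenumber grid (cm^-1) and a few invented absorption lines.
nu = np.linspace(1000.0, 1010.0, 20000)
lines = [                     # (center cm^-1, strength, half-width cm^-1)
    (1002.3, 0.8, 0.07),
    (1005.1, 1.5, 0.05),
    (1008.6, 0.4, 0.10),
]

# Line-by-line optical depth: sum a Lorentzian profile for every line.
tau = np.zeros_like(nu)
for nu0, s, gamma in lines:
    tau += s * (gamma / np.pi) / ((nu - nu0)**2 + gamma**2)

transmittance = np.exp(-tau)   # Beer-Lambert law, single homogeneous path
```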

[ascl:1202.011] Lattimer-Swesty Equation of State Code

The Lattimer-Swesty Equation of State code is rapid enough to use directly in hydrodynamical simulations, such as stellar collapse calculations. It contains an adjustable nuclear force that accurately models both potential and mean-field interactions and allows for the input of various nuclear parameters, including the bulk incompressibility parameter, the bulk and surface symmetry energies, the symmetric matter surface tension, and the nucleon effective masses. This permits parametric studies of the equation of state in astrophysical situations. The equation of state is modeled after the Lattimer, Lamb, Pethick, and Ravenhall (LLPR) compressible liquid drop model for nuclei, and includes the effects of interactions and degeneracy of the nucleons outside nuclei.

[ascl:1806.021] LASR: Linear Algorithm for Significance Reduction

LASR removes stellar variability in the light curves of δ-Scuti and similar stars. It subtracts oscillations from a time series by minimizing their statistical significance in frequency space.
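
The subtraction can be phrased as a two-parameter optimization: for a known oscillation frequency, choose the amplitude and phase of a sinusoid that minimize the remaining power at that frequency after subtraction, power standing in here for statistical significance. The scipy sketch below illustrates that idea; the frequency, sampling, and use of raw DFT power are simplifying assumptions.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)
t = np.sort(rng.uniform(0, 10, 800))            # unevenly sampled times
f0 = 12.3                                       # known oscillation frequency
flux = 0.02 * np.sin(2*np.pi*f0*t + 0.7) + 1e-3 * rng.normal(size=t.size)

def power_at(f, y):
    """Squared DFT amplitude of y at frequency f (a significance proxy)."""
    return np.abs(np.sum(y * np.exp(-2j * np.pi * f * t)))**2

def residual_power(params):
    amp, phase = params
    return power_at(f0, flux - amp * np.sin(2*np.pi*f0*t + phase))

best = minimize(residual_power, x0=[0.01, 0.0], method='Nelder-Mead')
amp, phase = best.x
cleaned = flux - amp * np.sin(2*np.pi*f0*t + phase)
print(amp, phase % (2*np.pi))   # ~0.02 and ~0.7, up to a sign/phase degeneracy
```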

[ascl:1208.015] Lare3d: Lagrangian-Eulerian remap scheme for MHD

Lare3d is a Lagrangian-remap code for solving the non-linear MHD equations in three spatial dimensions.

[ascl:1703.001] Larch: X-ray Analysis for Synchrotron Applications using Python

Larch is an open-source library and toolkit written in Python for processing and analyzing X-ray spectroscopic data. The primary emphasis is on X-ray spectroscopic and scattering data collected at modern synchrotron sources. Larch provides a wide selection of general-purpose tools for processing, analyzing, and visualizing X-ray data; its target application areas include X-ray absorption fine structure (XAFS), micro-X-ray fluorescence (XRF) maps, quantitative X-ray fluorescence, X-ray absorption near edge spectroscopy (XANES), and X-ray standing waves and surface scattering. Larch provides a complete set of XAFS analysis tools, supports visualizing and analyzing XRF maps and spectra, and offers additional tools for X-ray spectral analysis, data handling, and general-purpose data modeling.
