ASCL.net

Astrophysics Source Code Library

Making codes discoverable since 1999

Browsing Codes

Results 801-900 of 1509 (1485 ASCL, 24 submitted)

[ascl:1511.008] MCAL: M dwarf metallicity and temperature calculator

MCAL calculates high-precision metallicities and effective temperatures for M dwarfs; the method behaves well down to R = 40,000 and S/N = 25, and its results were validated against a sample of stars in common with SOPHIE high-resolution spectra.

[ascl:1107.015] McLuster: A Tool to Make a Star Cluster

The tool McLuster is an open source code that can be used to either set up initial conditions for N-body computations or, alternatively, to generate artificial star clusters for direct investigation. There are two different versions of the code, one basic version for generating all kinds of unevolved clusters (in the following called mcluster) and one for setting up evolved stellar populations at a given age. The former is completely contained in the C file main.c. The latter (dubbed mcluster_sse) is more complex and requires additional FORTRAN routines, namely the Single-Star Evolution (SSE) routines by Hurley, Pols & Tout (ascl:1303.015) that are provided with the McLuster code.

[ascl:1407.004] MCMAC: Monte Carlo Merger Analysis Code

Monte Carlo Merger Analysis Code (MCMAC) aids in the study of merging clusters. It takes observed priors on each subcluster's mass, radial velocity, and projected separation, draws randomly from those priors, and uses the draws in an analytic model to obtain posterior PDFs for the merger dynamical properties of interest (e.g., collision velocity and time since collision).
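
The general pattern of such a calculation (draw from the observational priors, push each draw through an analytic relation, and summarize the resulting distribution) can be sketched as follows; the priors, the toy free-fall relation, and all numerical values below are placeholders for illustration, not MCMAC's actual inputs or dynamical model.

    import numpy as np

    G = 4.301e-9  # gravitational constant in Mpc (km/s)^2 / Msun

    rng = np.random.default_rng(42)
    n = 100_000

    # Placeholder priors (not MCMAC's observed priors)
    m1 = rng.normal(2e14, 5e13, n)           # subcluster 1 mass [Msun]
    m2 = rng.normal(1e14, 3e13, n)           # subcluster 2 mass [Msun]
    d_proj = rng.uniform(0.5, 1.5, n)        # projected separation [Mpc]
    cos_t = rng.uniform(0.0, 1.0, n)         # orientation of the merger axis
    d_3d = d_proj / np.sqrt(1.0 - cos_t**2 + 1e-12)   # deprojected separation

    # Toy "analytic model": free fall of two point masses from rest at 5 Mpc
    v_coll = np.sqrt(2.0 * G * (m1 + m2) * np.clip(1.0 / d_3d - 1.0 / 5.0, 0.0, None))

    # Posterior PDF of the collision velocity, summarized by percentiles
    lo, med, hi = np.percentile(v_coll, [16, 50, 84])
    print(f"v_coll = {med:.0f} +{hi - med:.0f} -{med - lo:.0f} km/s")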

[ascl:1210.017] McPHAC: McGill Planar Hydrogen Atmosphere Code

The McGill Planar Hydrogen Atmosphere Code (McPHAC) v1.1 calculates the hydrostatic equilibrium structure and emergent spectrum of an unmagnetized hydrogen atmosphere in the plane-parallel approximation at surface gravities appropriate for neutron stars. McPHAC incorporates several improvements over previous codes for which tabulated model spectra are available: (1) Thomson scattering is treated anisotropically, which is shown to result in a 0.2%-3% correction in the emergent spectral flux across the 0.1-5 keV passband; (2) the McPHAC source code is made available to the community, allowing it to be scrutinized and modified by other researchers wishing to study or extend its capabilities; and (3) the numerical uncertainty resulting from the discrete and iterative solution is studied as a function of photon energy, indicating that McPHAC is capable of producing spectra with numerical uncertainties <0.01%. The accuracy of the spectra may at present be limited to ~1%, but McPHAC enables researchers to study the impact of uncertain inputs and additional physical effects, thereby supporting future efforts to reduce those inaccuracies. Comparison of McPHAC results with spectra from one of the previous model atmosphere codes (NSA) shows agreement to ≲1% near the peaks of the emergent spectra. However, in the Wien tail a significant deficit of flux in the spectra of the previous model is revealed, determined to be due to the previous work not considering large enough optical depths at the highest photon frequencies. The deficit is most significant for spectra with T_eff < 10^5.6 K, though even there it may not be of much practical importance for most observations.

[ascl:1201.001] McScatter: Three-Body Scattering with Stellar Evolution

McScatter illustrates a method of combining stellar dynamics with stellar evolution. The method is intended for elaborate applications, especially the dynamical evolution of rich star clusters. The dynamics is based on binary scattering in a multi-mass field of stars with uniform density and velocity dispersion, using the scattering cross section of Giersz (MNRAS, 2001, 324, 218-30).

[ascl:1504.008] MCSpearman: Monte Carlo error analyses of Spearman's rank test

Spearman's rank correlation test is commonly used in astronomy to discern whether two variables are correlated. Unlike most other quantities quoted in the astronomical literature, the Spearman's rank correlation coefficient is generally quoted with no attempt to estimate the error on its value. This code implements a number of Monte Carlo based methods to estimate the uncertainty on the Spearman's rank correlation coefficient.
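
One simple Monte Carlo scheme of this kind, a pair bootstrap, can be sketched as follows; this illustrates the idea but is not necessarily the specific resampling scheme MCSpearman implements.

    import numpy as np
    from scipy.stats import spearmanr

    def spearman_with_error(x, y, n_boot=10_000, seed=0):
        """Spearman rho and p-value, plus a bootstrap estimate of rho's uncertainty."""
        rng = np.random.default_rng(seed)
        rho, p = spearmanr(x, y)
        n = len(x)
        boot = np.empty(n_boot)
        for i in range(n_boot):
            idx = rng.integers(0, n, n)               # resample (x, y) pairs with replacement
            boot[i] = spearmanr(x[idx], y[idx])[0]
        return rho, p, boot.std(ddof=1)

    rng = np.random.default_rng(1)
    x = rng.normal(size=50)
    y = 0.5 * x + rng.normal(scale=0.8, size=50)      # weakly correlated test data
    rho, p, sigma = spearman_with_error(x, y)
    print(f"rho = {rho:.2f} +/- {sigma:.2f} (p = {p:.2g})")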

[ascl:1302.012] ME(SSY)**2: Monte Carlo Code for Star Cluster Simulations

ME(SSY)**2 stands for "Monte-carlo Experiments with Spherically SYmmetric Stellar SYstems." This code simulates the long-term evolution of spherical clusters of stars; it was devised specifically to treat dense galactic nuclei. It is based on the pioneering Monte Carlo scheme proposed by Hénon in the 1970s and includes all relevant physical ingredients (two-body relaxation, stellar mass spectrum, collisions, tidal disruption, etc.). It is essentially a Monte Carlo solution of the Fokker-Planck equation. It can cope with any stellar mass spectrum or velocity distribution. Being a particle-based method, it also allows one to take stellar collisions into account in a very realistic way. This unique code, featuring the most important physical processes, allows million-particle simulations spanning a Hubble time in a few CPU days on standard personal computers and provides a wealth of data rivaled only by N-body simulations. The current version of the software requires routines from "Numerical Recipes in Fortran 77" (http://www.nrbook.com/a/bookfpdf.php).

[ascl:1205.001] Mechanic: Numerical MPI framework for dynamical astronomy

The Mechanic package is a numerical framework for dynamical astronomy, designed to help with massive numerical simulations through efficient task management and unified data storage. The code is built on top of the Message Passing Interface (MPI) and Hierarchical Data Format (HDF5) standards and uses the task farm approach to manage numerical tasks. It relies on a core-module approach: the numerical problem implemented in the user-supplied module is separated from the host code (core), which handles basic setup, data storage, and communication between nodes in a computing pool. It has been tested on large CPU clusters as well as desktop computers. Mechanic may be used for computing dynamical maps, data optimization, or numerical integration.
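
A generic MPI task farm of the kind described (a master distributing independent tasks to workers and collecting results) can be sketched with mpi4py; this is a minimal illustration of the pattern, not Mechanic's actual core or module interface.

    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()
    TAG_TASK, TAG_DONE, TAG_STOP = 1, 2, 3

    def run_task(task):
        return task * task                      # stand-in for the user-supplied numerical module

    if rank == 0:                               # master ("core"): farm out tasks, gather results
        tasks = list(range(64))                 # e.g. the pixels of a dynamical map
        results, active, status = {}, 0, MPI.Status()
        for w in range(1, size):                # seed every worker with one task (or stop it)
            if tasks:
                comm.send(tasks.pop(), dest=w, tag=TAG_TASK)
                active += 1
            else:
                comm.send(None, dest=w, tag=TAG_STOP)
        while active > 0:
            task, result = comm.recv(source=MPI.ANY_SOURCE, tag=TAG_DONE, status=status)
            results[task] = result
            w = status.Get_source()
            if tasks:
                comm.send(tasks.pop(), dest=w, tag=TAG_TASK)
            else:
                comm.send(None, dest=w, tag=TAG_STOP)
                active -= 1
        print(len(results), "tasks completed")  # results would go to unified (e.g. HDF5) storage
    else:                                       # worker: process tasks until told to stop
        status = MPI.Status()
        while True:
            task = comm.recv(source=0, tag=MPI.ANY_TAG, status=status)
            if status.Get_tag() == TAG_STOP:
                break
            comm.send((task, run_task(task)), dest=0, tag=TAG_DONE)

Run with, for example, mpiexec -n 4 python taskfarm.py (the script name is hypothetical).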

[ascl:1106.006] MECI: A Method for Eclipsing Component Identification

We describe an automated method for assigning the most probable physical parameters to the components of an eclipsing binary, using only its photometric light curve and combined colors. With traditional methods, one attempts to optimize a multi-parameter model over many iterations, so as to minimize the chi-squared value. We suggest an alternative method, where one selects pairs of coeval stars from a set of theoretical stellar models, and compares their simulated light curves and combined colors with the observations. This approach greatly reduces the parameter space over which one needs to search, and allows one to estimate the components' masses, radii and absolute magnitudes, without spectroscopic data. We have implemented this method in an automated program using published theoretical isochrones and limb-darkening coefficients. Since it is easy to automate, this method lends itself to systematic analyses of datasets consisting of photometric time series of large numbers of stars, such as those produced by OGLE, MACHO, TrES, HAT, and many other surveys.

[ascl:1203.008] MegaLUT: Correcting ellipticity measurements of galaxies

MegaLUT is a simple and fast method to correct ellipticity measurements of galaxies from the distortion by the instrumental and atmospheric point spread function (PSF), in view of weak lensing shear measurements. The method performs a classification of galaxies and associated PSFs according to measured shape parameters, and builds a lookup table of ellipticity corrections by supervised learning. This new method has been applied to the GREAT10 image analysis challenge, and demonstrates a refined solution that obtains the highly competitive quality factor of Q = 142, without any power spectrum denoising or training. Of particular interest is the efficiency of the method, with a processing time below 3 ms per galaxy on an ordinary CPU.

[ascl:1410.002] MEPSA: Multiple Excess Peak Search Algorithm

MEPSA (Multiple Excess Peak Search Algorithm) identifies peaks within a uniformly sampled time series affected by uncorrelated Gaussian noise. MEPSA scans the time series at different timescales by comparing a given peak candidate with a variable number of adjacent bins. While it was originally conceived for the analysis of gamma-ray burst (GRB) light curves, its usage can be readily extended to other astrophysical transient phenomena whose activity is recorded by different surveys. MEPSA's high flexibility permits the mask of excess patterns it uses to be tailored and optimized without modifying the code.
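
A highly simplified version of such a multi-timescale excess search (compare each bin with the mean of a varying number of adjacent bins and flag significant excesses) might look like the following; MEPSA's actual excess-pattern masks are considerably more elaborate.

    import numpy as np

    def simple_peak_search(rate, err, half_widths=(1, 2, 4, 8), n_sigma=5.0):
        """Flag bins exceeding the mean of their neighbours by n_sigma on any tested timescale."""
        peaks = set()
        n = len(rate)
        for k in half_widths:
            for i in range(k, n - k):
                neigh = np.r_[rate[i - k:i], rate[i + 1:i + 1 + k]]
                excess = rate[i] - neigh.mean()
                sigma = np.sqrt(err[i]**2 + neigh.var(ddof=1) / (2 * k))
                if excess / sigma > n_sigma:
                    peaks.add(i)
        return sorted(peaks)

    rng = np.random.default_rng(0)
    rate = rng.normal(100.0, 10.0, 2000)         # background-dominated light curve
    rate[800] += 100.0                           # inject a single-bin flare
    print(simple_peak_search(rate, np.full(rate.size, 10.0)))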

[ascl:1209.010] MeqTrees: Software package for implementing Measurement Equations

MeqTrees is a software package for implementing Measurement Equations. This makes it uniquely suited for simulation and calibration of radio astronomical data, especially that involving new radio telescopes and observational regimes. MeqTrees is implemented as a Python-based front-end called the meqbrowser, and an efficient (C++-based) computational back-end called the meqserver. Numerical models are defined on the front-end via a Python-based Tree Definition Language (TDL), then rapidly executed on the back-end. The use of TDL facilitates an extremely short turn-around time for experimentation with new ideas. This is also helped by unprecedented visualization capabilities for all final and intermediate results. A flexible data model and a number of important optimizations in the back-end ensure that the numerical performance is comparable to that of hand-written code.

MeqTrees includes a highly capable FITS viewer and sky model manager called Tigger, which can also work as a standalone tool.

[ascl:1511.020] Mercury-T: Tidally evolving multi-planet systems code

Mercury-T calculates the evolution of semi-major axis, eccentricity, inclination, rotation period and obliquity of the planets as well as the rotation period evolution of the host body; it is based on the N-body code Mercury (Chambers 1999, ascl:1201.008). It is flexible, allowing computation of the tidal evolution of systems orbiting any non-evolving object (if its mass, radius, dissipation factor and rotation period are known), but also evolving brown dwarfs (BDs) of mass between 0.01 and 0.08 M⊙, an evolving M-dwarf of 0.1 M⊙, an evolving Sun-like star, and an evolving Jupiter.

[ascl:1201.008] Mercury: A software package for orbital dynamics

Mercury is a new general-purpose software package for carrying out orbital integrations for problems in solar-system dynamics. Suitable applications include studying the long-term stability of the planetary system, investigating the orbital evolution of comets, asteroids or meteoroids, and simulating planetary accretion. Mercury is designed to be versatile and easy to use, accepting initial conditions in either Cartesian coordinates or Keplerian elements in "cometary" or "asteroidal" format, with different epochs of osculation for different objects. Output from an integration consists of osculating elements, written in a machine-independent compressed format, which allows the results of a calculation performed on one platform to be transferred (e.g. via FTP) and decoded on another.

During an integration, Mercury monitors and records details of close encounters, sungrazing events, ejections and collisions between objects. The effects of non-gravitational forces on comets can also be modeled. The package supports integrations using a mixed-variable symplectic routine, the Bulirsch-Stoer method, and a hybrid code for planetary accretion calculations.

[ascl:1305.015] Merger Trees: Formation history of dark matter haloes

Merger Trees uses a Monte Carlo algorithm to generate merger trees describing the formation history of dark matter haloes; the algorithm is implemented in Fortran. The algorithm is a modification of the algorithm of Cole et al. used in the GALFORM semi-analytic galaxy formation model (ascl:1510.005) based on the Extended Press–Schechter theory. It should be applicable to hierarchical models with a wide range of power spectra and cosmological models. It is tuned to be in accurate agreement with the conditional mass functions found in the analysis of merger trees extracted from the Λ cold dark matter Millennium N-body simulation. The code should be a useful tool for semi-analytic models of galaxy formation and for modelling hierarchical structure formation in general.

[ascl:1010.083] MESA: Modules for Experiments in Stellar Astrophysics

Stellar physics and evolution calculations enable a broad range of research in astrophysics. Modules for Experiments in Stellar Astrophysics (MESA) is a suite of open source libraries for a wide range of applications in computational stellar astrophysics. A newly designed 1-D stellar evolution module, MESA star, combines many of the numerical and physics modules for simulations of a wide range of stellar evolution scenarios ranging from very-low mass to massive stars, including advanced evolutionary phases. MESA star solves the fully coupled structure and composition equations simultaneously. It uses adaptive mesh refinement and sophisticated timestep controls, and supports shared memory parallelism based on OpenMP. Independently usable modules provide equation of state, opacity, nuclear reaction rates, and atmosphere boundary conditions. Each module is constructed as a separate Fortran 95 library with its own public interface. Examples include comparisons to other codes and show evolutionary tracks of very low mass stars, brown dwarfs, and gas giant planets; the complete evolution of a 1 Msun star from the pre-main sequence to a cooling white dwarf; the Solar sound speed profile; the evolution of intermediate mass stars through the thermal pulses on the He-shell burning AGB phase; the interior structure of slowly pulsating B Stars and Beta Cepheids; evolutionary tracks of massive stars from the pre-main sequence to the onset of core collapse; stars undergoing Roche lobe overflow; and accretion onto a neutron star.

[ascl:1612.012] Meso-NH: Non-hydrostatic mesoscale atmospheric model

Meso-NH is the non-hydrostatic mesoscale atmospheric model of the French research community jointly developed by the Laboratoire d'Aérologie (UMR 5560 UPS/CNRS) and by CNRM (UMR 3589 CNRS/Météo-France). Meso-NH incorporates a non-hydrostatic system of equations for dealing with scales ranging from large (synoptic) to small (large eddy) scales while calculating budgets and has a complete set of physical parameterizations for the representation of clouds and precipitation. It is coupled to the surface model SURFEX for representation of surface atmosphere interactions by considering different surface types (vegetation, city, ocean, lake) and allows a multi-scale approach through a grid-nesting technique. Meso-NH is versatile, vectorized, parallelized, and operates in 1D, 2D or 3D; it is coupled with a chemistry module (including gas-phase, aerosol, and aqua-phase components) and a lightning module, and has observation operators that compare model output directly with satellite observations, radar, lidar and GPS.

[ascl:1111.009] MESS: Multi-purpose Exoplanet Simulation System

MESS is a Monte Carlo simulation IDL code which uses either the results of the statistical analysis of the properties of discovered planets, or the results of planet formation theories, to build synthetic planet populations fully described in terms of frequency, orbital elements, and physical properties. These populations can then be used either to test the consistency of their properties with the observed population of planets given different detection techniques or to predict the expected number of planets for future surveys. The code can be used to probe the physical and orbital properties of a putative companion within the circumstellar disk of a given star and to constrain the orbital distribution properties of a potential planet population around the members of the TW Hydrae association. Finally, in its predictive mode, MESS has been used to investigate the synergy of future space- and ground-based instrumentation and to identify the mass-period parameter space that will be probed by future surveys for giant and rocky planets.

[ascl:1205.010] Meudon PDR: Atomic & molecular structure of interstellar clouds

The Meudon PDR code computes the atomic and molecular structure of interstellar clouds. It can be used to study the physics and chemistry of diffuse clouds, photodissociation regions (PDRs), dark clouds, or circumstellar regions. The model computes the thermal balance of a stationary plane-parallel slab of gas and dust illuminated by a radiation field and takes into account heating processes such as the photoelectric effect on dust, chemistry, cosmic rays, etc. and cooling resulting from infrared and millimeter emission of the abundant species. Chemistry is solved for any number of species and reactions. Once abundances of atoms and molecules and level excitation of the most important species have been computed at each point, line intensities and column densities can be deduced.

[ascl:1106.013] MGCAMB: Modification of Growth with CAMB

CAMB is a public Fortran 90 code written by Antony Lewis and Anthony Challinor for evaluating cosmological observables. MGCAMB is a modified version of CAMB in which the linearized Einstein equations of General Relativity (GR) are modified. MGCAMB can also be used in CosmoMC to fit different modified-gravity (MG) models to data.

[ascl:1403.017] MGE_FIT_SECTORS: Multi-Gaussian Expansion fits to galaxy images

MGE_FIT_SECTORS performs Multi-Gaussian Expansion (MGE) fits to galaxy images. The MGE parameterizations are useful in the construction of realistic dynamical models of galaxies, PSF deconvolution of images, the correction and estimation of dust absorption effects, and galaxy photometry. The algorithm is well suited for use with multiple-resolution images (e.g. Hubble Space Telescope (HST) and ground-based images).

[ascl:1010.081] MGGPOD: A Monte Carlo Suite for Gamma-Ray Astronomy

We have developed MGGPOD, a user-friendly suite of Monte Carlo codes built around the widely used GEANT (Version 3.21) package. The MGGPOD Monte Carlo suite and documentation are publicly available for download. MGGPOD is an ideal tool for supporting the various stages of gamma-ray astronomy missions, ranging from the design, development, and performance prediction through calibration and response generation to data reduction. In particular, MGGPOD is capable of simulating ab initio the physical processes relevant for the production of instrumental backgrounds. These include the build-up and delayed decay of radioactive isotopes as well as the prompt de-excitation of excited nuclei, both of which give rise to a plethora of instrumental gamma-ray background lines in addition to continuum backgrounds.

[ascl:1402.035] MGHalofit: Modified Gravity extension of Halofit

MGHalofit is a modified gravity extension of the HALOFIT fitting formula for the matter power spectrum and of its improvement by Takahashi et al. MGHalofit is implemented in MGCAMB, which is based on CAMB. MGHalofit calculates the nonlinear matter power spectrum P(k) for the Hu-Sawicki model. Comparing MGHalofit predictions at various redshifts (z <= 1) to f(R) simulations shows that the accuracy on P(k) is 6% at k < 1 h/Mpc and 12% at 1 < k < 10 h/Mpc.

[ascl:1511.007] MHF: MLAPM Halo Finder

MHF is a Dark Matter halo finder that is based on the refinement grids of MLAPM. The grid structure of MLAPM adaptively refines around high-density regions with an automated refinement algorithm, thus naturally "surrounding" the Dark Matter halos, as they are simply manifestations of over-densities within (and exterior to) the underlying host halo. Using this grid structure, MHF restructures the hierarchy of nested isolated MLAPM grids into a "grid tree". The densest cell at the end of a tree branch marks the center of a prospective Dark Matter halo. All gravitationally bound particles about this center are collected to obtain the final halo catalog. MHF automatically finds halos within halos within halos.

[ascl:1205.003] MIA+EWS: MIDI data reduction tool

MIA+EWS is a package of two data reduction tools for MIDI data which use power-spectrum analysis or the information contained in the spectrally-dispersed fringe measurements to estimate the correlated flux and the visibility as a function of wavelength in the N-band. MIA, which stands for MIDI Interactive Analysis, uses a Fast Fourier Transform to calculate the Fourier amplitudes of the fringe packets, from which the correlated flux and visibility are derived. EWS stands for Expert Work-Station, a collection of IDL tools that applies coherent visibility analysis to reduce MIDI data. The EWS package allows the user to control and examine almost every aspect of MIDI data and its reduction. The usual data products are the correlated fluxes, total fluxes, and differential phase.

[ascl:1303.007] micrOMEGAs: Calculation of dark matter properties

micrOMEGAs calculates the properties of cold dark matter in a generic model of particle physics. First developed to compute the relic density of dark matter, the code also computes the rates for dark matter direct and indirect detection. The code provides the mass spectrum, cross-sections, relic density, and exotic fluxes of gamma rays, positrons, and antiprotons. The propagation of charged particles in the Galactic halo is handled with a module that allows the propagation parameters to be modified easily. The cross-sections for both spin-dependent and spin-independent interactions of WIMPs on protons are computed automatically, as are the rates for WIMP scattering on nuclei in a large detector. Annihilation cross-sections of the dark matter candidate at zero velocity, relevant for indirect detection of dark matter, are also computed automatically.

[ascl:1010.008] midIR_sensitivity: Mid-infrared astronomy with METIS

midIR_sensitivity is IDL code that calculates the sensitivity of a ground-based mid-infrared instrument for astronomy. The code was written for the Phase A study of the instrument METIS (http://www.strw.leidenuniv.nl/metis), the Mid-Infrared E-ELT Imager and Spectrograph for the 42-m European Extremely Large Telescope. The model uses a detailed set of input parameters for site characteristics and atmospheric profiles, optical design, and thermal background. The code and all input parameters are highly tailored to the particular design parameters of the E-ELT and METIS; however, the program is structured in such a way that the parameters can easily be adjusted for a different system, or alternative input files used.

[ascl:1511.012] milkywayproject_triggering: Correlation functions for two catalog datasets

This triggering code calculates the correlation function between two astrophysical data catalogs using the Landy-Szalay estimator generalized for heterogeneous datasets (Landy & Szalay 1993; Bradshaw et al. 2011), or the auto-correlation function of a single dataset. It assumes that one catalog has positional information as well as an object size (effective radius), and the other only positional information.
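
For two catalogs D1 and D2 with random counterparts R1 and R2, the generalized Landy-Szalay cross-correlation estimator is w(theta) = (D1D2 - D1R2 - D2R1 + R1R2) / R1R2 with normalized pair counts; a brute-force sketch on a small flat patch (illustrative only, not this package's own implementation) follows.

    import numpy as np

    def pair_counts(a, b, bins):
        """Histogram of pair separations between point sets a and b (N x 2 arrays)."""
        d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
        return np.histogram(d.ravel(), bins=bins)[0]

    def cross_correlation(d1, d2, r1, r2, bins):
        """Generalized Landy-Szalay estimator with pair counts normalized by N_a * N_b."""
        norm = lambda a, b: 1.0 / (len(a) * len(b))
        d1d2 = pair_counts(d1, d2, bins) * norm(d1, d2)
        d1r2 = pair_counts(d1, r2, bins) * norm(d1, r2)
        d2r1 = pair_counts(d2, r1, bins) * norm(d2, r1)
        r1r2 = pair_counts(r1, r2, bins) * norm(r1, r2)
        return (d1d2 - d1r2 - d2r1 + r1r2) / r1r2

    rng = np.random.default_rng(3)
    d1 = rng.uniform(0, 1, (300, 2))                       # catalog 1 positions
    d2 = d1[:150] + rng.normal(0, 0.01, (150, 2))          # catalog 2, clustered near catalog 1
    r1 = rng.uniform(0, 1, (1500, 2))                      # randoms for catalog 1
    r2 = rng.uniform(0, 1, (1500, 2))                      # randoms for catalog 2
    bins = np.linspace(0.005, 0.2, 15)
    print(np.round(cross_correlation(d1, d2, r1, r2, bins), 2))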

[submitted] millennium-tap-query: A Python Tool to Query the Millennium Simulation UWS/TAP client

millennium-tap-query is a simple wrapper for the Python package requests to deal with connections to the Millennium TAP Web Client. With this tool you can perform basic or advanced queries to the Millennium Simulation database and download the data products. millennium-tap-query is similar to the TAP query tool in the German Astrophysical Virtual Observatory (GAVO) VOtables package.
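
The underlying pattern, posting a query to a TAP synchronous endpoint with the requests package and authenticating, looks roughly like the sketch below; the URL, table name, credentials, and query are placeholders rather than the tool's actual interface (consult the GAVO/Millennium documentation for the real service details).

    import requests

    # Placeholder endpoint and query; not the real Millennium service URL or schema.
    TAP_SYNC_URL = "https://example.org/millennium/tap/sync"
    query = "SELECT TOP 10 * FROM example_halo_table WHERE snapnum = 63"

    response = requests.post(
        TAP_SYNC_URL,
        data={
            "REQUEST": "doQuery",   # standard IVOA TAP parameters
            "LANG": "ADQL",
            "FORMAT": "csv",
            "QUERY": query,
        },
        auth=("username", "password"),   # placeholder credentials
        timeout=60,
    )
    response.raise_for_status()
    print(response.text[:500])           # first few rows of the returned table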

[ascl:0101.001] MILLISEARCH: A Search for Millilensing in BATSE GRB Data

The millisearch.for code was used to generate a new search for the gravitational lens effects of a significant cosmological density of supermassive compact objects (SCOs) on gamma-ray bursts. No signal attributable to millilensing was found. We inspected the timing data of 774 BATSE-triggered GRBs for evidence of millilensing: repeated peaks similar in light-curve shape and spectra. Our null detection leads us to conclude that, in all candidate universes simulated, Omega_SCO < 0.1 is favored for 10^5 < M_SCO/M_sun < 10^9, while in some universes and mass ranges the density limits are as much as 10 times lower. Therefore, a cosmologically significant population of SCOs near globular cluster mass neither came out of the primordial universe, nor condensed at recombination.

[ascl:1302.006] Minerva: Cylindrical coordinate extension for Athena

Minerva is a cylindrical coordinate extension of the Athena astrophysical MHD code of Stone, Gardiner, Teuben, and Hawley. The extension follows the approach of Athena's original developers and has been designed to alter the existing Cartesian-coordinates code as minimally and transparently as possible. The numerical equations in cylindrical coordinates are formulated to maintain consistency with constrained transport (CT), a central feature of the Athena algorithm, while making use of previously implemented code modules such as the Riemann solvers. Angular momentum transport, which is critical in astrophysical disk systems dominated by rotation, is treated carefully.

[ascl:1106.007] MIRIAD: Multi-channel Image Reconstruction, Image Analysis, and Display

MIRIAD is a radio interferometry data-reduction package, designed for taking raw visibility data through calibration to the image analysis stage. It has been designed to handle any interferometric array, with working examples for BIMA, CARMA, SMA, WSRT, and ATCA. A separate version for ATCA is available, which differs in a few minor ways from the CARMA version.

[ascl:1110.025] MIS: A Miriad Interferometry Singledish Toolkit

MIS is a pipeline toolkit using the package MIRIAD to combine Interferometric and Single Dish data. This was prompted by our observations made with the Combined Array For Research in Millimeter-wave Astronomy (CARMA) interferometer of the star-forming region NGC 1333, a large survey highlighting the new 23-element and singledish observing modes. The project consists of 20 CARMA datasets each containing interferometric as well as simultaneously obtained single dish data, for 3 molecular spectral lines and continuum, in 527 different pointings, covering an area of about 8 by 11 arcminutes. A small group of collaborators then shared this toolkit and their parameters via CVS, and scripts were developed to ensure uniform data reduction across the group. The pipeline was run end-to-end each night that new observations were obtained, producing maps that contained all the data to date. This approach could serve as a model for repeated calibration and mapping of large mixed-mode correlation datasets from ALMA.

[ascl:1010.062] MissFITS: Basic Maintenance and Packaging Tasks on FITS Files

MissFITS is a program that performs basic maintenance and packaging tasks on FITS files using an optimized FITS library. MissFITS can:

  • add, edit, and remove FITS header keywords;
  • split and join Multi-Extension-FITS (MEF) files;
  • unpile and pile FITS data-cubes; and,
  • create, check, and update FITS checksums, using R. Seaman’s protocol.

[ascl:1505.011] missForest: Nonparametric missing value imputation using random forest

missForest imputes missing values particularly in the case of mixed-type data. It uses a random forest trained on the observed values of a data matrix to predict the missing values. It can be used to impute continuous and/or categorical data including complex interactions and non-linear relations. It yields an out-of-bag (OOB) imputation error estimate without the need of a test set or elaborate cross-validation and can be run in parallel to save computation time. missForest has been used to, among other things, impute variable star colors in an All-Sky Automated Survey (ASAS) dataset of variable stars with no NOMAD match.
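
The same idea can be sketched in Python with scikit-learn's IterativeImputer wrapped around a random-forest regressor; this is an analogue of the approach for continuous variables only (the missForest R package also handles categorical variables and reports an out-of-bag error), not the package itself.

    import numpy as np
    # IterativeImputer is still flagged as experimental in scikit-learn
    from sklearn.experimental import enable_iterative_imputer  # noqa: F401
    from sklearn.impute import IterativeImputer
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 4))
    X[:, 3] = 2.0 * X[:, 0] - X[:, 1] + rng.normal(scale=0.1, size=500)   # correlated column
    mask = rng.random(X.shape) < 0.2                 # knock out 20% of the entries
    X_missing = np.where(mask, np.nan, X)

    imputer = IterativeImputer(
        estimator=RandomForestRegressor(n_estimators=50, random_state=0),
        max_iter=10, random_state=0)
    X_filled = imputer.fit_transform(X_missing)

    rmse = np.sqrt(np.mean((X_filled[mask] - X[mask]) ** 2))
    print(f"imputation RMSE on the held-out entries: {rmse:.3f}")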

[ascl:1409.001] mixT: single-temperature fit for a multi-component thermal plasma

mixT accurately predicts the temperature T derived from a single-temperature fit to a multi-component thermal plasma. It can be applied in the deprojection analysis of objects with temperature and metallicity gradients, for correction of PSF effects, for consistent comparison of numerical simulations of galaxy clusters and groups with X-ray observations, and for estimating how emission from undetected components can bias the global X-ray spectral analysis.

[ascl:1206.010] mkj_libs: Helper routines for plane-fitting & analysis tools

mkj_libs provides a set of helper routines (vector operations, astrometry, statistical analysis of spherical data) for the main plane-fitting and analysis tools.

[ascl:0104.001] MLAPM: Simulating Structure Formation from Collisionless Matter

We present a computer code written in C that is designed to simulate structure formation from collisionless matter. The code is purely grid-based and uses a recursively refined Cartesian grid to solve Poisson's equation for the potential, rather than obtaining the potential from a Green's function. Refinements can have arbitrary shapes and in practice closely follow the complex morphology of the density field that evolves. The timestep shortens by a factor two with each successive refinement. It is argued that an appropriate choice of softening length is of great importance and that the softening should be at all points an appropriate multiple of the local inter-particle separation. Unlike tree and P3M codes, multigrid codes automatically satisfy this requirement. We show that at early times and low densities in cosmological simulations, the softening needs to be significantly smaller relative to the inter-particle separation than in virialized regions. Tests of the ability of the code's Poisson solver to recover the gravitational fields of both virialized halos and Zel'dovich waves are presented, as are tests of the code's ability to reproduce analytic solutions for plane-wave evolution. The times required to conduct a LCDM cosmological simulation for various configurations are compared with the times required to complete the same simulation with the ART, AP3M and GADGET codes. The power spectra, halo mass functions and halo-halo correlation functions of simulations conducted with different codes are compared.

[ascl:1403.003] MLZ: Machine Learning for photo-Z

The parallel Python framework MLZ (Machine Learning and photo-Z) computes fast and robust photometric redshift PDFs using machine learning algorithms. It offers a supervised technique, TPZ, based on prediction trees and random forests that can be used for regression or classification problems, and an unsupervised method, SOMz, based on self-organizing maps and a random atlas. These machine learning implementations can be efficiently combined into a more powerful one, resulting in robust and accurate probability distributions for photometric redshifts.

[ascl:1412.010] MMAS: Make Me A Star

Make Me A Star (MMAS) quickly generates stellar collision remnants and can be used in combination with realistic dynamical simulations of star clusters that include stellar collisions. The code approximates the merger process (including shock heating, hydrodynamic mixing, mass ejection, and angular momentum transfer) with simple algorithms based on conservation laws and a basic qualitative understanding of the hydrodynamics. These simple models agree very well with those from SPH (smoothed particle hydrodynamics) calculations of stellar collisions, and the subsequent stellar evolution of these models also matches closely that of the more accurate hydrodynamic models.

[ascl:1110.010] MOCASSIN: MOnte CArlo SimulationS of Ionized Nebulae

MOCASSIN is a fully 3D or 2D photoionisation and dust radiative transfer code which employs a Monte Carlo approach to the transfer of radiation through media of arbitrary geometry and density distribution. Written in Fortran, it was originally developed for the modelling of photoionised regions like HII regions and planetary nebulae and has since expanded and been applied to a variety of astrophysical problems, including modelling clumpy dusty supernova envelopes, star-forming galaxies, protoplanetary disks, and inner-shell fluorescence emission in the photospheres of stars and disk atmospheres. The code can deal with arbitrary Cartesian grids of variable resolution; it has successfully been used to model complex density fields from SPH calculations and can deal with ionising radiation extending from the Lyman edge to the X-ray regime. The dust and gas microphysics is fully coupled both in the radiation transfer and in the thermal balance.

[ascl:1010.009] ModeCode: Bayesian Parameter Estimation for Inflation

ModeCode is a publicly available code that computes the primordial scalar and tensor power spectra for single field inflationary models. ModeCode solves the inflationary mode equations numerically, avoiding the slow roll approximation. It provides an efficient and robust numerical evaluation of the inflationary perturbation spectrum, and allows the free parameters in the inflationary potential to be estimated within an MCMC computation. ModeCode also allows the estimation of reheating uncertainties once a potential has been specified. It is interfaced with CAMB and CosmoMC to compute cosmic microwave background angular power spectra and perform likelihood analysis and parameter estimation. It can be run as a standalone code as well. Errors in the results from ModeCode contribute negligibly to the error budget for analyses of data from Planck or other next generation experiments.

[ascl:1109.023] MOKA: A New Tool for Strong Lensing Studies

We present a new algorithm for simulating the gravitational lensing signal from cluster-sized haloes: MOKA. This algorithm implements the most recent results from numerical simulations to create realistic lenses with properties independent of numerical resolution. We perform systematic studies of the strong lensing cross section as a function of halo structure. We find that the cross sections depend most strongly on the concentration and on the inner slope of the density profile of a halo. With these properties fixed, further important contributions come from halo triaxiality and the presence of a bright central galaxy.

[ascl:1501.013] Molecfit: Telluric absorption correction tool

Molecfit corrects astronomical observations for atmospheric absorption features based on fitting synthetic transmission spectra to the astronomical data, which saves a significant amount of valuable telescope time and increases the instrumental efficiency. Molecfit can also estimate molecular abundances, especially the water vapor content of the Earth’s atmosphere. The tool can be run from a command-line or more conveniently through a GUI.

[ascl:1212.004] MOLIERE-5: Forward and inversion model for sub-mm wavelengths

MOLIERE-5 (Microwave Observation LIne Estimation and REtrieval) is a versatile forward and inversion model for the millimeter and submillimeter wavelength range. The MOLIERE-5 forward model includes modules for the calculation of absorption coefficients, radiative transfer, and instrumental characteristics. The radiative transfer model is supplemented by a sensitivity module for estimating the contribution to the spectrum of each catalog line at its center frequency, enabling the model to effectively filter out small spectral lines. The instrument model consists of several independent modules, including the calculation of the convolution of spectra and weighting functions with the spectrometer response functions. The instrument module also provides several options for modeling frequency-switched observations. The MOLIERE-5 inversion model performs linear optimal estimation, a least-squares retrieval method which uses statistical a priori knowledge of the retrieved parameters to regularize ill-posed inversion problems, and computes diagnostics such as the measurement and smoothing error covariance matrices along with contribution and averaging kernel functions.
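
In linear optimal estimation the retrieved state combines the measurement and the a priori through the standard update x_hat = x_a + (K^T Se^-1 K + Sa^-1)^-1 K^T Se^-1 (y - K x_a), and the averaging kernel is A = G K with G the gain (contribution) matrix. A minimal numpy sketch with a toy Jacobian (not MOLIERE-5's own forward model or modules):

    import numpy as np

    rng = np.random.default_rng(1)
    m, n = 40, 10                         # spectral channels, retrieval levels

    K = rng.normal(size=(m, n))           # toy Jacobian (weighting functions)
    x_true = np.linspace(1.0, 2.0, n)     # toy "true" atmospheric profile
    x_a = np.full(n, 1.5)                 # a priori profile
    S_a = np.eye(n) * 0.25                # a priori covariance
    S_e = np.eye(m) * 0.01                # measurement-noise covariance
    y = K @ x_true + rng.multivariate_normal(np.zeros(m), S_e)

    S_e_inv, S_a_inv = np.linalg.inv(S_e), np.linalg.inv(S_a)
    S_hat = np.linalg.inv(K.T @ S_e_inv @ K + S_a_inv)       # retrieval covariance
    G = S_hat @ K.T @ S_e_inv                                # gain / contribution matrix
    x_hat = x_a + G @ (y - K @ x_a)                          # retrieved profile
    A = G @ K                                                # averaging kernel matrix

    print("retrieved profile:", np.round(x_hat, 2))
    print("degrees of freedom (trace of A):", round(float(np.trace(A)), 1))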

[ascl:1206.004] MOLSCAT: MOLecular SCATtering

MOLSCAT is a FORTRAN code for quantum mechanical (coupled channel) solution of the nonreactive molecular scattering problem and was developed to obtain collision rates for molecules in the interstellar gas which are needed to understand microwave and infrared astronomical observations. The code is implemented for various types of collision partners. In addition to the essentially exact close coupling method several approximate methods, including the Coupled States and Infinite Order Sudden approximations, are provided.

[ascl:1010.036] Montage: An Astronomical Image Mosaicking Toolkit

Montage is an open source code toolkit for assembling Flexible Image Transport System (FITS) images into custom mosaics. It runs on all common Linux/Unix platforms, on desktops, clusters and computational grids, and supports all World Coordinate System (WCS) projections and common coordinate systems. Montage preserves spatial and calibration fidelity of input images, processes 40 million pixels in up to 32 minutes on 128 nodes on a Linux cluster, and provides independent engines for analyzing the geometry of images on the sky, re-projecting images, rectifying background emission to a common level, and co-adding images. It offers convenient tools for managing and manipulating large image files.

[ascl:1502.006] Montblanc: GPU accelerated Radio Interferometer Measurement Equations in support of Bayesian Inference for Radio Observations

Montblanc, written in Python, is a GPU implementation of the radio interferometer measurement equation (RIME) in support of the Bayesian inference for radio observations (BIRO) technique. The parameter space that BIRO explores results in tens of thousands of computationally expensive RIME evaluations before reduction to a single chi-squared value. The RIME is calculated over four dimensions (time, baseline, channel, and source), and the values in this 4D space can be calculated independently; the RIME is therefore particularly amenable to a parallel implementation accelerated by graphics processing units (GPUs). Montblanc is implemented for NVIDIA's CUDA architecture and outperforms MeqTrees (ascl:1209.010) and OSKAR.
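
For unpolarized point sources, the phase part of the RIME reduces to a sum over sources, V(u, v, w) = sum_i S_i exp(-2 pi j (u l_i + v m_i + w (n_i - 1))); a tiny CPU-side numpy sketch of that term (not Montblanc's CUDA implementation, which handles the full polarized formalism) is shown below.

    import numpy as np

    def point_source_visibilities(uvw, lm, flux):
        """V(u, v, w) = sum_i S_i * exp(-2*pi*j*(u*l_i + v*m_i + w*(n_i - 1))), unpolarized."""
        l, m = lm[:, 0], lm[:, 1]
        n = np.sqrt(1.0 - l**2 - m**2)
        phase = (uvw[:, 0, None] * l + uvw[:, 1, None] * m
                 + uvw[:, 2, None] * (n - 1.0))              # per (sample, source), in wavelengths
        return (flux * np.exp(-2j * np.pi * phase)).sum(axis=1)

    rng = np.random.default_rng(0)
    uvw = rng.uniform(-1000, 1000, (500, 3))                 # baseline coordinates in wavelengths
    lm = np.array([[0.0, 0.0], [0.01, -0.005]])              # source direction cosines
    flux = np.array([1.0, 0.4])                              # source fluxes in Jy
    print(point_source_visibilities(uvw, lm, flux)[:3])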

[ascl:1307.002] Monte Python: Monte Carlo code for CLASS in Python

Monte Python is a parameter inference code which combines the flexibility of the python language and the robustness of the cosmological code CLASS into a simple and easy to manipulate Monte Carlo Markov Chain code.

[ascl:1202.009] MOOG: LTE line analysis and spectrum synthesis

MOOG performs a variety of LTE line analysis and spectrum synthesis tasks. The typical use of MOOG is to assist in the determination of the chemical composition of a star. The basic equations of LTE stellar line analysis are followed. The coding is in various subroutines that are called from a few driver routines; these routines are written in standard FORTRAN. The standard MOOG version has been developed on unix, linux and macintosh computers.

One of the chief assets of MOOG is its ability to do on-line graphics. The plotting commands are given within the FORTRAN code. MOOG uses the graphics package SM, chosen for its ease of implementation in FORTRAN codes. Plotting calls are concentrated in just a few routines, and it should be possible for users of other graphics packages to substitute other appropriate FORTRAN commands.

[ascl:1308.018] MoogStokes: Zeeman polarized radiative transfer

MOOGStokes is a version of the MOOG one-dimensional local thermodynamic equilibrium radiative transfer code that incorporates a Stokes vector treatment of polarized radiation through a magnetic medium. It consists of three complementary programs that together can synthesize the disk-averaged emergent spectrum of a star with a magnetic field. The MOOGStokes package synthesizes emergent spectra of stars with magnetic fields in a familiar computational framework and produces disk-averaged spectra for all Stokes vectors (I, Q, U, V), normalized by the continuum.

[ascl:1111.006] MOPEX: MOsaicker and Point source EXtractor

MOPEX (MOsaicker and Point source EXtractor) is a package for reducing and analyzing imaging data, as well as MIPS SED data. MOPEX includes the point source extraction package, APEX.
MOPEX is designed to allow the user to:

  • perform sophisticated background matching of individual data frames
  • mosaic the individual frames downloaded from the Spitzer archive
  • perform both temporal and spatial outlier rejection during mosaicking
  • apply offline pointing refinement for MIPS data (refinement is already applied to IRAC data)
  • perform source detection on the mosaics using APEX
  • compute aperture photometry or PRF-fitting photometry for point sources
  • perform interpolation, coaddition, and spectrum extraction of MIPS SED images.
MOPEX comes with two different interfaces (GUI and command line), which are packaged together. We recommend that all new users start with the GUI, which is more user-friendly than the command-line interface.

[ascl:1303.011] MOPSIC: Extended Version of MOPSI

MOPSIC was created to analyze bolometer data but can be used for many other tasks. It is an extension of MOPSI, which was merged with the command interpreter of GILDAS. For data reduction, MOPSIC uses a special method to calculate the chopped signal; this gives much better results than the straight difference of the signals obtained at the two chopper positions. In addition, there are scripts to reduce pointings and skydips and to calculate the RCPs (Receiver Channel Parameters) from calibration maps. MOPSIC offers a much broader range of applications, including advanced planning functions for mapping and on-off observations, post-reduction data analysis and processing, and even reduction of non-bolometer data (optical, IR, spectroscopy).

[ascl:1611.003] MPDAF: MUSE Python Data Analysis Framework

MPDAF, the MUSE Python Data Analysis Framework, provides tools to work with MUSE-specific data (for example, raw data and pixel tables), and with more general data such as spectra, images, and data cubes. Originally written to work with MUSE data, it can also be used for other data, such as that from the Hubble Space Telescope. MPDAF also provides MUSELET, a SExtractor-based tool to detect emission lines in a data cube, and a format to gather all the information on a source in one FITS file. MPDAF was developed and is maintained by CRAL (Centre de Recherche Astrophysique de Lyon).

[ascl:1208.019] MPFIT: Robust non-linear least squares curve fitting

These IDL routines provide a robust and relatively fast way to perform least-squares curve and surface fitting. The algorithms are translated from MINPACK-1, which is a rugged minimization routine found on Netlib, and distributed with permission. This algorithm is more desirable than CURVEFIT because it is generally more stable and less likely to crash than the brute-force approach taken by CURVEFIT, which is based upon Numerical Recipes.

[ascl:1304.014] MPgrafic: A parallel MPI version of Grafic-1

MPgrafic is a parallel MPI version of Grafic-1 which can produce large cosmological initial conditions on a cluster without requiring shared memory. The real Fourier transforms are carried out in place using FFTW while minimizing the amount of memory used (at the expense of performance), in the spirit of Grafic-1. The writing of the output file is also carried out in parallel. In addition to the technical parallelization, it provides three extensions over Grafic-1:

  • it can produce power spectra with baryon wiggles (DJ Eisenstein and W. Hu, Ap. J. 496);
  • it has the optional ability to load a lower resolution noise map corresponding to the low frequency component which will fix the larger scale modes of the simulation (extra flag 0/1 at the end of the input process) in the spirit of Grafic-2;
  • it can be used in conjunction with constrfield, which generates initial condition phases from a list of local constraints on density, tidal field, density gradient, and velocity.

[ascl:1208.014] MPI-AMRVAC: MPI-Adaptive Mesh Refinement-Versatile Advection Code

MPI-AMRVAC is an MPI-parallelized Adaptive Mesh Refinement code, with some heritage (in the solver part) to the Versatile Advection Code or VAC, initiated by Gábor Tóth at the Astronomical Institute at Utrecht in November 1994, with help from Rony Keppens since 1996. Previous incarnations of the Adaptive Mesh Refinement version of VAC were of restricted use only, and have been used for basic research in AMR strategies, or for well-targeted applications. This MPI version uses a full octree block-based approach, and allows for general orthogonal coordinate systems. MPI-AMRVAC aims to advance any system of (primarily hyperbolic) partial differential equations by a number of different numerical schemes. The emphasis is on (near) conservation laws, with shock-dominated problems as a main research target. The actual equations are stored in separate modules, can be added if needed, and they can be selected by a simple configuration of the VACPP preprocessor. The dimensionality of the problem is also set through VACPP. The numerical schemes are able to handle discontinuities and smooth flows as well.

[ascl:1106.022] MPI-Defrost: Extension of Defrost to MPI-based Cluster Environment

MPI-Defrost extends Frolov’s Defrost to an MPI-based cluster environment. This version has been restricted to a single field. Restoring two-field support should be straightforward, but will require some code changes. Some output options may also not be fully supported under MPI.

This code was produced to support our own work, and has been made available for the benefit of anyone interested in either oscillon simulations or an MPI-capable version of Defrost, and it is provided on an "as-is" basis. Andrei Frolov is the primary developer of Defrost and we thank him for placing his work under the GPL (GNU General Public License), thus allowing us to distribute this modified version.

[ascl:1212.003] MPWide: Light-weight communication library for distributed computing

MPWide is a light-weight communication library for distributed computing. It is specifically developed to allow message passing over long-distance networks using path-specific optimizations. An early version of MPWide was used in the Gravitational Billion Body Project to allow simulations across multiple supercomputers.

[ascl:1102.005] MRLENS: Multi-Resolution methods for gravitational LENSing

The MRLENS package offers a new method for the reconstruction of weak lensing mass maps. It uses the multiscale entropy concept, which is based on wavelets, and the False Discovery Rate which allows us to derive robust detection levels in wavelet space. We show that this new restoration approach outperforms several standard techniques currently used for weak shear mass reconstruction. This method can also be used to separate E and B modes in the shear field, and thus test for the presence of residual systematic effects. We concentrate on large blind cosmic shear surveys, and illustrate our results using simulated shear maps derived from N-Body Lambda-CDM simulations with added noise corresponding to both ground-based and space-based observations.

[ascl:1504.016] MRrelation: Posterior predictive mass distribution

MRrelation calculates the posterior predictive mass distribution for an individual planet. The probabilistic mass-radius relationship (M-R relation) is evaluated within a Bayesian framework, which quantifies both the intrinsic dispersion of the relation and the uncertainties on the M-R relation parameters.

[ascl:1112.010] MRS3D: 3D Spherical Wavelet Transform on the Sphere

Future cosmological surveys will provide 3D large scale structure maps with large sky coverage, for which a 3D Spherical Fourier-Bessel (SFB) analysis is natural. Wavelets are particularly well-suited to the analysis and denoising of cosmological data, but a spherical 3D isotropic wavelet transform does not currently exist to analyse spherical 3D data. We present a new fast Discrete Spherical Fourier-Bessel Transform (DSFBT) based on both a discrete Bessel Transform and the HEALPIX angular pixelisation scheme. We tested the 3D wavelet transform and as a toy-application, applied a denoising algorithm in wavelet space to the Virgo large box cosmological simulations and found we can successfully remove noise without much loss to the large scale structure. The new spherical 3D isotropic wavelet transform, called MRS3D, is ideally suited to analysing and denoising future 3D spherical cosmological surveys; it uses a novel discrete spherical Fourier-Bessel Transform. MRS3D is based on two packages, IDL and Healpix and can be used only if these two packages have been installed.

[ascl:1701.006] MSWAVEF: Momentum-Space Wavefunctions

MSWAVEF calculates hydrogenic and non-hydrogenic momentum-space electronic wavefunctions. Such wavefunctions are often required to calculate various collision processes, such as excitation and line broadening cross sections. The hydrogenic functions are calculated using the standard analytical expressions. The non-hydrogenic functions are calculated within quantum defect theory according to the method of Hoang Binh and van Regemorter (1997). Required Hankel transforms have been determined analytically for angular momentum quantum numbers ranging from zero to 13 using Mathematica. Calculations for higher angular momentum quantum numbers are possible, but slow (since calculated numerically). The code is written in IDL.

[ascl:1506.004] multiband_LS: Multiband Lomb-Scargle Periodograms

The multiband periodogram is a general extension of the well-known Lomb-Scargle approach for detecting periodic signals in time-domain data. In addition to advantages of the Lomb-Scargle method such as treatment of non-uniform sampling and heteroscedastic errors, the multiband periodogram significantly improves period finding for randomly sampled multiband light curves (e.g., Pan-STARRS, DES and LSST). The light curves in each band are modeled as arbitrary truncated Fourier series, with the period and phase shared across all bands.
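
The core idea, a truncated Fourier series per band fit by weighted linear least squares with the period shared across all bands, can be sketched as follows (a simplified illustration in which each band gets its own phases and amplitudes, not the reference implementation).

    import numpy as np

    def band_chi2(t, y, dy, period, n_terms=2):
        """Chi^2 of the best-fit truncated Fourier series at a given trial period."""
        omega = 2 * np.pi / period
        cols = [np.ones_like(t)]
        for k in range(1, n_terms + 1):
            cols += [np.sin(k * omega * t), np.cos(k * omega * t)]
        X = np.column_stack(cols) / dy[:, None]              # weighted design matrix
        coeffs, *_ = np.linalg.lstsq(X, y / dy, rcond=None)
        return np.sum((y / dy - X @ coeffs) ** 2)

    def multiband_best_period(data, periods, n_terms=2):
        """data maps band name -> (t, y, dy); the trial period is shared across all bands."""
        total = np.array([sum(band_chi2(t, y, dy, p, n_terms) for t, y, dy in data.values())
                          for p in periods])
        return periods[np.argmin(total)]

    rng = np.random.default_rng(4)
    true_period, data = 0.763, {}
    for band, amp in [("g", 1.0), ("r", 0.7)]:
        t = np.sort(rng.uniform(0, 100, 60))                  # randomly sampled epochs
        y = amp * np.sin(2 * np.pi * t / true_period) + rng.normal(0, 0.05, t.size)
        data[band] = (t, y, np.full(t.size, 0.05))

    periods = np.linspace(0.5, 1.0, 5001)
    print("recovered period:", multiband_best_period(data, periods))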

[ascl:1109.006] MultiNest: Efficient and Robust Bayesian Inference

We present further development and the first public release of our multimodal nested sampling algorithm, called MultiNest. This Bayesian inference tool calculates the evidence, with an associated error estimate, and produces posterior samples from distributions that may contain multiple modes and pronounced (curving) degeneracies in high dimensions. The developments presented here lead to further substantial improvements in sampling efficiency and robustness, as compared to the original algorithm presented in Feroz & Hobson (2008), which itself significantly outperformed existing MCMC techniques in a wide range of astrophysical inference problems. The accuracy and economy of the MultiNest algorithm are demonstrated by application to two toy problems and to a cosmological inference problem focusing on the extension of the vanilla ΛCDM model to include spatial curvature and a varying equation of state for dark energy. The MultiNest software is fully parallelized using MPI and includes an interface to CosmoMC. It will also be released as part of the SuperBayeS package for the analysis of supersymmetric theories of particle physics.
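
The nested-sampling loop underlying such tools, in its simplest rejection-sampling form (without MultiNest's ellipsoidal decomposition, mode separation, or MPI parallelism), can be sketched on a toy problem as follows.

    import numpy as np

    rng = np.random.default_rng(0)

    def log_like(theta):
        """Toy 2-D Gaussian likelihood peaked at (0.5, 0.5), on a unit-square uniform prior."""
        return -0.5 * np.sum(((theta - 0.5) / 0.1) ** 2)

    n_live, n_iter = 200, 1400
    live = rng.uniform(size=(n_live, 2))                    # live points drawn from the prior
    live_logL = np.array([log_like(p) for p in live])

    log_Z = -np.inf                                         # running log-evidence
    for i in range(1, n_iter + 1):
        worst = int(np.argmin(live_logL))
        log_weight = -i / n_live - np.log(n_live)           # prior-volume shell ~ exp(-i/n_live)/n_live
        log_Z = np.logaddexp(log_Z, log_weight + live_logL[worst])
        threshold = live_logL[worst]
        cand = rng.uniform(size=2)                          # replace the worst point by a new prior
        cand_logL = log_like(cand)                          # draw above the likelihood threshold
        while cand_logL <= threshold:
            cand = rng.uniform(size=2)
            cand_logL = log_like(cand)
        live[worst], live_logL[worst] = cand, cand_logL

    # remaining live points share the leftover prior volume exp(-n_iter / n_live)
    log_rem = -n_iter / n_live + live_logL.max() + np.log(np.mean(np.exp(live_logL - live_logL.max())))
    log_Z = np.logaddexp(log_Z, log_rem)
    print("log-evidence estimate:", round(log_Z, 2), "; analytic value:", round(np.log(2 * np.pi * 0.1**2), 2))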

[ascl:1109.008] Multipole Vectors: Decomposing Functions on a Sphere

We propose a novel representation of cosmic microwave anisotropy maps, where each multipole order l is represented by l unit vectors pointing in directions on the sky and an overall magnitude. These "multipole vectors and scalars" transform as vectors under rotations. Like the usual spherical harmonics, multipole vectors form an irreducible representation of the proper rotation group SO(3). However, they are related to the familiar spherical harmonic coefficients, a_lm, in a nonlinear way, and are therefore sensitive to different aspects of the CMB anisotropy. Nevertheless, it is straightforward to determine the multipole vectors for a given CMB map and we present an algorithm to compute them. Using the WMAP full-sky maps, we perform several tests of the hypothesis that the CMB anisotropy is statistically isotropic and Gaussian random. We find that the result from comparing the oriented area of planes defined by these vectors between multipole pairs 2 <= l1 != l2 <= 8 is inconsistent with the isotropic Gaussian hypothesis at the 99.4% level for the ILC map and at the 98.9% level for the cleaned map of Tegmark et al. A particular correlation is suggested between the l=3 and l=8 multipoles, as well as several other pairs. This effect is entirely different from the now familiar planarity and alignment of the quadrupole and octupole: while the aforementioned is fairly unlikely, the multipole vectors indicate correlations not expected in Gaussian random skies that make them unusually likely. The result persists after accounting for pixel noise and after assuming a residual 10% dust contamination in the cleaned WMAP map. While the definitive analysis of these results will require more work, we hope that multipole vectors will become a valuable tool for various cosmological tests, in particular those of cosmic isotropy.

[ascl:1704.014] Multipoles: Potential gain for binary lens estimation

Multipoles, written in Python, calculates the quadrupole and hexadecapole approximations of the finite-source magnification, each evaluated as a function of Wk, rho, and Gamma. The code is efficient and faster than previously available methods, and could be generalized for use on large portions of the light curves.

[ascl:1402.006] Munipack: General astronomical image processing software

Munipack provides easy-to-use tools for astronomical astrometry and photometry, access to the Virtual Observatory, FITS file operations, and a simple user interface along with a powerful processing engine. Its many features include a FITS image viewer that allows basic (astronomical) operations with frames, an advanced image processor supporting an infinite dynamic range and advanced color management, and astrometric calibration of images. The astrometry module uses robust statistical estimators and algorithms. The photometry module provides classical detection of stars and implements aperture photometry calibrated on the basis of photon statistics, allowing automatic detection and aperture photometry of stars; calibration on absolute fluxes is possible. The software also provides a standard way to correct for bias, dark, and flat-field frames, among many other features.

[ascl:1605.007] MUSCLE: MUltiscale Spherical-ColLapse Evolution

MUSCLE (MUltiscale Spherical ColLapse Evolution) produces low-redshift approximate N-body realizations accurate to few-Megaparsec scales. It applies a spherical-collapse prescription on multiple Gaussian-smoothed scales. It achieves higher accuracy than perturbative schemes (Zel'dovich and second-order Lagrangian perturbation theory - 2LPT), and by including the void-in-cloud process (voids in large-scale collapsing regions), solves problems with a single-scale spherical-collapse scheme.

[ascl:1610.004] MUSE-DRP: MUSE Data Reduction Pipeline

The MUSE pipeline turns the complex raw data of the MUSE integral field spectrograph into a ready-to-use datacube for scientific analysis.

[ascl:1311.011] MUSIC: MUlti-Scale Initial Conditions

MUSIC generates multi-scale initial conditions with multiple levels of refinements for cosmological ‘zoom-in’ simulations. The code uses an adaptive convolution of Gaussian white noise with a real-space transfer function kernel together with an adaptive multi-grid Poisson solver to generate displacements and velocities following first- (1LPT) or second-order Lagrangian perturbation theory (2LPT). MUSIC achieves rms relative errors of the order of 10^-4 for displacements and velocities in the refinement region and thus improves in terms of errors by about two orders of magnitude over previous approaches. In addition, errors are localized at coarse-fine boundaries and do not suffer from Fourier space-induced interference ringing.
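
The basic building block, convolving Gaussian white noise with a transfer-function kernel to obtain a correlated density field, can be illustrated in Fourier space on a single uniform grid; MUSIC itself performs this convolution adaptively in real space with multi-grid corrections, so the sketch below (with a toy power-law spectrum and normalization glossed over) only conveys the idea.

    import numpy as np

    def gaussian_field(n=64, boxsize=100.0, n_spec=-2.0, seed=0):
        """White noise convolved with sqrt(P(k)) in Fourier space (toy power law P(k) = k^n_spec)."""
        rng = np.random.default_rng(seed)
        white = rng.normal(size=(n, n, n))
        k1d = 2 * np.pi * np.fft.fftfreq(n, d=boxsize / n)
        kx, ky, kz = np.meshgrid(k1d, k1d, k1d, indexing="ij")
        kmag = np.sqrt(kx**2 + ky**2 + kz**2)
        kmag[0, 0, 0] = np.inf                      # leave no power in the k = 0 (mean) mode
        delta_k = np.fft.fftn(white) * np.sqrt(kmag ** n_spec)
        return np.fft.ifftn(delta_k).real

    field = gaussian_field()
    print("field rms:", round(float(field.std()), 3))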

[ascl:1203.009] MYRIAD: N-body code for simulations of star clusters

MYRIAD is a C++ code for collisional N-body simulations of star clusters. The code uses the Hermite fourth-order scheme with block time steps for advancing the particles in time, while the forces and neighboring particles are computed using the GRAPE-6 board. Special treatment is used for close encounters and for binary and multiple sub-systems that either form dynamically or exist in the initial configuration. The structure of the code is modular and allows the appropriate treatment of additional physical phenomena, such as stellar and binary evolution, stellar collisions, and the evolution of close black-hole binaries. Moreover, it can easily be modified so that the part of the code that uses GRAPE-6 is replaced by another module that uses other accelerating hardware, such as Graphics Processing Units (GPUs). An appropriate choice of the free parameters gives good accuracy and speed for simulations of star clusters up to and beyond core collapse. The accuracy of the code becomes comparable to, or even better than, that of existing codes when many close binary systems are created dynamically in a simulation; this is due to the high accuracy of the method used for close binary and multiple sub-systems. The code can be used for evolving star clusters containing equal-mass stars or star clusters with an initial mass function (IMF) containing an intermediate mass black hole (IMBH) at the center and/or a fraction of primordial binaries, which are systems of particular astrophysical interest.
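
As a rough illustration of the Hermite fourth-order predictor-corrector at the heart of such codes (this is not MYRIAD itself; it omits block time steps, GRAPE-6/GPU acceleration, and the special treatment of close encounters), a shared-time-step version in Python might look like:

    import numpy as np

    def acc_jerk(x, v, m):
        """Direct-summation acceleration and jerk (G = 1, no softening)."""
        a, jrk = np.zeros_like(x), np.zeros_like(x)
        for i in range(len(m)):
            dx = np.delete(x, i, axis=0) - x[i]   # r_j - r_i for all j != i
            dv = np.delete(v, i, axis=0) - v[i]
            mj = np.delete(m, i)
            r2 = (dx**2).sum(axis=1)
            w = mj / r2**1.5
            rv = (dx * dv).sum(axis=1) / r2
            a[i] = (w[:, None] * dx).sum(axis=0)
            jrk[i] = (w[:, None] * (dv - 3.0 * rv[:, None] * dx)).sum(axis=0)
        return a, jrk

    def hermite_step(x, v, m, dt):
        """One fourth-order Hermite predictor-corrector step (shared time step)."""
        a0, j0 = acc_jerk(x, v, m)
        xp = x + v*dt + a0*dt**2/2 + j0*dt**3/6          # predict
        vp = v + a0*dt + j0*dt**2/2
        a1, j1 = acc_jerk(xp, vp, m)                     # re-evaluate force and jerk
        vn = v + (a0 + a1)*dt/2 + (j0 - j1)*dt**2/12     # correct
        xn = x + (v + vn)*dt/2 + (a0 - a1)*dt**2/12
        return xn, vn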

[ascl:1102.001] N-MODY: A Code for Collisionless N-body Simulations in Modified Newtonian Dynamics

N-MODY is a parallel particle-mesh code for collisionless N-body simulations in modified Newtonian dynamics (MOND). N-MODY is based on a numerical potential solver in spherical coordinates that solves the non-linear MOND field equation, and is ideally suited to simulating isolated stellar systems. N-MODY can also be used to compute the MOND potential of arbitrary static density distributions. A few applications of N-MODY indicate that some astrophysically relevant dynamical processes are profoundly different in MOND than in Newtonian gravity with dark matter.

[ascl:1411.014] NAFE: Noise Adaptive Fuzzy Equalization

NAFE (Noise Adaptive Fuzzy Equalization) is an image processing method allowing for visualization of fine structures in SDO AIA high dynamic range images. It produces artifact-free images and gives significantly better results than methods based on convolution or Fourier transform.

[ascl:1409.009] Nahoon: Time-dependent gas-phase chemical model

Nahoon is a gas-phase chemical model that computes the chemical evolution within a 1D temperature and density structure. It uses chemical networks downloaded from the KInetic Database for Astrochemistry (KIDA), but the model can be adapted to any network. The program is written in Fortran 90 and uses the DLSODES (double precision) solver from the ODEPACK package to solve the coupled stiff differential equations. The solver computes the chemical evolution of gas-phase species at a fixed temperature and density, and can be used in one dimension (1D) if a grid of temperature, density, and visual extinction is provided. Grains, both neutral and negatively charged, and electrons are treated as chemical species, and their concentrations are computed at the same time as those of the other species. Nahoon contains a test that checks the temperature range over which the rate coefficients are valid and avoids extrapolation outside this range, as well as a test that flags duplicated chemical reactions defined over complementary temperature ranges.
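
The underlying numerical task, integrating a stiff set of kinetic equations with a backward-differentiation solver, can be sketched as follows; this is not Nahoon (which uses DLSODES in Fortran 90), and the two-reaction toy network and rate coefficients are invented for the example rather than taken from KIDA.

    import numpy as np
    from scipy.integrate import solve_ivp

    k1, k2 = 1e-9, 1e-13          # illustrative rate coefficients (cm^3 s^-1, s^-1)

    def rhs(t, n):
        """Toy network: A + B -> C (rate k1) and C -> A + B (rate k2)."""
        nA, nB, nC = n
        form, dest = k1 * nA * nB, k2 * nC
        return [-form + dest, -form + dest, form - dest]

    n0 = [1e4, 1e4, 0.0]          # initial densities (cm^-3)
    t_end = 1e6 * 3.15e7          # 1 Myr in seconds
    sol = solve_ivp(rhs, (1.0, t_end), n0, method="BDF", rtol=1e-8, atol=1e-20)
    print(sol.y[:, -1])           # abundances at the final time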

[ascl:1102.006] NBODY Codes: Numerical Simulations of Many-body (N-body) Gravitational Interactions

I review the development of direct N-body codes at Cambridge over nearly 40 years, highlighting the main stepping stones. The first code (NBODY1) was based on the simple concepts of a force polynomial combined with individual time steps, where numerical problems due to close encounters were avoided by a softened potential. Fortuitously, the elegant Kustaanheimo-Stiefel two-body regularization soon permitted small star clusters to be studied (NBODY3). Subsequent extensions to unperturbed three-body and four-body regularization proved beneficial in dealing with multiple interactions. Investigations of larger systems became possible with the Ahmad-Cohen neighbor scheme which was used more than 20 years ago for expanding universe models of 4000 galaxies (NBODY2). Combining the neighbor scheme with the regularization procedures enabled more realistic star clusters to be considered (NBODY5). After a period of simulations with no apparent technical progress, chain regularization replaced the treatment of compact subsystems (NBODY3, NBODY5). More recently, the Hermite integration method provided a major advance and has been implemented on the special-purpose HARP computers (NBODY4) together with an alternative version for workstations and supercomputers (NBODY6). These codes also include a variety of algorithms for stellar evolution based on fast lookup functions. The treatment of primordial binaries contains efficient procedures for chaotic two-body motion as well as tidal circularization, and special attention is paid to hierarchical systems and their stability. This family of N-body codes constitutes a powerful tool for dynamical simulations which is freely available to the astronomical community, and the massive effort owes much to collaborators.

[ascl:1502.010] nbody6tt: Tidal tensors in N-body simulations

nbody6tt, based on Aarseth's nbody6 (ascl:1102.006) code, includes the treatment of complex galactic tides in a direct N-body simulation of a star cluster through the use of tidal tensors (tt), and offers two complementary methods. The first allows consideration of any kind of galaxy and orbit, thus offering versatility; however, because it relies on the tidal approximation (linearization of the tidal force), it cannot be used to study tidal debris. The second method is not limited in this way and does not require a galaxy simulation: the user defines a numerical function that takes position and time as arguments and returns the galactic potential. The space and time derivatives of the potential are used to (i) integrate the motion of the cluster on its orbit in the galaxy (starting from user-defined initial position and velocity vectors), and (ii) compute the tidal acceleration on the stars.
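
The central quantity in this approach, the tidal tensor T_ij = -d^2(Phi)/dx_i dx_j evaluated along the cluster orbit, can be obtained from any user-supplied potential by finite differences; a minimal sketch (not nbody6tt's actual interface, with an illustrative logarithmic potential standing in for the user's function) is:

    import numpy as np

    def phi(pos, t, v0=220.0, rc=1.0):
        """Placeholder galactic potential Phi(x, t); here a static logarithmic one."""
        return 0.5 * v0**2 * np.log(rc**2 + (pos**2).sum())

    def tidal_tensor(pos, t, h=1e-3):
        """T_ij = -d^2 Phi / dx_i dx_j from central finite differences."""
        T = np.zeros((3, 3))
        for i in range(3):
            for j in range(3):
                ei = np.zeros(3); ei[i] = h
                ej = np.zeros(3); ej[j] = h
                T[i, j] = -(phi(pos + ei + ej, t) - phi(pos + ei - ej, t)
                            - phi(pos - ei + ej, t) + phi(pos - ei - ej, t)) / (4*h**2)
        return T

    # In the linearized (tidal-approximation) regime, a star offset by dx from the
    # cluster centre feels a tidal acceleration of approximately T @ dx.
    T = tidal_tensor(np.array([8.0, 0.0, 0.0]), t=0.0)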

[ascl:1010.019] NBSymple: A Double Parallel, Symplectic N-body Code Running on Graphic Processing Units

NBSymple is a numerical code that integrates the equations of motion of N particles interacting via Newtonian gravitation and moving in a smooth external galactic field. The force on every particle is evaluated by direct summation of the contributions of all the other particles in the system, avoiding truncation error. The time integration is done with second-order and sixth-order symplectic schemes. NBSymple has been parallelized twice: the all-pair force evaluation is made as fast as possible on high-performance NVIDIA TESLA C1060 Graphics Processing Units by means of the Compute Unified Device Architecture (CUDA), while the O(N) computations are distributed over various CPUs by means of the OpenMP Application Program Interface. The code works in either single or double precision floating-point arithmetic. Single precision makes the best use of the GPU's performance but, of course, limits the precision of the simulation in some critical situations; a good compromise is a software reconstruction of double precision for those variables that are most critical for the overall precision of the code.
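
The second-order symplectic scheme mentioned above is the familiar kick-drift-kick leapfrog; a serial sketch (no CUDA or OpenMP, with a toy external field, and not taken from the NBSymple source) is:

    import numpy as np

    def internal_acc(x, m, eps2=1e-8):
        """Direct-summation self-gravity (G = 1), softened to avoid divergences."""
        a = np.zeros_like(x)
        for i in range(len(m)):
            dx = x - x[i]
            r2 = (dx**2).sum(axis=1) + eps2
            w = m / r2**1.5
            w[i] = 0.0                      # exclude self-interaction
            a[i] = (w[:, None] * dx).sum(axis=0)
        return a

    def external_acc(x, v0=1.0, rc=1.0):
        """Smooth external galactic field; here a toy logarithmic potential."""
        r2 = (x**2).sum(axis=1) + rc**2
        return -v0**2 * x / r2[:, None]

    def leapfrog_step(x, v, m, dt):
        """Second-order symplectic kick-drift-kick step."""
        v = v + 0.5 * dt * (internal_acc(x, m) + external_acc(x))
        x = x + dt * v
        v = v + 0.5 * dt * (internal_acc(x, m) + external_acc(x))
        return x, v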

[ascl:1411.023] NDF: Extensible N-dimensional Data Format Library

The Extensible N-Dimensional Data Format (NDF) stores bulk data in the form of N-dimensional arrays of numbers. It is typically used for storing spectra, images and similar datasets with higher dimensionality. The NDF format is based on the Hierarchical Data System (HDS) and is extensible; not only does it provide a comprehensive set of standard ancillary items to describe the data, it can also be extended indefinitely to handle additional user-defined information of any type. The NDF library is used to read and write files in the NDF format. It is distributed with the Starlink software (ascl:1110.012).

[ascl:1101.002] NDSPMHD Smoothed Particle Magnetohydrodynamics Code

This paper presents an overview and introduction to Smoothed Particle Hydrodynamics and Magnetohydrodynamics in theory and in practice. Firstly, we give a basic grounding in the fundamentals of SPH, showing how the equations of motion and energy can be self-consistently derived from the density estimate. We then show how to interpret these equations using the basic SPH interpolation formulae and highlight the subtle difference in approach between SPH and other particle methods. In doing so, we also critique several `urban myths' regarding SPH, in particular the idea that one can simply increase the `neighbour number' more slowly than the total number of particles in order to obtain convergence. We also discuss the origin of numerical instabilities such as the pairing and tensile instabilities. Finally, we give practical advice on how to resolve three of the main issues with SPMHD: removing the tensile instability, formulating dissipative terms for MHD shocks and enforcing the divergence constraint on the particles, and we give the current status of developments in this area. Accompanying the paper is the first public release of the NDSPMHD SPH code, a 1, 2 and 3 dimensional code designed as a testbed for SPH/SPMHD algorithms that can be used to test many of the ideas and used to run all of the numerical examples contained in the paper.
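
The density estimate from which the paper derives the equations of motion is the usual SPH kernel summation; a minimal 3D sketch with the standard cubic spline kernel (illustrative only, not the NDSPMHD source) is:

    import numpy as np

    def w_cubic(q, h):
        """M4 cubic spline kernel in 3D, normalization 1/(pi h^3)."""
        sigma = 1.0 / (np.pi * h**3)
        return sigma * np.where(q < 1.0, 1.0 - 1.5*q**2 + 0.75*q**3,
                       np.where(q < 2.0, 0.25 * (2.0 - q)**3, 0.0))

    def sph_density(x, m, h):
        """rho_i = sum_j m_j W(|x_i - x_j|, h)."""
        rho = np.zeros(len(m))
        for i in range(len(m)):
            r = np.linalg.norm(x - x[i], axis=1)
            rho[i] = np.sum(m * w_cubic(r / h, h))
        return rho

    x = np.random.default_rng(0).normal(size=(100, 3))   # toy particle cloud
    rho = sph_density(x, m=np.full(100, 0.01), h=0.5)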

[ascl:1411.013] NEAT: Nebular Empirical Analysis Tool

NEAT is a fully automated code that carries out a complete analysis of lists of emission lines to estimate the amount of interstellar extinction, calculate representative temperatures and densities, compute ionic abundances from both collisionally excited lines and recombination lines, and finally estimate total elemental abundances using an ionization correction scheme. NEAT uses a Monte Carlo technique to robustly propagate uncertainties from line flux measurements through to the derived abundances.
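
The Monte Carlo propagation works by repeatedly perturbing the measured line fluxes within their uncertainties and re-deriving the quantities of interest; a toy sketch (an arbitrary line ratio standing in for NEAT's full abundance calculation, with invented flux values) is:

    import numpy as np

    rng = np.random.default_rng(42)
    f5007, e5007 = 100.0, 5.0        # illustrative flux and 1-sigma uncertainty
    f4363, e4363 = 1.2, 0.3

    # Draw many realizations of the two fluxes and collect the derived ratio
    trials = rng.normal(f5007, e5007, 10000) / rng.normal(f4363, e4363, 10000)
    lo, med, hi = np.percentile(trials, [16, 50, 84])
    print(f"ratio = {med:.1f} (+{hi - med:.1f} / -{med - lo:.1f})")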

[ascl:1608.019] NEBULAR: Spectrum synthesis for mixed hydrogen-helium gas in ionization equilibrium

NEBULAR synthesizes the spectrum of a mixed hydrogen-helium gas in collisional ionization equilibrium. It is not a spectral fitting code, but it can be used to resample a model spectrum onto the wavelength grid of a real observation. It supports a wide range of temperatures and densities, and includes free-free, free-bound, two-photon, and line emission from HI, HeI, and HeII. The code will return either the composite model spectrum or, if desired, the unrescaled atomic emission coefficients. It is written in C++ and depends on the GNU Scientific Library (GSL).

[ascl:1010.004] Needatool: A Needlet Analysis Tool for Cosmological Data Processing

NeedATool (Needlet Analysis Tool) performs data analysis based on needlets, a wavelet construction that is particularly well suited to the analysis of fields defined on a sphere. Needlets have been applied successfully to the treatment of astrophysical and cosmological observations, particularly to the analysis of cosmic microwave background (CMB) data. Wavelets have emerged as a useful tool for CMB data analysis because they combine most of the advantages of pixel space, where it is easier to deal with partial sky coverage and experimental noise, with those of the harmonic domain, in which beam treatment and comparison with theoretical predictions are more effective, thanks in large part to the sharp localization of wavelets in both domains.

[ascl:1010.051] NEMO: A Stellar Dynamics Toolbox

NEMO is an extendible Stellar Dynamics Toolbox that follows an open-source software model. It has various programs to create, integrate, analyze, and visualize N-body and SPH-like systems, following the pipe-and-filter architecture. In addition, there are various tools to operate on images, tables, and orbits, including FITS files to export/import to/from other astronomical data reduction packages. A large and growing fraction of NEMO has been contributed by a growing list of authors. The source code consists of a little over 4000 files and a little under 1,000,000 lines of code and documentation, mostly C, with some C++ and Fortran. NEMO development was started in 1986 in Princeton (USA) by Barnes, Hut and Teuben. See also ZENO (ascl:1102.027) for the version that Barnes maintains.

[ascl:1307.017] NEST: Noble Element Simulation Technique

NEST (Noble Element Simulation Technique) offers comprehensive, accurate, and precise simulation of the excitation, ionization, and corresponding scintillation and electroluminescence processes in liquid noble elements, useful for direct dark matter detectors, double beta decay searches, PET scans, and general radiation detection technology. Written in C++, NEST is an add-on module for the Geant4 simulation package that incorporates more detailed physics than is currently available into the simulation of scintillation. NEST is of particular use for low-energy nuclear recoils. All available liquid xenon data on nuclear recoils and electron recoils to date have been taken into consideration in arriving at the current models. NEST also handles the magnitude of the light and charge yields of nuclear recoils, including their electric field dependence, thereby shedding light on the possibility of detection or exclusion of a low-mass dark matter WIMP by liquid xenon detectors.

[ascl:1010.085] Network Tools for Astronomical Data Retrieval

These network tools address the first step in any science project: the acquisition and understanding of the relevant data. They range from simple data-transfer methods to more complex browser-emulating scripts. When integrated with a defined sample or catalog, the scripts provide seamless techniques to retrieve and store data of varying types, and can be used to leapfrog from website to website to acquire multi-wavelength datasets. The project demonstrates the capability to use multiple data websites in conjunction to perform the type of calculations once reserved for on-site datasets.

[ascl:1502.003] NGenIC: Cosmological structure initial conditions

NGenIC is an initial conditions code for cosmological structure formation that can be used to set up random N-body realizations of Gaussian random fields with a prescribed power spectrum in a homogeneously sampled periodic box. The code creates cosmological initial conditions based on the Zeldovich approximation, in a format directly compatible with GADGET or AREPO.

[ascl:1508.008] NGMIX: Gaussian mixture models for 2D images

NGMIX implements Gaussian mixture models for 2D images. Both the PSF profile and the galaxy are modeled using mixtures of Gaussians. Convolutions are thus performed analytically, resulting in fast model generation as compared to methods that perform the convolution in Fourier space. For the galaxy model, NGMIX supports exponential disks and de Vaucouleurs and Sérsic profiles; these are implemented approximately as a sum of Gaussians using the fits from Hogg & Lang (2013). Additionally, any number of Gaussians can be fit, either completely free or constrained to be cocentric and co-elliptical.
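
The reason convolutions can be done analytically is that the convolution of two Gaussians is another Gaussian whose mean and covariance are the sums of the inputs; for mixtures, every pair of components is convolved. A small sketch of that idea (not the ngmix API) follows:

    import numpy as np

    # A 2D mixture as a list of (flux, mean, covariance) tuples; values are illustrative
    galaxy = [(0.6, np.zeros(2), np.diag([1.0, 0.5])),
              (0.4, np.zeros(2), np.diag([3.0, 2.0]))]
    psf    = [(1.0, np.zeros(2), np.diag([0.8, 0.8]))]

    def convolve(gm1, gm2):
        """Pairwise analytic convolution: fluxes multiply, means and covariances add."""
        return [(f1*f2, m1 + m2, c1 + c2)
                for f1, m1, c1 in gm1 for f2, m2, c2 in gm2]

    def evaluate(gm, xy):
        """Evaluate the mixture surface brightness at points xy of shape (n, 2)."""
        val = np.zeros(len(xy))
        for f, mu, cov in gm:
            d = xy - mu
            inv, det = np.linalg.inv(cov), np.linalg.det(cov)
            val += f / (2*np.pi*np.sqrt(det)) * np.exp(-0.5*np.einsum('ni,ij,nj->n', d, inv, d))
        return val

    model = convolve(galaxy, psf)   # PSF-convolved galaxy model, still a Gaussian mixture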

[ascl:1608.016] NICIL: Non-Ideal magnetohydrodynamics Coefficients and Ionisation Library

NICIL (Non-Ideal magnetohydrodynamics Coefficients and Ionisation Library) calculates the ionization values and the coefficients of the non-ideal magnetohydrodynamics terms of Ohmic resistivity, the Hall effect, and ambipolar diffusion. Written as a standalone Fortran90 module that can be implemented in existing codes, NICIL is fully parameterizable, allowing the user to choose which processes to include and decide the values of the free parameters. The module includes both cosmic ray and thermal ionization; the former includes two ion species and three species of dust grains (positively charged, negatively charged and neutral), and the latter includes five elements which can be doubly ionized.

[ascl:1508.002] NICOLE: NLTE Stokes Synthesis/Inversion Code

NICOLE, written in Fortran 90, seeks the model atmosphere that provides the best fit (in a least-squares sense) to the Stokes profiles of an arbitrary number of simultaneously observed spectral lines from solar/stellar atmospheres. The inversion core used for the development of NICOLE is the LORIEN engine (the Lovely Reusable Inversion ENgine), which combines the SVD technique with the Levenberg-Marquardt minimization method to solve the inverse problem.

[ascl:1302.013] NIFTY: A versatile Python library for signal inference

NIFTY (Numerical Information Field TheorY) is a versatile library that enables the development of signal inference algorithms that operate regardless of the underlying spatial grid and its resolution. Its object-oriented framework is written in Python, although it accesses libraries written in Cython, C++, and C for efficiency. NIFTY offers a toolkit that abstracts discretized representations of continuous spaces, fields in these spaces, and operators acting on fields into classes; the correct normalization of operations on fields is thereby taken care of automatically. This allows for an abstract formulation and programming of inference algorithms, including those derived within information field theory, so NIFTY permits rapid prototyping of algorithms in 1D and then application of the developed code to higher-dimensional settings of real-world problems. NIFTY operates on point sets, n-dimensional regular grids, spherical spaces, their harmonic counterparts, and product spaces constructed as combinations of those.

[ascl:1106.016] Nightfall: Animated Views of Eclipsing Binary Stars

Nightfall is an astronomy application for fun, education, and science. It can produce animated views of eclipsing binary stars, calculate synthetic lightcurves and radial velocity curves, and eventually determine the best-fit model for a given set of observational data of an eclipsing binary star system.

Nightfall comes with a user guide, and a set of observational data for several eclipsing binary star systems.

[ascl:1501.002] NIGO: Numerical Integrator of Galactic Orbits

NIGO (Numerical Integrator of Galactic Orbits) predicts the orbital evolution of test particles moving within a fully-analytical gravitational potential generated by a multi-component galaxy. The code can simulate the orbits of stars in elliptical and disc galaxies, including non-axisymmetric components represented by a spiral pattern and/or rotating bar(s).
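
The basic operation, integrating test-particle orbits in a fully analytical potential, can be sketched with a standard ODE solver; the single logarithmic-halo potential below is only a placeholder for NIGO's multi-component (disc, bar, spiral) models, and the numbers are illustrative.

    import numpy as np
    from scipy.integrate import solve_ivp

    V0, RC = 220.0, 0.5                         # km/s and kpc, illustrative values

    def deriv(t, w):
        """w = (x, y, z, vx, vy, vz); acceleration from a logarithmic potential."""
        pos, vel = w[:3], w[3:]
        acc = -V0**2 * pos / (RC**2 + (pos**2).sum())
        return np.concatenate([vel, acc])

    w0 = [8.0, 0.0, 0.02, 0.0, 200.0, 5.0]      # kpc and km/s
    # One time unit is kpc/(km/s) ~ 0.98 Gyr, so this integrates roughly 3 Gyr
    sol = solve_ivp(deriv, (0.0, 3.0), w0, rtol=1e-10, atol=1e-10, dense_output=True)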

[ascl:1101.006] NIRVANA: A Numerical Tool for Astrophysical Gas Dynamics

The NIRVANA code is capable of simulating multi-scale self-gravitational magnetohydrodynamics problems in three space dimensions employing the technique of adaptive mesh refinement. The building blocks of NIRVANA are (i) a fully conservative, divergence-free Godunov-type central scheme for the solution of the equations of magnetohydrodynamics; (ii) a block-structured mesh refinement algorithm which automatically adds and removes elementary grid blocks whenever necessary to achieve adequate resolution; and (iii) an adaptive mesh Poisson solver based on the multigrid philosophy which incorporates the so-called elliptic matching condition to keep the gradient of the gravitational potential continuous at fine/coarse mesh interfaces.

[ascl:1305.013] Non-Gaussian Realisations

Non-Gaussian Realisations provides code, based on a spectral distortion/quantile transformation, that generates a realization of a field on a cubic grid with a specified probability distribution function and a specified power spectrum.
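
The quantile-transformation step at the heart of the method can be sketched as follows: generate a Gaussian field with some power spectrum, then rank-order remap its values onto the target PDF. (The remapping alone distorts the power spectrum; the actual code's prescription ensures that both the PDF and the power spectrum come out as specified, which this toy example does not attempt.)

    import numpy as np

    n, rng = 64, np.random.default_rng(1)

    # Gaussian field on a cubic grid with an illustrative P(k) ~ k^-2
    kx, ky, kz = np.meshgrid(*[np.fft.fftfreq(n)]*3, indexing='ij')
    k = np.sqrt(kx**2 + ky**2 + kz**2)
    amp = np.where(k > 0, k**-1.0, 0.0)                  # sqrt of P(k)
    noise = rng.normal(size=(n, n, n)) + 1j*rng.normal(size=(n, n, n))
    gauss = np.fft.ifftn(amp * noise).real

    # Quantile (rank-order) remapping onto a target one-point PDF, here lognormal
    target = np.sort(rng.lognormal(0.0, 1.0, gauss.size))
    order = np.argsort(gauss, axis=None)
    field = np.empty(gauss.size)
    field[order] = target
    field = field.reshape(gauss.shape)                   # non-Gaussian realization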

[ascl:1011.016] Non-LTE Models and Theoretical Spectra of Accretion Disks in Active Galactic Nuclei. III. Integrated Spectra for Hydrogen-Helium Disks

We have constructed a grid of non-LTE disk models for a wide range of black hole mass and mass accretion rate, for several values of the viscosity parameter alpha, and for two extreme values of the black hole spin: the maximum-rotation Kerr black hole, and the Schwarzschild (non-rotating) black hole. Our procedure calculates self-consistently the vertical structure of all disk annuli together with the radiation field, without any approximations imposed on the optical thickness of the disk, and without any ad hoc approximations to the behavior of the radiation intensity. The total spectrum of a disk is computed by summing the spectra of the individual annuli, taking into account the general relativistic transfer function. The grid covers nine values of the black hole mass between M = 1/8 and 32 billion solar masses with a two-fold increase of mass for each subsequent value, and eleven values of the mass accretion rate, each a power of 2 times 1 solar mass/year. The highest value of the accretion rate corresponds to 0.3 Eddington. We show the vertical structure of individual annuli within the set of accretion disk models, along with their local emergent flux, and discuss the internal physical self-consistency of the models. We then present the full disk-integrated spectra, and discuss a number of observationally interesting properties of the models, such as optical/ultraviolet colors, the behavior of the hydrogen Lyman limit region, polarization, and number of ionizing photons. Our calculations are far from definitive in terms of the input physics, but generally we find that our models exhibit rather red optical/UV colors. Flux discontinuities in the region of the hydrogen Lyman limit are only present in cool, low luminosity models, while hotter models exhibit blueshifted changes in spectral slope.

[ascl:1202.003] NOVAS: Naval Observatory Vector Astrometry Software

NOVAS is an integrated package of subroutines and functions for computing various commonly needed quantities in positional astronomy. The package can provide, in one or two subroutine or function calls, the instantaneous coordinates of any star or planet in a variety of coordinate systems. At a lower level, NOVAS also supplies astrometric utility transformations, such as those for precession, nutation, aberration, parallax, and the gravitational deflection of light. The computations are accurate to better than one milliarcsecond. The NOVAS package is an easy-to-use facility that can be incorporated into data reduction programs, telescope control systems, and simulations. The U.S. parts of The Astronomical Almanac are prepared using NOVAS. Three editions of NOVAS are available: Fortran, C, and Python.

[ascl:1705.014] NPTFit: Non-Poissonian Template Fitting

NPTFit is a specialized Python/Cython package that implements Non-Poissonian Template Fitting (NPTF), originally developed for characterizing populations of unresolved point sources. It offers fast evaluation of likelihoods for NPTF analyses and has an easy-to-use interface for performing non-Poissonian (as well as standard Poissonian) template fits using MultiNest (ascl:1109.006) or other inference tools. It allows inclusion of an arbitrary number of point source templates, with an arbitrary number of degrees of freedom in the modeled flux distribution, and has modules for analyzing and plotting the results of an NPTF.

[ascl:1609.009] NSCool: Neutron star cooling code

NSCool is a 1D (i.e., spherically symmetric) neutron star cooling code written in Fortran 77. The package also contains a series of EOSs (equations of state) with which to build stars, a series of pre-built stars, and a TOV (Tolman-Oppenheimer-Volkoff) integrator to build stars from an EOS. It can also handle “strange stars” that have a huge density discontinuity between the quark matter and the covering thin baryonic crust. NSCool solves the heat transport and energy balance equations in full general relativity, resulting in a time sequence of temperature profiles (and, in particular, a Teff - age curve). Several heating processes are included, and more can easily be incorporated. In particular, it can evolve a star undergoing accretion, with the resulting deep crustal heating, under a steady or time-variable accretion rate. NSCool is robust, very fast, and highly modular, making it easy to add new subroutines for new processes.

[ascl:1602.008] NuCraft: Oscillation probabilities for atmospheric neutrinos calculator

NuCraft calculates oscillation probabilities for atmospheric neutrinos, taking into account matter effects and the Earth's atmosphere, and supports an arbitrary number of sterile neutrino flavors with easily configurable continuous Earth models. Modeling the Earth continuously, instead of with the often-used approximation of four constant-density layers, and accounting for the smearing of baseline lengths caused by the variable neutrino production heights in Earth's atmosphere each lead to deviations of 10% or more for conventional neutrinos between 1 and 10 GeV.
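
For orientation, the quantity NuCraft generalizes can be written down in the two-flavor vacuum limit (no matter effects, no sterile flavors, fixed baseline); the oscillation parameters below are illustrative, not NuCraft defaults.

    import numpy as np

    def p_mumu_vacuum(E_GeV, L_km, sin2_2theta=0.99, dm2_eV2=2.5e-3):
        """Two-flavor vacuum survival probability P(nu_mu -> nu_mu)."""
        return 1.0 - sin2_2theta * np.sin(1.267 * dm2_eV2 * L_km / E_GeV)**2

    # Upward-going atmospheric neutrino crossing roughly the Earth's diameter
    print(p_mumu_vacuum(E_GeV=25.0, L_km=12742.0))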
