Light Curves Classifier uses data mining and machine learning to retrieve and classify objects of interest. This task can be accomplished using attributes of light curves or any time series, such as shapes, histograms, or variograms, or using other available information about the inspected objects, such as color indices, temperatures, and abundances. After the features describing the objects to be searched are specified, the software trains on a given training sample, and can then be used for unsupervised clustering to visualize the natural separation of the sample. The package can also be used to automatically tune the parameters of the methods used (for example, the number of hidden neurons or the binning ratio).
Trained classifiers can be used to filter outputs from astronomical databases or data stored locally. The Light Curve Classifier can also be used simply to download light curves and all available information about queried stars. It can natively connect to OgleII, OgleIII, ASAS, CoRoT, Kepler, Catalina, and MACHO, and new connectors or descriptors can be implemented. In addition to direct usage of the package and its command-line UI, the program can be used through a web interface. Users can create jobs for "training" methods on given objects, querying databases, and filtering outputs with trained filters. Preimplemented descriptors, classifiers, and connectors can be selected with simple clicks, and their parameters can be tuned by giving ranges of values; all combinations are then calculated and the best one is used to create the filter. The natural separation of the data can be visualized by unsupervised clustering.
lcps searches for transit-like features (i.e., dips) in photometric data. Its main purpose is to restrict large sets of light curves to a number of files that show interesting behavior, such as drops in flux. While lcps is adaptable to any format of time series, its I/O module is designed specifically for photometry from the Kepler spacecraft. It extracts the pre-conditioned PDCSAP data from light curve files created by the standard Kepler pipeline, and can also handle csv-formatted ASCII files. lcps uses a sliding window technique to compare a section of a flux time series with its surroundings; a dip is detected if the flux within the window is lower than a threshold fraction of the surrounding fluxes.
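The sliding-window comparison at the heart of lcps is easy to sketch; the following minimal Python re-implementation of that logic is illustrative only (function and parameter names are assumptions, not lcps's actual API):

```python
import numpy as np

def find_dips(flux, window=50, threshold=0.99):
    """Flag window positions whose median flux falls below a threshold
    fraction of the median flux in the flanking regions (a sketch of the
    sliding-window idea, not lcps's actual interface)."""
    dips = []
    for i in range(window, len(flux) - 2 * window):
        inside = np.nanmedian(flux[i:i + window])
        outside = np.nanmedian(np.r_[flux[i - window:i],
                                     flux[i + window:i + 2 * window]])
        if inside < threshold * outside:
            dips.append(i)
    return dips
```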
lcsim creates artificial light curves using two algorithms. The first simulates Gaussian distributed light curves following a specific power spectral density (PSD) freely selectable by the user. The second algorithm simulates light curves following a specific PSD and matching a specific probability density function (PDF). The package provides methods to resample the simulated light curves and add "observational" noise. Furthermore, the package provides an interface to a SQLite3-based database to store and access the simulations.
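The first task, drawing a Gaussian time series whose power spectrum follows a chosen PSD, is commonly solved with the Timmer & König (1995) recipe: draw Gaussian Fourier amplitudes scaled by the PSD and invert the transform. A minimal sketch of that approach follows (illustrative only, not lcsim's actual interface):

```python
import numpy as np

def simulate_psd_lightcurve(psd, n, dt, rng=None):
    """Gaussian light curve of n points at sampling dt whose power
    spectrum follows `psd` (a callable of frequency), in the spirit of
    Timmer & Koenig (1995). A sketch, not lcsim's API."""
    rng = rng or np.random.default_rng()
    freqs = np.fft.rfftfreq(n, dt)[1:]               # positive frequencies
    amp = np.sqrt(0.5 * psd(freqs))                  # expected amplitude per bin
    spectrum = amp * (rng.normal(size=freqs.size)
                      + 1j * rng.normal(size=freqs.size))
    if n % 2 == 0:
        spectrum[-1] = spectrum[-1].real             # Nyquist bin must be real
    return np.fft.irfft(np.concatenate(([0.0], spectrum)), n)

# e.g., red noise (PSD ~ f^-2), 4096 points at 60 s cadence:
# flux = simulate_psd_lightcurve(lambda f: f**-2.0, 4096, 60.0)
```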
ld-exosim selects the optimal (i.e., the best estimator in a mean squared error sense) limb-darkening law for a given transiting exoplanet light curve and calculates the limb-darkening-induced biases on various exoplanet parameters. The limb-darkening laws covered are the linear, quadratic, logarithmic, square-root, and three-parameter laws.
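For reference, these laws take the following standard forms in one common notation, with μ the cosine of the angle between the line of sight and the normal to the stellar surface:

$$
\begin{aligned}
\frac{I(\mu)}{I(1)} &= 1 - u(1-\mu) && \text{(linear)}\\
\frac{I(\mu)}{I(1)} &= 1 - a(1-\mu) - b(1-\mu)^2 && \text{(quadratic)}\\
\frac{I(\mu)}{I(1)} &= 1 - c(1-\mu) - d(1-\sqrt{\mu}) && \text{(square-root)}\\
\frac{I(\mu)}{I(1)} &= 1 - e(1-\mu) - f\,\mu\ln\mu && \text{(logarithmic)}\\
\frac{I(\mu)}{I(1)} &= 1 - c_2(1-\mu) - c_3(1-\mu^{3/2}) - c_4(1-\mu^2) && \text{(three-parameter)}
\end{aligned}
$$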
LDC3 samples physically permissible limb darkening coefficients for the Sing et al. (2009) three-parameter law. It defines the physically permissible intensity profile as being everywhere-positive, monotonically decreasing from center to limb and having a curl at the limb. The approximate sampling method is analytic and thus very fast, reproducing physically permissible samples in 97.3% of random draws (high validity) and encompassing 94.4% of the physically permissible parameter volume (high completeness).
Least Asymmetry finds the center of a distribution of light in an image using the least asymmetry method; the code also contains center-of-light and Gaussian-fitting routines. All functions in Least Asymmetry are designed to take optional weights.
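The least asymmetry idea can be realized by grid-searching for the trial center that minimizes the radial variance of the flux; the Python sketch below is one such realization (the names and radial-binning details are assumptions, not Least Asymmetry's actual functions):

```python
import numpy as np

def asymmetry(img, cy, cx, rmax=5):
    """Sum of flux variances over annuli about a trial center; a
    radially symmetric source minimizes this quantity."""
    y, x = np.indices(img.shape)
    r = np.hypot(y - cy, x - cx)
    total = 0.0
    for rbin in range(1, rmax + 1):
        ring = img[(r >= rbin - 0.5) & (r < rbin + 0.5)]
        if ring.size > 1:
            total += ring.var() * ring.size
    return total

def least_asym_center(img, search=3):
    """Trial center (integer pixels) minimizing the asymmetry, searched
    around the brightest pixel."""
    cy0, cx0 = np.unravel_index(np.argmax(img), img.shape)
    trials = [(asymmetry(img, cy0 + dy, cx0 + dx), cy0 + dy, cx0 + dx)
              for dy in range(-search, search + 1)
              for dx in range(-search, search + 1)]
    _, cy, cx = min(trials)
    return cy, cx
```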
LECTOR is a Fortran 77 code that measures line strengths in one-dimensional ASCII spectra. The code returns the values of the Lick indices as well as those of Vazdekis & Arimoto 1999, Vazdekis et al. 2001, Rose 1994, Jones & Worthey 1995, and Cenarro et al. 2001. The code measures as many indices as desired, provided the limits of the two pseudocontinua (one on each side of the feature) and of the feature itself (i.e., a Lick-style index definition) are supplied. Lick-style indices can be expressed either as pseudo-equivalent widths or in magnitudes. If requested, the program provides index error estimates based on photon statistics.
LEFTfield forward models cosmological matter density fields and biased tracers of large-scale structure. Written in C++, the model is centered around classes encapsulating scalar, vector, and tensor grids. It includes the complete bias expansion at any order in perturbations and captures general expansion histories without relying on the EdS approximation; the latter is also implemented, however, and results in substantially smaller computational demands. LEFTfield includes a subset of the nonlinear higher-derivative terms in the bias expansion of general tracers.
The Python module legacystamps provides easy retrieval, both standalone and scripted, of FITS and JPEG cutouts from the DESI Legacy Imaging Surveys through URLs provided by the Legacy Survey viewer.
Legolas (Large Eigensystem Generator for One-dimensional pLASmas) is a finite element code for MHD spectroscopy of 1D Cartesian/cylindrical equilibria with flow that balance pressure gradients, enriched with various non-adiabatic effects. The code's capabilities range from full spectrum calculations to eigenfunctions of specific modes to full-on parametric studies of various equilibrium configurations in different geometries.
LEGWORK (LISA Evolution and Gravitational Wave ORbit Kit) is a simple package for gravitational wave calculations. It evolves binaries and computes signal-to-noise ratios for binary systems potentially observable with LISA; it also visualizes the results. LEGWORK can also compare different detector sensitivity curves, compute the horizon distance for a collection of sources, and track signal-to-noise evolution over time.
LeHaMoC simulates high-energy astrophysical sources. It simulates the behavior of relativistic pairs, protons interacting with magnetic fields, and photons in a spherical region. The package contains numerous physical processes, including synchrotron emission and self-absorption, inverse Compton scattering, photon-photon pair production, and adiabatic losses. It also includes proton-photon pion production, proton-photon (Bethe-Heitler) pair production, and proton-proton collisions. LeHaMoC can model expanding spherical sources with a variable magnetic field strength. In addition, three types of external radiation fields can be defined: grey body or black body, power-law, and tabulated.
LEMON is a differential-photometry pipeline, written in Python, that determines the changes in the brightness of astronomical objects over time and compiles their measurements into light curves. This code makes it possible to completely reduce thousands of FITS images of time series in a matter of only a few hours, requiring minimal user interaction.
Lemon solves radiative transfer (RT) processes that contain scattering. These processes are described by integro-differential equations with given initial or boundary conditions; Lemon solves these integro-differential equations by converting them into Fredholm integral equations of the second kind. The code then obtains the Neumann solution (a series consisting of infinitely many terms of multiple integrals) from the Fredholm integral equation, and uses the Monte Carlo (MC) method to evaluate these integrals. Lemon is written in Fortran; IDL programs are included for plotting the results.
The LensCNN (Convolutional Neural Network) identifies images containing gravitational lensing systems after being trained and tested on simulated images, recovering most systems that are identifiable by eye.
Lensed performs forward parametric modelling of strong lenses. Using a provided model, Lensed renders the expected image of the lensing event for a large number of parameter settings, thereby exploring the space of possible realizations of the observation. It compares the expectation to the observed image by calculating the likelihood that the observation was indeed produced by the assumed model, thus reconstructing the probability distribution over the parameter space of the model. Written in C, the code uses a massively parallel ray-tracing kernel to perform the necessary calculations on a graphics processing unit (GPU), making the precise rendering of the background lensed sources fast and allowing the simultaneous optimization of tens of parameters for the selected model.
LensEnt2 is a maximum entropy reconstructor of weak lensing mass maps. The method takes each galaxy shape as an independent estimator of the reduced shear field and incorporates an intrinsic smoothness, determined by Bayesian methods, into the reconstruction. The uncertainties from both the intrinsic distribution of galaxy shapes and galaxy shape estimation are carried through to the final mass reconstruction, and the mass within arbitrarily shaped apertures is calculated with corresponding uncertainties. The input is a galaxy ellipticity catalog; each measured galaxy shape is treated as a noisy tracer of the reduced shear field, which is inferred on a fine pixel grid assuming positivity and smoothness on scales of w arcsec, where w is an input parameter. The intrinsic correlation function (ICF) width w can be chosen by computing the evidence for it.
Lenser estimates weak gravitational lensing signals, particularly flexion, from real survey data or realistically simulated images. Lenser employs a hybrid of image moment analysis and an Analytic Image Modeling (AIM) analysis. In addition to extracting flexion measurements by fitting a (modified Sérsic) model to a single image of a galaxy, Lenser can do multi-band, multi-epoch fitting. In multi-band mode, Lenser fits a single model to multiple postage stamps, each representing an exposure of a single galaxy in a particular band.
LensingETC optimizes observing strategies for multi-filter imaging campaigns of galaxy-scale strong lensing systems. It uses the lens modelling software lenstronomy (ascl:1804.012) to simulate and model mock imaging data, forecasts the lens model parameter uncertainties, and optimizes observing strategies.
lensingGW simulates lensed gravitational waves in ground-based interferometers from arbitrary compact binaries and lens models. Its algorithm resolves strongly lensed images and microimages simultaneously, such as the images resulting from hundreds of microlenses embedded in galaxies and galaxy clusters. It is based on Lenstronomy (ascl:1804.012).
LensIt enables CMB lensing and CMB delensing using the flat-sky approximation. The package can find the maximum posterior estimate of CMB lensing deflection maps from temperature and/or polarization maps, and can perform Wiener filtering of masked CMB data, allowing for inhomogeneous noise and including lensing deflections, using a multigrid preconditioner. It contains fast and accurate simulation libraries for lensed CMB skies and standard quadratic estimator lensing reconstruction tools. LensIt also includes CMB internal delensing tools, including internal delensing bias calculation for temperature and/or polarization maps.
lensitbiases performs rFFT-based N1 lensing bias calculations and tests. It is tuned for TT, P-only, or MV (GMV)-like quadratic estimators. It performs rFFT-based N1 and N1 matrix calculations in ~O(ms) per lensing multipole for a Planck-like configuration, which allows on-the-fly evaluation of the bias. The calculation requires 5 rFFTs of moderate size per multipole L for N1 TT, 20 for PP, and 45 for MV or GMV. lensitbiases is not particularly efficient at low lensing L, since in that case one must use large boxes.
Given a model for the Galaxy, this program computes the microlensing rate in any direction. Program features include the ability to include the brightness of the lens and to compute the probability of lens detection at any level of lensing amplification. The program limits itself to lensing of single sources by single stars. The program is currently set up to accept input from the Galactic models of Bahcall and Soneira (1982, 1986).
Three files are needed for LENSKY: the Fortran file lensky.for and two input files, galmod.dsk (15 MB) and galmod.sph (22 MB). The zip file available below contains all three files. The program writes its output to the file lensky.out and is largely self-explanatory beyond that.
LensPerfect is a new approach to the mass map reconstruction of strong gravitational lenses. Conventional methods iterate over possible lens models which reproduce the observed multiple image positions well but not exactly; LensPerfect only produces solutions which fit all of the data exactly. Magnifications and shears of the multiple images can also be perfectly constrained to match observations.
Modelling of the weak lensing of the CMB will be crucial to obtain correct cosmological parameter constraints from forthcoming precision CMB anisotropy observations. The lensing affects the power spectrum as well as inducing non-Gaussianities. We discuss the simulation of full-sky CMB maps in the weak lensing approximation and describe a fast numerical code. The series expansion in the deflection angle cannot be used to simulate accurate CMB maps, so a pixel remapping must be used. For parameter estimation, accounting for the change in the power spectrum but assuming Gaussianity is sufficient to obtain accurate results up to Planck sensitivity using current tools; a fuller analysis may be required to obtain accurate error estimates and for more sensitive observations. We demonstrate a simple full-sky simulation and subsequent parameter estimation at Planck-like sensitivity.
LensPop simulates observations of the galaxy-galaxy strong lensing population in the Dark Energy Survey (DES), the Large Synoptic Survey Telescope (LSST), and Euclid surveys.
lenspyx creates curved-sky lensed CMB map simulations in Python; the software allows those familiar with healpy (ascl:2008.022) to easily build lensed CMB simulations. Parallelization is done with OpenMP, and the numerical cost is approximately that of a high-resolution harmonic transform. lenspyx provides two methods to build a simulation: one computes a deflected spin-0 HEALPix map from its alm and the deflection-field alm, and the other computes a deflected spin-weighted HEALPix map from its gradient and curl modes and the deflection-field alm. lenspyx can be used in conjunction with the Planck 2018 CMB lensing pipeline plancklens (ascl:2010.009) to reproduce the published maps and band-powers.
LensQuEst forecasts the signal-to-noise of CMB lensing estimators (standard, shear-only, magnification-only), generates mock maps, lenses them, and applies various lensing estimators to them. It can manipulate flat-sky maps in various ways, including FFTs, filtering, power spectrum estimation, Gaussian random field generation, and lensing of maps, and can evaluate the estimators on such maps.
We describe a procedure for modelling strong lensing galaxy clusters with parametric methods and for ranking models quantitatively using the Bayesian evidence. We use a publicly available Markov chain Monte Carlo (MCMC) sampler ('Bayesys'), allowing us to avoid local minima in the likelihood functions. To illustrate the power of the MCMC technique, we simulate three clusters of galaxies, each composed of a cluster-scale halo and a set of perturbing galaxy-scale subhalos. We ray-trace three light beams through each model to produce a catalogue of multiple images, and then use the MCMC sampler to recover the model parameters in the three different lensing configurations. We find that, for typical Hubble Space Telescope (HST)-quality imaging data, the total mass in the Einstein radius is recovered with ~1-5% error depending on the lensing configuration. However, we find that the galaxy mass is strongly degenerate with the cluster mass when no multiple images appear in the cluster centre. The mass of the galaxies is generally recovered with a 20% error, largely due to the poorly constrained cut-off radius. Ranking models by their Bayesian evidence in this way confirms the ability of strong lensing to constrain the mass profile in the central region of galaxy clusters. Ultimately, such a method applied to strong lensing clusters with a very large number of multiple images may provide unique geometrical constraints on cosmology.
LensTools implements a wide range of routines frequently used in weak gravitational lensing, including tools for image analysis, statistical processing, and numerical theory predictions. The package offers many useful features, including complete flexibility and easy customization of input/output formats; efficient measurements of the power spectrum, PDF, Minkowski functionals, and peak counts of convergence maps; survey masks; artificial noise generation engines; straightforward computation of statistical parameter inference; and ray tracing simulations. It requires standard numpy and scipy and, depending on the tools used, may also require Astropy (ascl:1304.002), emcee (ascl:1303.002), matplotlib, and mpi4py.
Lenstronomy is a multi-purpose open-source gravitational lens modeling Python package. Lenstronomy reconstructs the lens mass and surface brightness distributions of strong lensing systems using forward modelling, and supports a wide range of analytic lens and light models in arbitrary combination. The software can also reconstruct complex extended sources as well as point sources. Lenstronomy is flexible and numerically accurate, with a clear user interface that can be deployed across different platforms. Lenstronomy has been used to derive constraints on dark matter properties in strong lenses, measure the expansion history of the universe with time-delay cosmography, measure cosmic shear with Einstein rings, and decompose quasar and host galaxy light.
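As a flavor of the interface, a minimal use of lenstronomy's LensModel class to map an image-plane coordinate back to the source plane might look like the following (the profile choice and parameter values here are arbitrary examples):

```python
from lenstronomy.LensModel.lens_model import LensModel

# a singular isothermal ellipsoid with a 1" Einstein radius
lens_model = LensModel(lens_model_list=['SIE'])
kwargs_lens = [{'theta_E': 1.0, 'e1': 0.1, 'e2': 0.0,
                'center_x': 0.0, 'center_y': 0.0}]

# ray-shoot an image-plane position (arcsec) to the source plane
beta_x, beta_y = lens_model.ray_shooting(1.1, 0.2, kwargs_lens)
```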
Lensview models resolved gravitational lens systems based on LensMEM, but using the Skilling & Bryan MEM algorithm. Though its primary purpose is to find statistically acceptable lens models for lensed images and to reconstruct the surface brightness profile of the source, Lensview can also be used for simpler tasks such as projecting a given source through a lens model to generate a "true" image by conserving surface brightness. The user can specify complicated lens models based on one or more components, such as softened isothermal ellipsoids, point masses, exponential discs, and external shears; Lensview generates a best-fitting source matching the observed data for each specific combination of model parameters.
LEO-Py uses a novel technique to compute the likelihood function for data sets with uncertain, missing, censored, and correlated values. It uses Gaussian copulas to decouple the correlation structure of variables and their marginal distributions to compute likelihood functions, thus mitigating inconsistent parameter estimates and accounting for non-normal distributions in variables of interest or their errors.
LEO-vetter automatically vets transit signals found in light curve data. Inspired by the Kepler Robovetter (ascl:2012.006), LEO-vetter computes vetting metrics to be compared to a series of pass-fail thresholds. If a signal passes all tests, it is considered a planet candidate (PC). If a signal fails at least one test, it may be either an astrophysical false positive (FP; e.g., eclipsing binary, nearby eclipsing signal) or false alarm (FA; e.g., systematic, stellar variability). Pass-fail thresholds can be changed to suit individual research purposes, and LEO-vetter produces vetting reports for manual inspection of signals. Flux-level vetting can be applied to any light curve dataset (such as Kepler, K2, and TESS), including light curves with mixes of cadences, while pixel-level vetting has been implemented for TESS.
LePHARE is a set of Fortran commands to compute photometric redshifts and perform SED fitting. The latest version includes new features, with FIR fitting and a more complete treatment of physical parameters and uncertainties based on the PÉGASE and Bruzual & Charlot population synthesis models. The program is based on a simple χ² fitting method between the theoretical and observed photometric catalogues. A simulation program is also available to generate realistic multi-colour catalogues that take observational effects into account.
LeXInt (Leja interpolation for eXponential Integrators) is a temporal exponential integration package using the method of polynomial interpolation at Leja points. The Exponential Rosenbrock (EXPRB) and Exponential Propagation Iterative Runge-Kutta (EPIRK) methods use Leja interpolation to evaluate the exponential-like functions appearing in these schemes. For linear PDEs, one can obtain the exact solution (in time) by directly computing the matrix exponential.
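The remark about linear PDEs follows from the semi-discrete form: once spatial discretization yields a linear system, the evolution over a time step is given exactly by the matrix exponential,

$$
\frac{\mathrm{d}u}{\mathrm{d}t} = A\,u \quad\Longrightarrow\quad u(t_n + \Delta t) = e^{\Delta t A}\,u(t_n),
$$

and it is the action of such exponential-like matrix functions (e.g., $\varphi_1(z) = (e^z - 1)/z$ in the nonlinear schemes) that the Leja-point polynomial interpolation approximates.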
LExTeS (Link Extraction and Testing Suite) extracts hyperlinks from PDF documents, tests the extracted links to see which are broken, and tabulates the results. Though written to support a particular set of PDF documents, the dataset and scripts can be edited for use on other documents.
LFlGRB models the luminosity function (LF) of long Gamma Ray Bursts (lGRBs) by using a sample of Swift and Fermi lGRBs to re-derive the parameters of the Yonetoku correlation and self-consistently estimate pseudo-redshifts of all the bursts with unknown redshifts. The GRB formation rate is modeled as the product of the cosmic star formation rate and a GRB formation efficiency for a given stellar mass.
LFsGRB models the luminosity function (LF) of short Gamma Ray Bursts (sGRBs) using the available catalog data of all sGRBs detected through October 2017, estimating the luminosities via pseudo-redshifts obtained from the Yonetoku correlation and then assuming a standard delay distribution between the cosmic star formation rate and the production rate of their progenitors. The data are fit well both by exponential cutoff power-law and broken power-law models. Using the derived parameters of these models along with conservative values of the jet opening angles seen in afterglow observations, the true rate of short GRBs is derived. Assuming that a short GRB is produced by each binary neutron star merger (BNSM), the rate of gravitational wave (GW) detections from these mergers is derived for the past, present, and future configurations of the GW detector networks.
LGMCA (Local-Generalized Morphological Component Analysis) is an extension to GMCA (ascl:1710.015). Like GMCA, it is a Blind Source Separation method which enforces sparsity. The novel aspect of LGMCA, however, is that the mixing matrix changes across pixels, allowing LGMCA to deal with emission sources which vary spatially. These IDL scripts compute the CMB map from WMAP and Planck data; running LGMCA on the WMAP9 temperature products requires the main script and a selection of mandatory files, algorithm parameters, and map parameters.
LgrbWorldModel is written in Fortran 90 and attempts to model the population distribution of the long-duration class of Gamma-Ray Bursts (LGRBs) as detected by NASA's now-defunct Burst And Transient Source Experiment (BATSE) onboard the Compton Gamma Ray Observatory (CGRO). It is assumed that the population distribution of LGRBs is well fit by a multivariate log-normal distribution. The best-fit parameters of the distribution are then found by maximizing the likelihood of the data observed by the BATSE detectors via a native built-in Adaptive Metropolis-Hastings Markov Chain Monte Carlo (AMH-MCMC) sampler.
The Long Wavelength Spectrometer (LWS) was one of two complementary spectrometers on the Infrared Space Observatory (ISO). LIA (LWS Interactive Analysis) is used for processing data from the LWS. It provides access to the different processing steps, including visualization of intermediate products and interactive manipulation of the data at each stage.
Libimf provides a collection of programming functions based on the general IMF-algorithm by Pflamm-Altenburg & Kroupa (2006).
libnova is a general purpose, double precision, celestial mechanics, astrometry and astrodynamics library. Among many other calculations, it can calculate aberration, apparent position, proper motion, planetary positions, orbit velocities and lengths, angular separation of bodies, and hyperbolic motion of bodies.
Libpolycomp compresses and decompresses one-dimensional streams of numbers by means of several algorithms. It is well-suited for time-ordered data acquired by astronomical instruments or simulations. One of the algorithms, called "polynomial compression", combines two widely-used ideas (namely, polynomial approximation and filtering of Fourier series) to achieve substantial compression ratios for datasets characterized by smoothness and lack of noise. Notable examples are the ephemerides of astronomical objects and the pointing information of astronomical telescopes. Other algorithms implemented in this C library are well known and already widely used, e.g., RLE, quantization, deflate (via libz) and Burrows-Wheeler transform (via libbzip2). Libpolycomp can compress the timelines acquired by the Planck/LFI instrument with an overall compression ratio of ~9, while other widely known programs (gzip, bzip2) reach compression ratios less than 1.5.
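The core of the polynomial-compression idea fits in a few lines: fit a low-order polynomial to each chunk and store only the coefficients when they reproduce the samples within tolerance. The Python sketch below illustrates just that step (hypothetical; the actual C library additionally filters the fit residuals in a Fourier-like domain and provides the lossless fallbacks):

```python
import numpy as np

def compress_chunk(samples, deg=5, tol=1.0e-5):
    """Keep a chunk as Chebyshev coefficients when the fit is everywhere
    within `tol`, else keep the raw samples (a sketch, not libpolycomp's
    C API)."""
    x = np.arange(samples.size)
    coeffs = np.polynomial.chebyshev.chebfit(x, samples, deg)
    model = np.polynomial.chebyshev.chebval(x, coeffs)
    if np.max(np.abs(model - samples)) < tol:
        return ("poly", coeffs)     # deg + 1 numbers instead of samples.size
    return ("raw", samples)
```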
libprofit is a C++ library for image creation based on different luminosity profiles. It offers fast and accurate two-dimensional integration for a useful number of profiles, including Sersic, Core-Sersic, broken-exponential, Ferrer, Moffat, empirical King, point-source and sky, with a simple mechanism for adding new profiles. libprofit provides a utility to read the model and profile parameters from the command-line and generate the corresponding image. It can output the resulting image as text values, a binary stream, or as a simple FITS file. It also provides a shared library exposing an API that can be used by any third-party application. R and Python interfaces are available: ProFit (ascl:1612.004) and PyProfit (ascl:1612.005).
Libpsht (or "library for Performing Spherical Harmonic Transforms") is a collection of algorithms for efficient conversion between spatial-domain and spectral-domain representations of data defined on the sphere. The package supports transforms of scalars as well as spin-1 and spin-2 quantities, and can be used for a wide range of pixelisations (including HEALPix, GLESP and ECP). It will take advantage of hardware features like multiple processor cores and floating-point vector operations, if available. Even without this additional acceleration, the employed algorithms are among the most efficient (in terms of CPU time as well as memory consumption) currently being used in the astronomical community.
The library is written in strictly standard-conforming C90, ensuring portability to many different hardware and software platforms, and allowing straightforward integration with codes written in various programming languages like C, C++, Fortran, and Python.
Libpsht is distributed under the terms of the GNU General Public License (GPL) version 2.
Development on this project has ended; its successor is libsharp (ascl:1402.033).
The HERA Librarian system keeps track of all the primary data products for the telescope at a given site. The Librarian supports large data volumes and automated data processing capabilities. A web-based application handles human user and automatic requests and interfaces with a backing database and data storage servers. The system supports the long-term data storage of all relevant telescope data, as well as staging data to individual users' directories for processing.
Libsharp is a collection of algorithms for efficient conversion between maps on the sphere and their spherical harmonic coefficients. It supports a wide range of pixelisations (including HEALPix, GLESP, and ECP). This library is a successor of libpsht (ascl:1010.020); it adds MPI support for distributed memory systems and SHTs of fields with arbitrary spin, and also supports new developments in CPU instruction sets like the Advanced Vector Extensions (AVX) or fused multiply-accumulate (FMA) instructions. libsharp is written in portable C99; it provides an interface accessible to other programming languages such as C++, Fortran, and Python.
libstempo uses the Tempo2 library (ascl:1210.015) to load a pulsar's tim/par files, providing Python access to the TOAs, the residuals, the timing-model parameters, the fit procedure, and more.
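A typical minimal session looks like the following (the par/tim file names are placeholders):

```python
import libstempo

# load a pulsar from its TEMPO2 parameter and TOA files
psr = libstempo.tempopulsar(parfile='pulsar.par', timfile='pulsar.tim')

toas = psr.toas()         # site arrival times in MJD
res = psr.residuals()     # timing residuals in seconds
print(psr.pars())         # names of the fitted timing-model parameters

psr.fit()                 # re-run the least-squares timing fit
```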
libTheSky computes the positions of celestial bodies, such as the Moon, planets, and stars, and events, including conjunctions and eclipses, with great accuracy. Written in Fortran, libTheSky can use different reference frames (heliocentric, geocentric, topocentric) and coordinate systems (ecliptic, equatorial, galactic; spherical, rectangular), and the user can choose low- or high-accuracy calculations, depending on need.
LIFELINE (LIne proFiles in massivE coLliding wInd biNariEs) simulates X-ray line profiles in colliding wind binaries. The code is self-consistent and computes the distribution of the wind velocity, the characterization of the wind shock region, and the line profile. In addition to performing the full computation, LIFELINE can use a pre-computed velocity distribution to compute the shock characteristics and the line profile, or use pre-computed shock characteristics and velocity distributions to compute only the line profile.
light-curve implements the extraction of numerous light curve features suitable for processing alert and archival data for the current ZTF and future Vera Rubin Observatory LSST photometric surveys. These high-performance irregular time series processing tools are written in Rust and Python.
Lightbeam simulates the 3D propagation of light through waveguides of arbitrary geometries. The package is based on the finite-differences beam propagation method and employs a transverse adaptive mesh for extra computational efficiency. Also included are tools to simulate adaptive optics systems for use in conjunction with waveguides, useful in astronomical contexts for simulating coupling devices which transfer telescope light to the science instrument.
Lightcone works with simulated galaxy data stored in a relational database to rearrange the data into the shape of a light-cone; the simulated galaxy data are expected to be in a box volume. The light-cone constructing script works with output from the SAGE semi-analytic model (ascl:1601.006), but will work with any other model that saves galaxy positions (and other properties) per snapshot of the simulation volume distributed in time. The database configuration file is set up for the PostgreSQL RDBMS, but can be modified for use with any other SQL database.
LightcurveMC is a versatile and easily extended simulation suite for testing the performance of time series analysis tools under controlled conditions. It is designed to be highly modular, allowing new lightcurve types or new analysis tools to be introduced without excessive development overhead. The statistical tools are completely agnostic to how the lightcurve data is generated, and the lightcurve generators are completely agnostic to how the data will be analyzed. The use of fixed random seeds throughout guarantees that the program generates consistent results from run to run.
LightcurveMC can generate periodic light curves having a variety of shapes and stochastic light curves having a variety of correlation properties. It features two error models (Gaussian measurement errors and signal injection using a randomized sample of base light curves), testing of the C1 shape statistic, periodograms, ΔmΔt plots, autocorrelation function plots, peak-finding plots, and Gaussian process regression. The code is written in C++ and R.
Lightkurve analyzes astronomical flux time series data, in particular the pixels and light curves obtained by NASA’s Kepler, K2, and TESS exoplanet missions. This community-developed Python package is designed to be user friendly to lower the barrier for students, astronomers, and citizen scientists interested in analyzing data from these missions. Lightkurve provides easy tools to download, inspect, and analyze time series data and its documentation is supported by a large syllabus of tutorials.
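For example, with the version 2 interface, downloading, flattening, and folding the Kepler light curve of a known planet host takes only a few lines (the target and period here are illustrative):

```python
import lightkurve as lk

# search MAST for Kepler light curves of Kepler-10 and download them all
search = lk.search_lightcurve("Kepler-10", author="Kepler")
lc = search.download_all().stitch().remove_nans()

# remove long-term trends and fold on the period of Kepler-10b (days)
folded = lc.flatten(window_length=401).fold(period=0.837495)
folded.plot()
```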
Lightning is a spectral energy distribution (SED) fitting procedure that quickly and reliably recovers star formation history (SFH) and extinction parameters. The SFH is modeled as discrete steps in time. The code consists of a fully vectorized inversion algorithm to determine SFH step intensities and combines this with a grid-based approach to determine three extinction parameters.
Limb-darkening generates limb-darkening coefficients from ATLAS and PHOENIX model atmospheres using arbitrary response functions. The code uses PyFITS (ascl:1207.009) and has several other dependencies, and produces a folder of results with descriptions of the columns contained in each file.
LimberJack.jl performs cosmological analyses of two-point auto- and cross-correlation measurements from galaxy clustering, CMB lensing, and weak lensing data. Written in Julia, it obtains gradients for its outputs faster than traditional finite difference methods, making the code highly synergistic with gradient-based sampling methods such as Hamiltonian Monte Carlo. LimberJack.jl can efficiently explore parameter spaces with hundreds of dimensions.
LIME solves the molecular and atomic excitation and radiation transfer problem in a molecular gas and predicts emergent spectra. The code works in arbitrary three-dimensional geometry using unstructured Delaunay lattices for the transport of photons. Various physical models can be used as input, ranging from analytical descriptions through tabulated models to SPH simulations. To generate the Delaunay grid, the input model is sampled randomly, with the sample probability weighted by the molecular density and other parameters, so that the average grid point separation scales with the local opacity. Slow convergence of opaque models thereby becomes tractable; once convergence between the level populations, the radiation field, and the point separation has been obtained, the grid is ray-traced to produce images that can readily be compared to observations. LIME is particularly well suited for modeling ALMA data because of the high dynamic range in scales that can be resolved using this type of grid, and it can furthermore deal with overlapping lines of multiple molecular and atomic species.
LIMEPY solves distribution function (DF) based lowered isothermal models. It solves Poisson's equation based on the input parameters and offers fast solutions for isotropic/anisotropic and single/multi-mass models; it computes normalized DF values, density and velocity moments, and projected properties, and generates discrete samples.
LIMpy models and analyzes multi-line intensity maps of the CII (158 µm), OIII (88 µm), and CO (1-0) to CO (13-12) transitions. It can be used as an analytic model for the star formation rate, to simulate line intensity maps based on halo catalogs, and to calculate the power spectrum from simulated maps and the cross-correlated signal between two separate lines. Among other things, LIMpy can also create multi-line luminosity models and determine the multi-line intensity power spectrum.
The Python code line_selections reads synthetic "full" spectra and elemental spectra, automatically identifies the detectable lines at a given resolution (given the linelist used to compute the spectra), and returns a table containing various properties of the lines (e.g., purity, central wavelength, and depth), storing the information in a pandas DataFrame. line_selections demonstrates where chemical information is present in a stellar spectrum, and allows the user to optimize observational strategies, such as the choice of resolution and spectral windows, as well as analysis codes, through the application of high-quality masks.
Line-Stacker stacks both 3D cubes and already extracted spectra; it is an extension of Stacker (ascl:1912.019). It is an ensemble of both CASA tasks and native Python tasks. Line-Stacker supports image stacking, and some additional tools allowing further analysis of the stack product are also included in the module.
linemake generates formatted and curated atomic and molecular line lists suitable for spectral synthesis work. It is lightweight and easy to use. The code requires that the requested beginning and ending wavelengths not bridge the divide between two files of atomic line data; in such cases, run the code twice, once on either side of the divide, to generate the desired lists.
LineProf implements a series of line-profile analysis indicators and evaluates their correlation with RV data. It receives as input a list of Cross-Correlation Functions and an optional list of associated RVs. It evaluates the line profile according to the indicators and compares it with the internally computed RV if no associated RVs are provided, or with the provided RVs otherwise.
lintsampler performs linear interpolant sampling to create a set of sample points from a density function. The code uses the evaluation of the density at the two endpoints of a 1D interval, the four corners of a 2D rectangle, or generally the 2^k vertices of a k-dimensional hyperbox (or a series of such hyperboxes, e.g., the cells of a k-dimensional grid) to draw random samples within the hyperbox. lintsampler works by evaluating a given PDF on the nodes of a grid (or grid-like structure, such as a tree); the number of evaluations (and the memory occupancy) grows exponentially with the number of dimensions.
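In one dimension the scheme reduces to inverting the quadratic CDF of the linear interpolant between the two endpoint densities; the self-contained sketch below illustrates that cell-level step (names are assumptions, not lintsampler's actual API):

```python
import numpy as np

def sample_cell_1d(x0, x1, f0, f1, size=1, rng=None):
    """Draw samples on [x0, x1] from the linear interpolant between the
    endpoint densities f0 and f1 by analytic inversion of its quadratic
    CDF (a 1D sketch of the idea, not lintsampler's interface)."""
    rng = rng or np.random.default_rng()
    u = rng.uniform(size=size)
    if np.isclose(f0, f1):
        t = u                      # uniform density within the cell
    else:
        t = (-f0 + np.sqrt(f0**2 + u * (f1**2 - f0**2))) / (f1 - f0)
    return x0 + t * (x1 - x0)
```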
LIRA (LInear Regression in Astronomy) performs Bayesian linear regression that accounts for heteroscedastic errors in both the independent and the dependent variables, intrinsic scatters (in both variables), time evolution of slopes, normalization and scatters, Malmquist and Eddington bias, and break of linearity. The posterior distribution of the regression parameters is sampled with a Gibbs method exploiting the JAGS (ascl:1209.002) library.
LIRA (Low-counts Image Reconstruction and Analysis) deconvolves any unknown sky components, provides a fully Poisson 'goodness-of-fit' for any best-fit model, and quantifies uncertainties on the existence and shape of unknown sky. It does this without resorting to χ2 or rebinning, which can lose high-resolution information. It is written in R and requires the FITSio package.
The LIghtweight Source finding Algorithms (LiSA) library finds HI sources in next generation radio surveys. LiSA can analyze input data cubes of any size with pipelines that automatically decompose data into different domains for parallel distributed analysis. For source finding, the library contains python modules for wavelet denoising of 3D spatial and spectral data, and robust automatic source finding using null-hypothesis testing. The source-finding algorithms all have options to automatically choose parameters, minimizing the need for manual fine tuning. Finally, LiSA also contains neural network architectures for classification and characterization of 3D spectral data.
LISACode is a simulator of the LISA mission. Its ambition is to achieve a new degree of sophistication, allowing it to map, as closely as possible, the impact of the different subsystems on the measurements. It is also a useful tool for generating realistic data including several kinds of sources (massive black hole binaries, EMRIs, cosmic string cusps, stochastic background, etc.) and for preparing their analysis. It is fully integrated with the Mock LISA Data Challenge. LISACode is not a detailed simulator at the engineering level but rather a tool whose purpose is to bridge the gap between the basic principles of LISA and a future, sophisticated end-to-end simulator.
LiveData is a multibeam single-dish data reduction system for bandpass calibration and gridding. It is used for processing Parkes multibeam and Mopra data.
Lizard is an extensible Cyclomatic Complexity Analyzer for imperative programming languages including C/C++/C#, Python, Java, and JavaScript. It counts the nloc (lines of code without comments) and CCN (cyclomatic complexity number), and takes a token count of functions and a parameter count of functions. It also does copy-paste detection (code clone detection/code duplicate detection) and many other forms of static code analysis. Lizard is often used in software-related research and calculates how complex the code looks rather than how complex the code really is; though it is often very hard to get all the included folders and files right when they are complicated, that accuracy is not needed to determine cyclomatic complexity, which can be useful for measuring the maintainability of a software package.
LIZARD (Lagrangian Initialization of Zeldovich Amplitudes for Resimulations of Displacements) creates particle initial conditions for cosmological simulations using the Zel'dovich approximation for the matter and velocity power spectrum.
LMC is a Markov Chain Monte Carlo engine in Python that implements adaptive Metropolis-Hastings and slice sampling, as well as the affine-invariant method of Goodman & Weare, in a flexible framework. It can be used for simple problems, but the main use case is problems where expensive likelihood evaluations are provided by less flexible third-party software, which benefit from parallelization across many nodes at the sampling level. The parallel/adaptive methods use communication through MPI, or alternatively by writing/reading files, and mostly follow the approaches pioneered by CosmoMC (ascl:1106.025).
Lmfit provides a high-level interface to non-linear optimization and curve fitting problems for Python. Lmfit builds on and extends many of the optimization algorithms of scipy.optimize, especially the Levenberg-Marquardt method from optimize.leastsq. Its enhancements to optimization and data fitting problems include using Parameter objects instead of plain floats as variables, the ability to easily change fitting algorithms, improved estimation of confidence intervals, and curve fitting with the Model class. Lmfit includes many pre-built models for common lineshapes.
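A short example of the Model-class workflow with one of the built-in lineshapes:

```python
import numpy as np
from lmfit.models import GaussianModel

# noisy synthetic data: a Gaussian peak plus measurement noise
x = np.linspace(-5, 5, 201)
y = 3.0 * np.exp(-0.5 * ((x - 0.4) / 0.8) ** 2) \
    + np.random.normal(0, 0.1, x.size)

model = GaussianModel()
params = model.guess(y, x=x)        # data-driven initial Parameters
result = model.fit(y, params, x=x)  # Levenberg-Marquardt by default
print(result.fit_report())          # best-fit values and uncertainties
```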
loci is a shared library for interpolations in up to 4 dimensions. It is written in C and can be used with C/C++, Python, and other languages. To calculate the coefficients of the cubic polynomial, only local values are used: the data itself and all combinations of first-order derivatives, i.e., in 2D, f_x, f_y, and f_xy. This is in contrast to splines, where the coefficients are calculated not from derivatives but from non-local data, which can lead to over-smoothing of the result.
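The 1D analogue of this local, derivative-based construction is cubic Hermite interpolation, where the endpoint values and first derivatives fix the cubic uniquely; the Python sketch below is for illustration only (loci itself is a C library):

```python
import numpy as np

def hermite_1d(x, x0, x1, f0, f1, d0, d1):
    """Cubic Hermite interpolant on [x0, x1] built solely from the
    endpoint values (f0, f1) and first derivatives (d0, d1) -- the 1D
    analogue of loci's local approach."""
    h = x1 - x0
    t = (x - x0) / h
    h00 = 2 * t**3 - 3 * t**2 + 1      # standard Hermite basis functions
    h10 = t**3 - 2 * t**2 + t
    h01 = -2 * t**3 + 3 * t**2
    h11 = t**3 - t**2
    return h00 * f0 + h10 * h * d0 + h01 * f1 + h11 * h * d1
```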
Locus implements the Locus Algorithm, which maximizes the performance of differential photometry systems by optimizing the number and quality of reference stars in the Field of View with the target.
Calibration solutions for the LOFAR radio telescope are stored in a 5-dimensional (time, frequency, station, polarisation and direction in the sky) HDF5 table. H5plot is a GUI application focussing on interactive visual inspection of these calibration solutions.
Lofti_gaia fits orbital parameters for one component of a wide stellar binary relative to the other, when both objects are resolved in Gaia DR2. It takes as input only the Gaia DR2 source ids of the two components and their masses. It retrieves the relevant parameters from the Gaia archive, computes observational constraints from them, and fits orbital parameters to those measurements. It assumes the two components are bound in an elliptical orbit.
LoLLiPoP is a Planck low-l polarization likelihood based on cross-power spectra, for which the bias is zero when the noise is uncorrelated between maps. It uses a modified approximation applicable to cross-power spectra and is interfaced with the Cobaya (ascl:1910.019) MCMC sampler. Cross-spectra are computed on the CMB maps from Commander component separation applied to each detset split of the Planck frequency maps.
LoRD (Locate Reconnection Distribution) identifies the locations and structures of 3D magnetic reconnection within discrete magnetic field data. The toolkit contains three main functions; the first, ARD (Analyze Reconnection Distribution) locates the grids undergoing reconnection without null points and also recognizes the local configurations of reconnection sites. ANP (Analyze Null Points) locates and classifies the 3D null points, and APNP (Analyze Projected Null Points) analyzes the 2D neutral points projected on a plane near a cell. LoRD is written in Matlab and the toolkit contains demo scripts.
LORENE (Langage Objet pour la RElativité NumériquE) solves various problems arising in numerical relativity, and more generally in computational astrophysics. It is a set of C++ classes and provides tools to solve partial differential equations by means of multi-domain spectral methods. LORENE classes implement basic structures such as arrays and matrices, but also abstract mathematical objects, such as tensors, and astrophysical objects, such as stars and black holes.
LoSoTo (LOFAR Solution Tool) performs a variety of operations on H5parm data, which is based on the HDF5 format; it isolates direction-independent systematic effects, which can therefore be transferred to the target field. Subsets of data can be selected for each operation using lists of axis values, regular expressions, or intervals. The LoSoTo package stores solutions in arrays organized in a hierarchical fashion; this provides flexibility and preserves performance. The code can, for example, extract Faraday rotation from RR/LL phase solutions or a rotation matrix, clip solutions around the median, and calculate the ionospheric structure function. LoSoTo includes an outlier-flagging procedure, normalizes solutions to a given value, and offers an advanced plotting routine, among many other operations.
LOSP is a FORTRAN77 numerical package that computes the orbital parameters of spectroscopic binaries. The package deals with SB1 and SB2 systems and is able to adjust either circular or eccentric orbits through a weighted fit.
LOSSCONE computes the rates of capture of stars by supermassive black holes. It uses stationary and time-dependent solutions of the Fokker-Planck equation describing the evolution of the distribution function of stars due to two-body relaxation, and works for arbitrary spherical and axisymmetric galactic models that are provided by the user in the form of M(r), the cumulative mass as a function of radius.
LOTUS (non-LTE Optimization Tool Utilized for the derivation of atmospheric Stellar parameters) derives stellar parameters via the Equivalent Width (EW) method under the assumption of 1D non-local thermodynamic equilibrium. It mainly applies to spectroscopic data from high-resolution spectral surveys. Compared with non-spectroscopic analyses of benchmark stars, it can provide extremely accurate measurements of stellar parameters. LOTUS provides a fast optimizer for obtaining stellar parameters based on the Differential Evolution algorithm, yields well-constrained uncertainties of the derived stellar parameters from slice-sampling MCMC using PyMC3 (ascl:1610.016), and can interpolate the curve of growth from a theoretical EW grid under the assumptions of LTE and non-LTE. It also visualizes the excitation and ionization balance at the optimal combination of stellar parameters.
We present a set of low-resolution empirical SED templates for AGNs and galaxies in the wavelength range from 0.03 to 30 microns, based on the multi-wavelength photometric observations of the NOAO Deep-Wide Field Survey Bootes field and the spectroscopic observations of the AGN and Galaxy Evolution Survey. Our training sample comprises 14448 galaxies in the redshift range 0 ≲ z ≲ 1 and 5347 likely AGNs in the range 0 ≲ z ≲ 5.58. We use our templates to determine photometric redshifts for galaxies and AGNs. While they are relatively accurate for galaxies, their accuracy for AGNs is a strong function of the luminosity ratio between the AGN and galaxy components. Somewhat surprisingly, the relative luminosities of the AGN and its host are well determined even when the photometric redshift is significantly in error. We also use our templates to study the mid-IR AGN selection criteria developed by Stern et al. (2005) and Lacy et al. (2004). We find that the Stern et al. (2005) criteria suffer from significant incompleteness when there is a strong host galaxy component and at z ≈ 4.5, when the broad Halpha emission line is redshifted into the [3.6] band, but that they are little contaminated by low- and intermediate-redshift galaxies. The Lacy et al. (2004) criterion is not affected by incompleteness at z ≈ 4.5 and is somewhat less affected by strong galaxy host components, but is heavily contaminated by low-redshift star-forming galaxies. Finally, we use our templates to predict the color-color distribution of sources in the upcoming WISE mission and define a color criterion to select AGNs analogous to those developed for IRAC photometry. We estimate that between 640,000 and 1,700,000 AGNs will be identified by these criteria, though they will have serious completeness problems for z ≳ 3.4.
LP-VIcode computes variational chaos indicators (CIs) quickly and easily, and includes a number of such indicators.
LPF (Live Pulse Finder) provides real-time automated analysis of the radio image data stream at multiple frequencies. The fully automated GPU-based machine-learning backed pipeline performs source detection, association, flux measurement and physical parameter inference. At the end of the pipeline, an alert of a significant detection of a transient event can be sent out and the data saved for further investigation.
The Limited Post-Newtonian N-body code (LPNN) simulates post-Newtonian interactions between a massive object and many low-mass objects. The interaction between one massive object and low-mass objects is calculated by post-Newtonian approximation, and the interaction between low-mass objects is calculated by Newtonian gravity. This code is based on the sticky9 code, and can be accelerated with the use of GPU in a CUDA (version 4.2 or earlier) environment.
This software computes likelihoods for the Luminous Red Galaxies (LRG) data from the Sloan Digital Sky Survey (SDSS). It includes a patch to the existing CAMB software (ascl:1102.026; the February 2009 release) to calculate the theoretical LRG halo power spectrum for various models. The code is written in Fortran 90 and has been tested with the Intel Fortran 90 and GFortran compilers.
LRGS (Linear Regression by Gibbs Sampling) implements a Gibbs sampler to solve the problem of multivariate linear regression with uncertainties in all measured quantities and intrinsic scatter. LRGS extends an algorithm by Kelly (2007) that used Gibbs sampling for performing linear regression in fairly general cases in two ways: generalizing the procedure for multiple response variables, and modeling the prior distribution of covariates using a Dirichlet process.
LSC (LINEAR Supervised Classification) trains a number of classifiers, including random forest and K-nearest neighbor, to classify variable stars and compares the results to determine which classifier is most successful. Written in R, the package includes anomaly detection code for testing the application of the selected classifier to new data, thus enabling the creation of highly reliable data sets of classified variable stars.
The Large Survey Database (LSD) is a Python framework and DBMS for distributed storage, cross-matching, and querying of large survey catalogs (>10^9 rows, >1 TB). The primary driver behind its development is the analysis of Pan-STARRS PS1 data. It is specifically optimized for fast queries and parallel sweeps of positionally and temporally indexed datasets. It transparently scales to more than 10^2 nodes and can be made to function in "shared nothing" architectures.
LSDCat is a conceptually simple but robust and efficient detection package for emission lines in wide-field integral-field spectroscopic datacubes. The detection utilizes a 3D matched-filtering approach for compact single emission line objects. Furthermore, the software measures fluxes and extents of detected lines. LSDCat is implemented in Python, with a focus on fast processing of large data-volumes.
LSSGALPY provides visualization tools to compare the 3D positions of a sample (or samples) of isolated systems with the locations of the galaxies in the large-scale structure (LSS) of their local and/or large-scale environments. The interactive tools use different projections in the 3D space (right ascension, declination, and redshift) to study the relation of the galaxies to the LSS. The tools permit visualization of the locations of the galaxies for different values of redshift and redshift ranges; the relationship of isolated galaxies, isolated pairs, and isolated triplets to the galaxies in the LSS can be visualized for different values of declination and declination ranges.
LTdwarfIndices studies spectral indices to determine whether one or more brown dwarfs are photometric variable candidates. For a single brown dwarf, it analyzes a given set of indices and outputs the number of graphs in which the object appears in the variable area, whether it is a variable or non-variable candidate, and, optionally, an index-index or histogram plot. Using another code module, LTdwarfIndices can also analyze a set of sample indices for many brown dwarfs.
LTL provides dynamic arrays of up to 7 dimensions, subarrays and slicing, support for fixed-size vectors and matrices including basic linear algebra operations, expression-template-based evaluation, and I/O facilities for ASCII and FITS format files. Utility classes for command-line processing and configuration-file processing are provided as well.
LTS_LINEFIT and LTS_PLANEFIT are IDL programs to robustly fit lines and planes to data with intrinsic scatter. The code combines the Least Trimmed Squares (LTS) robust technique, proposed by Rousseeuw (1984) and optimized in Rousseeuw & Van Driessen (2006), into a least-squares fitting algorithm which allows for intrinsic scatter. This method makes the fit converge to the correct solution even in the presence of a large number of catastrophic outliers, where the much simpler σ-clipping approach can converge to the wrong solution. The code is also available in Python as ltsfit.
LtU-ILI (Learning the Universe Implicit Likelihood Inference) performs machine learning parameter inference. Given labeled training data or a stochastic simulator, the LtU-ILI pipeline automatically trains state-of-the-art neural networks to learn the data-parameter relationship and produces robust, well-calibrated posterior inference. The package comes with a wide range of customizable complexity, including posterior-, likelihood-, and ratio-estimation methods for ILI with their sequential learning analogs, and various neural density estimators, including mixture density networks, conditional normalizing flows, and ResNet-like ratio classifiers. It offers fully customizable, exotic embedding networks, including CNNs and Graph Neural Networks, and a unified interface for multiple ILI backends such as sbi, pydelfi, and lampe. LtU-ILI also handles multiple marginal and multivariate posterior coverage metrics, and offers Jupyter and command-line interfaces and a parallelizable configuration framework for efficient hyperparameter tuning and production runs.
LumFunc is a numerical code that models the luminosity function based on central galaxy luminosity-halo mass and total galaxy luminosity-halo mass relations. The code can handle rest-frame b_J-band (2dFGRS), r'-band (SDSS), and K-band luminosities, and any redshift, with redshift dependences specified by the user. It separates the luminosity function (LF) into conditional luminosity functions, the LF as a function of halo mass, and galaxy types; by specifying a narrow mass range, the code will return the conditional luminosity functions. The code returns luminosity functions for galaxy types as well (broadly divided into early-type and late-type), and also models the cluster luminosity function, either mass-averaged or for individual clusters.
LUNA generates dynamically accurate lightcurves from a planet-moon pair, analytically accounting for shadow overlaps, stellar limb darkening, and planet-moon dynamical motion. The code takes transit timing/duration variations and ingress/egress asymmetries into consideration not only for the planet, but also the moon. LUNA was designed to be analytical and dynamical and to incorporate limb darkening (including non-linear laws) and account for all orbital elements, including eccentricity and longitude of the ascending node. Because the software is precise and analytic, LUNA is a highly potent tool for exomoon detection.
Long Wavelength Propagation Capability (LWPC), written as a collection of separate programs that perform unique actions, generates geographical maps of signal availability for coverage analysis. The program makes it easy to set up these displays by automating most of the required steps. The user specifies the transmitter location and frequency, the orientation of the transmitting and receiving antennae, and the boundaries of the operating area. The program automatically selects paths along geographic bearing angles to ensure that the operating area is fully covered. The diurnal conditions and other relevant geophysical parameters are then determined along each path. After the mode parameters along each path are determined, the signal strength along each path is computed and interpolated onto a grid overlying the operating area; the final grid of signal strength values is used to display the signal strength geographically. LWPC uses character strings to control programs and to specify options; the control strings have the same meaning and use among all the programs.
LyaCoLoRe uses CoLoRe (ascl:2111.009) simulations to generate simulated Lyman alpha forest spectra. The code takes the output files from CoLoRe as an input, carries out several stages of processing, and produces realistic skewers of transmitted flux fraction as an output. The repository includes tools to tune the parameters within LyaCoLoRe's transformation, and to measure the 1D power spectrum of output skewers quickly.
LZIFU (LaZy-IFU) is an emission line fitting pipeline for integral field spectroscopy (IFS) data. Written in IDL, the pipeline turns IFS data into 2D emission line flux and kinematic maps for further analysis. LZIFU has been applied to and tested extensively on various IFS data, including the SAMI Galaxy Survey, the Wide-Field Spectrograph (WiFeS), the CALIFA survey, the S7 survey and the MUSE instrument on the VLT.
M_SMiLe computes an approximation of the probability of magnification for a lens system consisting of microlensing by compact objects within a galaxy cluster. It specifically focuses on the scenario where the galaxy cluster is strongly lensing a background galaxy and the compact objects, such as stars, are sensitive to this microlensing effect. The microlenses responsible for this effect are stars and stellar remnants, though exotic objects such as compact dark matter candidates (including PBHs and axion mini-halos) can contribute to this effect.
m2mcluster performs made-to-measure modeling of star clusters, and can fit target observations of a Galactic globular cluster's 3D density profile and individual kinematic properties, including proper motion velocity dispersion and line-of-sight velocity dispersion. The code uses AMUSE (ascl:1107.007) to model the gravitational N-body evolution of the system between time steps; GalPy (ascl:1411.008) is also required.
The MATLAB Astronomy and Astrophysics Toolbox (MAAT) is a collection of software tools and modular functions for astronomy and astrophysics written in the MATLAB environment. It includes over 700 MATLAB functions and a few tens of data files and astronomical catalogs. The scripts cover a wide range of subjects including: astronomical image processing, ds9 control, astronomical spectra, optics and diffraction phenomena, catalog retrieval and searches, celestial maps and projections, Solar System ephemerides, planar and spherical geometry, time and coordinates conversion and manipulation, cosmology, gravitational lensing, function fitting, general utilities, plotting utilities, statistics, and time series analysis.
Photometric rotational modulations due to starspots remain the most common and accessible way to study stellar activity. Modelling rotational modulations allows one to invert the observations into several basic parameters, such as the rotation period, spot coverage, stellar inclination and differential rotation rate. The most widely used analytic model for this inversion comes from Budding (1977) and Dorren (1987), who considered circular, grey starspots for a linearly limb-darkened star. macula, a Fortran 90 code, extends that model to make it more suitable for analyzing high-precision photometry such as that of Kepler. It provides several improvements: non-linear limb darkening of the star and spot, a single-domain analytic function, partial derivatives for all input parameters, temporal partial derivatives, diluted light compensation, instrumental offset normalisation, differential rotation, starspot evolution, and predictions of transit depth variations due to unocculted spots. The inclusion of non-linear limb darkening gives macula a maximum photometric error an order of magnitude smaller than that of Dorren (1987) for Sun-like stars observed in the Kepler bandpass, and the code executes three orders of magnitude faster than comparable numerical codes, making it well suited for inference problems.
MADCOW is a set of parallelized programs written in ANSI C and Fortran 77 that perform a maximum likelihood analysis of visibility data from interferometers observing the cosmic microwave background (CMB) radiation. This software has been used to produce power spectra of the CMB with the Very Small Array (VSA) telescope.
MADCUBA analyzes astronomical datacubes and multiple spectra from various astronomical facilities, including ALMA, Herschel, VLA, IRAM 30m, APEX, GBT, and others. These telescopes, and in particular ALMA, generate extremely large datacubes (spatial, spectral and polarization). This software combines a user-friendly interface and powerful data analysis system to derive the physical conditions of molecular gas, its chemical complexity and the kinematics from datacubes. Built using the ImageJ (ascl:1206.013) infrastructure, MADCUBA visualizes astronomical datacubes with thousands of spectral channels and datasets with thousands of spectra; it also identifies molecular species using publicly available molecular catalogs. Among other tasks, it can automatically derive the physical parameters of the molecular species (column density, excitation temperature, velocity and linewidths), providing the best non-linear least-squares fit using the Levenberg-Marquardt algorithm.
MadDM computes dark matter relic abundance and dark matter nucleus scattering rates in a generic model. The code is based on the existing MadGraph 5 architecture and as such is easily integrable into any MadGraph collider study. A simple Python interface offers a level of user-friendliness characteristic of MadGraph 5 without sacrificing functionality. MadDM is able to calculate the dark matter relic abundance in models which include a multi-component dark sector, resonance annihilation channels and co-annihilations. The direct detection module of MadDM calculates spin independent / spin dependent dark matter-nucleon cross sections and differential recoil rates as a function of recoil energy, angle and time. The code provides a simplified simulation of detector effects for a wide range of target materials and volumes.
MADHAT (Model-Agnostic Dark Halo Analysis Tool) analyzes gamma-ray emission from dwarf satellite galaxies and dwarf galaxy candidates due to dark matter annihilation, dark matter decay, or other nonstandard or unknown astrophysics. The tool is data-driven and model-independent, and provides statistical upper bounds on the number of observed photons in excess of the number expected using a stacked analysis of any selected set of dwarf targets. MADHAT also calculates the resulting bounds on the properties of dark matter under any assumptions the user makes regarding dark sector particle physics or astrophysics.
MADLens produces non-Gaussian cosmic shear maps at arbitrary source redshifts. A MADLens simulation with only 256^3 particles produces convergence maps whose power spectra agree with theoretical lensing power spectra up to scales of L=10000. The code is based on a highly parallelizable particle-mesh algorithm and employs a sub-evolution scheme in the lensing projection and a machine-learning inspired sharpening step to achieve these high accuracies.
MADmap produces maximum-likelihood images of the sky from time-ordered data which include correlated noise, such as those gathered by Cosmic Microwave Background (CMB) experiments. It works efficiently on platforms ranging from small workstations to the most massively parallel supercomputers. Map-making is a critical step in the analysis of all CMB data sets, and the maximum-likelihood approach is the most accurate and widely applicable algorithm; however, it is a computationally challenging task. This challenge will only increase with the next generation of ground-based, balloon-borne and satellite CMB polarization experiments. The faintness of the B-mode signal that these experiments seek to measure requires them to gather enormous data sets. MADmap has the ability to address problems typically encountered in the analysis of realistic CMB data sets. The massively parallel and distributed implementation is detailed and scaling complexities are given for the resources required. MADmap is capable of analyzing the largest data sets now being collected on computing resources currently available.
MADYS (Manifold Age Determination for Young Stars) determines the age and mass of young stellar and substellar objects. The code automatically retrieves and cross-matches photometry from several catalogs, estimates interstellar extinction, and derives age and mass estimates for individual objects through isochronal fitting. MADYS harmonizes the heterogeneity of publicly-available isochrone grids and the user can choose amongst several models, some of which have customizable astrophysical parameters. Particular attention has been dedicated to the categorization of these models, labeled through a four-level taxonomical classification.
maelstrom models binary orbits through the phase modulation technique. This set of custom PyMC3 models and solvers fits each individual datapoint in the time series by forward modeling the time delay onto the light curve, an approach that fully captures variations in a light curve caused by an orbital companion.
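As a rough illustration of the phase-modulation idea (a sketch only, not maelstrom's actual PyMC3 API), the forward model below perturbs a pulsation by a binary light-travel-time delay tau(t); the frequency, delay amplitude, and orbital period are all hypothetical:

    import numpy as np

    def delayed_lightcurve(t, nu, a_sini_c, p_orb, phase0=0.0):
        """Pulsation observed through a binary light-travel-time delay tau(t).
        t: times (days); nu: pulsation frequency (1/day);
        a_sini_c: projected semi-major axis (light-days); p_orb: period (days)."""
        tau = a_sini_c * np.sin(2 * np.pi * t / p_orb + phase0)  # circular orbit
        return np.sin(2 * np.pi * nu * (t - tau))   # delay modulates the phase

    t = np.linspace(0, 100, 5000)                   # 100 days of observations
    flux = delayed_lightcurve(t, nu=20.0, a_sini_c=2e-3, p_orb=30.0)

Fitting such a model to every datapoint, rather than to extracted time delays, is what allows the approach to capture all companion-induced variations.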
MAESTRO, a low Mach number stellar hydrodynamics code, simulates long-time, low-speed flows that would be prohibitively expensive to model using traditional compressible codes. MAESTRO is based on an equation set derived using low Mach number asymptotics; this equation set does not explicitly track acoustic waves and thus allows a significant increase in the time step. MAESTRO is suitable for two- and three-dimensional local atmospheric flows as well as three-dimensional full-star flows, and adaptive mesh refinement (AMR) has been incorporated into the code. The expansion of the base state for full-star flows using a novel mapping technique between the one-dimensional base state and the Cartesian grid is also available.
NOTE: MAESTRO is no longer being actively developed. Users should switch to MAESTROeX (ascl:1908.019) to take advantage of the latest capabilities.
MAESTROeX solves the equations of low Mach number hydrodynamics for stratified atmospheres or stars with a general equation of state. It includes reactions and thermal diffusion and can be used on anything from a single core to 100,000s of processor cores with MPI + OpenMP. MAESTROeX maintains the accuracy of its predecessor MAESTRO (ascl:1010.044) while taking advantage of a simplified temporal integration scheme and leveraging the AMReX software framework for block-structured adaptive mesh refinement (AMR) applications.
MAGI (MAny-component Galaxy Initializer) generates initial conditions for numerical simulations of galaxies that resemble observed galaxies and are dynamically stable for time-scales longer than their characteristic dynamical times, taking into account galaxy bulges, discs, and haloes. MAGI adopts a distribution-function-based method and supports various kinds of density models, including custom-tabulated inputs and the presence of more than one disc, and is fast and easy to use.
MagIC simulates fluid dynamics in a spherical shell. It solves the Navier-Stokes equations including the Coriolis force, optionally coupled with an induction equation for magnetohydrodynamics (MHD), a temperature (or entropy) equation, and an equation for chemical composition, under both the anelastic and the Boussinesq approximations. MagIC uses either Chebyshev polynomials or finite differences in the radial direction and spherical harmonic decomposition in the azimuthal and latitudinal directions. The time-stepping scheme relies on a semi-implicit Crank-Nicolson scheme for the linear terms of the MHD equations and an Adams-Bashforth scheme for the non-linear terms and the Coriolis force.
The R suite magicaxis makes useful and pretty plots for scientific work, and includes functions for base plotting, with particular emphasis on pretty axis labelling in the many circumstances that arise in scientific plotting. It also includes functions for generating images and contours that reflect the 2D quantile levels of the data, designed particularly for the output of MCMC posteriors, where visualizing the location of the 68% and 95% 2D quantiles for covariant parameters is a necessary part of post-MCMC analysis. In addition, it can generate low and high error bars and allows clipping of values, rejection of bad values, and log stretching.
MAGIX provides an interface between existing codes and an iterating engine that minimizes deviations of the model results from available observational data; it constrains the values of the model parameters and provides corresponding error estimates. Many models (and, in principle, not only astrophysical models) can be plugged into MAGIX to explore their parameter space and find the set of parameter values that best fits observational/experimental data. MAGIX complies with the data structures and reduction tools of Atacama Large Millimeter Array (ALMA), but can be used with other astronomical and with non-astronomical data.
MAGNETAR is a set of tools for the study of the magnetic field in simulations of MHD turbulence and in polarization observations. Using the method of histograms of relative orientation (HRO), it calculates the histogram of relative orientation between density structures and the magnetic field in data cubes from simulations of MHD turbulence and in observations of polarization.
Large-scale coherent magnetic fields are observed in galaxies and clusters, but their ultimate origin remains a mystery; this code reconsiders the prospects for primordial magnetogenesis by a cosmic string network. It evolves semi-analytic models of string networks (including both one-scale and velocity-dependent one-scale models) in a Lambda-CDM cosmology, including the forces and torques on loops from Hubble redshifting, dynamical friction, and gravitational wave emission. The authors show that the magnetic flux produced by long strings has been overestimated in the past and give improved estimates; they also compute the fields created by the loop population, finding that it gives the dominant contribution to the total magnetic field strength on present-day galactic scales. Predictions include the magnetic field strength as a function of correlation length, as well as the volume covered by magnetic fields. The conclusion is that string networks could account for magnetic fields on galactic scales, but only if coupled with an efficient dynamo amplification mechanism.
Magnetizer computes time- and radial-dependent magnetic fields for a sample of galaxies in the output of a semi-analytic model of galaxy formation. The magnetic field is obtained by numerically solving the galactic dynamo equations throughout the history of each galaxy. Stokes parameters and the Faraday rotation measure can also be computed along a random line of sight for each galaxy.
Magnetron, written in Python, decomposes magnetar bursts into a superposition of small spike-like features with a simple functional form, where the number of model components is itself part of the inference problem. Markov Chain Monte Carlo (MCMC) sampling and reversible jumps between models with different numbers of parameters are used to characterize the posterior distributions of the model parameters and the number of components per burst.
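A common simple functional form for such spike-like features is an exponential rise followed by an exponential decay (the exact parameterization Magnetron uses may differ); a burst is then a superposition of k components plus a background, with k itself inferred. A minimal sketch:

    import numpy as np

    def spike(t, t0, amp, rise, decay):
        """Asymmetric spike: exponential rise before t0, exponential decay after."""
        return amp * np.where(t < t0,
                              np.exp((t - t0) / rise),
                              np.exp(-(t - t0) / decay))

    def burst_model(t, background, components):
        """Superposition of spikes; the number of components is a model choice."""
        total = np.full_like(t, background, dtype=float)
        for t0, amp, rise, decay in components:
            total += spike(t, t0, amp, rise, decay)
        return total

    t = np.linspace(0.0, 1.0, 1000)
    model = burst_model(t, background=5.0,
                        components=[(0.20, 40.0, 0.010, 0.050),
                                    (0.45, 15.0, 0.005, 0.020)])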
MAGPHYS is a self-contained, user-friendly model package to interpret observed spectral energy distributions of galaxies in terms of galaxy-wide physical parameters pertaining to the stars and the interstellar medium. MAGPHYS is optimized to derive statistical constraints on fundamental parameters related to star formation activity and dust content (e.g. star formation rate, stellar mass, dust attenuation, dust temperatures) of large samples of galaxies using a wide range of multi-wavelength observations. A Bayesian approach is used to interpret the SEDs all the way from the ultraviolet/optical to the far-infrared.
MAGPy-RV (Modelling stellar Activity with Gaussian Processes in Radial Velocity) models data with Gaussian Process regression and affine-invariant Markov chain Monte Carlo parameter searching. Developed to model intrinsic, quasi-periodic variations induced by the host star in radial velocity (RV) surveys for the detection of exoplanets and the accurate measurement of their orbital parameters and masses, it now includes a variety of kernels and models and can be applied to any timeseries analysis. MAGPy-RV includes publication-level plotting, efficient posterior extraction, and export-ready LaTeX results tables. It also handles multiple datasets at once and can model offsets and systematics from multiple instruments. MAGPy-RV requires no external dependencies besides basic Python libraries and corner (ascl:1702.002).
Magrathea-Pathfinder propagates photons within cosmological simulations to construct observables. This high-performance framework uses a 3D Adaptive-Mesh Refinement and is built on top of the MAGRATHEA metalibrary (ascl:2203.023).
MAGRATHEA (Multi-processor Adaptive Grid Refinement Analysis for THEoretical Astrophysics) is a foundational cosmological library and a relativistic raytracing code. Classical linear algebra libraries come with their own operations and can be difficult to leverage for new data types. Instead of providing basic types, MAGRATHEA provides tools to generate base types such as scalar quantities, points, vectors, or tensors.
MAGRATHEA solves planet interiors, considering the case of fully differentiated interiors. The code integrates the hydrostatic equation to determine the correct planet radius given the mass in each layer, returning the pressure, temperature, density, phase, and radius at steps of enclosed mass. The code supports four layers: core, mantle, hydrosphere, and atmosphere; each layer has a phase diagram with equations of state chosen for each phase.
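The core operation is an integration of the structure equations in enclosed-mass coordinates. The sketch below uses a toy piecewise-constant density in place of MAGRATHEA's phase-dependent equations of state; all numbers are illustrative:

    import numpy as np

    G = 6.674e-11  # m^3 kg^-1 s^-2

    def integrate_structure(m_total, layer_bounds, layer_density, n_steps=20000):
        """Integrate dr/dm = 1/(4 pi r^2 rho) outward, then the hydrostatic
        equation dP/dm = -G m / (4 pi r^4) inward from a zero-pressure surface."""
        dm = m_total / n_steps
        m = dm * (np.arange(n_steps) + 0.5)           # enclosed mass per shell
        layer = np.searchsorted(layer_bounds, m / m_total)
        rho = np.asarray(layer_density)[layer]
        r = np.cbrt(np.cumsum(3.0 * dm / (4.0 * np.pi * rho)))  # shell radii
        dP = G * m * dm / (4.0 * np.pi * r**4)
        P = np.cumsum(dP[::-1])[::-1]                 # surface pressure = 0
        return r, P

    # Toy Earth-mass planet: iron core holding 32% of the mass, silicate mantle
    r, P = integrate_structure(5.97e24, layer_bounds=[0.32, 1.0],
                               layer_density=[12000.0, 4500.0])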
Magritte performs 3D radiative transfer modeling; though focused on astrophysics and cosmology, the techniques can also be applied more generally. The code uses a deterministic ray-tracer with a formal solver that currently focuses on line radiative transfer. Magritte can either be used as a C++ library or as a Python package.
MAH calculates the posterior distribution of the "minimum atmospheric height" (MAH) of an exoplanet by inputting the joint posterior distribution of the mass and radius. The code collapses the two dimensions of mass and radius into a one dimensional term that most directly speaks to whether the planet has an atmosphere or not. The joint mass-radius posteriors derived from a fit of some exoplanet data (likely using MCMC) can be used by MAH to evaluate the posterior distribution of R_MAH, from which the significance of a non-zero R_MAH (i.e. an atmosphere is present) is calculated.
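The posterior logic can be sketched as follows; the rocky mass-radius relation r_rock below is a hypothetical power-law stand-in for a real interior model, and the joint posterior samples are synthetic:

    import numpy as np

    rng = np.random.default_rng(1)

    # Stand-in joint (mass, radius) posterior samples in Earth units; in
    # practice these come from the MCMC fit of the exoplanet data.
    mass = rng.normal(1.0, 0.10, 100_000)
    radius = rng.normal(1.1, 0.05, 100_000)

    def r_rock(m):
        """Hypothetical pure-rock mass-radius relation (power-law stand-in)."""
        return m ** 0.27

    # Collapse the two dimensions into the minimum atmospheric height
    r_mah = radius - r_rock(mass)
    print("P(R_MAH > 0) =", np.mean(r_mah > 0))  # evidence for an atmosphere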
MakeCloud makes turbulent giant molecular cloud (GMC) initial conditions for GIZMO (ascl:1410.003). It generates turbulent velocity fields on the fly and stores that data in a user-specified path for efficiency. The code is flexible, allowing the user control through various parameters, including the radius of the cloud, number of gas particles, type of initial turbulent velocity (Gaussian or full), and magnetic energy as a fraction of the binding energy, among other options. With an additional file, it can also create glassy initial conditions.
MAKEE (MAuna Kea Echelle Extraction) reduces data from the HIRES and ESI instruments at Keck Observatory. It is optimized for the spectral extraction of single, unresolved point sources and is designed to run non-interactively using a set of default parameters. Taking the raw HIRES FITS files as input, the code determines the position (or trace) of each echelle order, defines the object and background extraction boundaries, optimally extracts a spectrum for each order, and computes wavelength calibrations. MAKEE produces FITS format "spectral images" (each row is a separate echelle order spectrum) and the data values are in arbitrary (relative) flux units. MAKEE will reduce data from all HIRES formats, including the single CCD format, the single CCD with Red and UV cross dispersers, and the current 3 CCD system. It can handle a variety of pixel binnings, including 1x1, 1x2, 1x4 (column x row).
MaLTPyNT (Matteo's Libraries and Tools in Python for NuSTAR Timing) provides a quick-look timing analysis of NuSTAR data, properly treating orbital gaps and exploiting the presence of two independent detectors by using the cospectrum as a proxy for the power density spectrum. The output of the analysis is a cospectrum, or a power density spectrum, that can be fitted with XSPEC (ascl:9910.005) or ISIS (ascl:1302.002). The software also calculates time lags. Though written for NuSTAR data, MaLTPyNT can also perform standard spectral analysis on X-ray data from other satellites such as XMM-Newton and RXTE.
MALU visualizes integral field spectroscopy (IFS) data such as CALIFA, MaNGA, SAMI or MUSE data, producing fully interactive plots. The tool is not specific to any instrument. It is available in Python and no installation is required.
MAMPOSSt (Modeling Anisotropy and Mass Profiles of Observed Spherical Systems) is a Bayesian code to perform mass/orbit modeling of spherical systems. It determines marginal parameter distributions and parameter covariances of parametrized radial distributions of dark or total matter, as well as the mass of a possible central black hole, and the radial profiles of density and velocity anisotropy of one or several tracer components, all of which are jointly fit to the discrete data in projected phase space. It is based upon the MAMPOSSt likelihood function for the distribution of individual tracers in projected phase space (projected radius and line-of-sight velocity) and the CosmoMC Markov Chain Monte Carlo code (ascl:1106.025), run in generic mode. MAMPOSSt is not based on the 6D distribution function (which would require triple integrals), but on the assumption that the local 3D velocity distribution is an (anisotropic) Gaussian (requiring only a single integral).
The Maneage (Managing data lineage; ending pronounced like "lineage") framework produces fully reproducible computational research. It provides full control over building the necessary software environment from a low-level C compiler, the shell and LaTeX, all the way up to the high-level science software in languages such as Python, without a third-party package manager. Once the software environment is built, adding analysis steps is as easy as defining "Make" rules, allowing parallelized operations and avoiding the repetition of operations that do not need to be recreated; Make provides control over data provenance. A Maneage'd project also contains the narrative description of the project in LaTeX, which helps prepare the research for publication. All results from the analysis are passed into the report through LaTeX macros, allowing immediate dynamic updates to the PDF paper when any part of the analysis has changed. All information is stored in plain text and is version-controlled in Git. Maneage itself is actually a Git branch; new projects start by defining a new Git branch over it and customizing it for a new project. Through Git merging of branches, it is possible to import infrastructure updates to projects.
The MaNGA data analysis pipeline (MaNGA DAP) analyzes the data produced by the MaNGA data-reduction pipeline (ascl:2203.016) to produce physical properties derived from the MaNGA spectroscopy. All survey-provided properties are currently derived from the log-linear binned datacubes (i.e., the LOGCUBE files).
The MaNGA Data Reduction Pipeline (DRP) processes the raw data to produce flux-calibrated, sky-subtracted, coadded data cubes from each of the individual exposures for a given galaxy. The DRP consists of two primary parts: the 2D stage, which produces flux-calibrated fiber spectra from raw individual exposures, and the 3D stage, which combines multiple flux-calibrated exposures with astrometric information to produce stacked data cubes. These science-grade data cubes are then processed by the MaNGA Data Analysis Pipeline (ascl:2203.017), which measures the shape and location of various spectral features, fits stellar population models, and performs a variety of other analyses necessary to derive astrophysically meaningful quantities from the calibrated data cubes.
Mangle deals accurately and efficiently with complex angular masks, such as occur typically in galaxy surveys. Mangle performs the following tasks: converts masks between many handy formats (including HEALPix); rapidly finds the polygons containing a given point on the sphere; rapidly decomposes a set of polygons into disjoint parts; expands masks in spherical harmonics; generates random points with weights given by the mask; and implements computations for correlation function analysis. To mangle, a mask is an arbitrary union of arbitrarily weighted angular regions bounded by arbitrary numbers of edges. The restrictions on the mask are only (1) that each edge must be part of some circle on the sphere (but not necessarily a great circle), and (2) that the weight within each subregion of the mask must be constant. Mangle is complementary to and integrated with the HEALPix package (ascl:1107.018); mangle works with vector graphics whereas HEALPix works with pixels.
Mangrove uses Graph Neural Networks to regress baryonic properties directly from full dark matter merger trees to infer galaxy properties. The package includes code for preprocessing the merger tree, and training the model can be done either as single experiments or as a sweep. Mangrove provides loss functions, learning rate schedulers, models, and a script for doing the training on a GPU.
The MapCUMBA package applies a multigrid fast iterative Jacobi algorithm for map-making in the context of CMB experiments.
MapCurvature, written in IDL, can create map projections with Goldberg-Gott indicatrices. These indicatrices measure the flexion and skewness of a map, and are useful for determining whether features are faithfully reproduced on a particular projection.
MAPPINGS III is a general purpose astrophysical plasma modelling code. It is principally intended to predict emission line spectra of medium and low density plasmas subjected to different levels of photoionization and ionization by shockwaves. MAPPINGS III tracks up to 16 atomic species in all stages of ionization, over a useful range of 10^2 to 10^8 K. It treats spherical and plane-parallel geometries in equilibrium and time-dependent models. MAPPINGS III is useful for computing models of HI and HII regions, planetary nebulae, novae, supernova remnants, Herbig-Haro shocks, active galaxies, the intergalactic medium and the interstellar medium in general. The present version of MAPPINGS III is a large FORTRAN program that runs with a simple TTY interface for historical and portability reasons. A newer version of this software, MAPPINGS V (ascl:1807.005), is available.
MAPPINGS V is an update of the MAPPINGS code (ascl:1306.008) and provides new cooling function computations for optically thin plasmas based on the greatly expanded atomic data of the CHIANTI 8 database. The number of cooling and recombination lines has been expanded from ~2000 to over 80,000, and temperature-dependent spline-based collisional data have been adopted for the majority of transitions. The expanded atomic data set provides improved modeling of both thermally ionized and photoionized plasmas; the code is now capable of predicting detailed X-ray spectra of nonequilibrium plasmas over the full nonrelativistic temperature range, increasing its utility in cosmological simulations, in modeling cooling flows, and in generating accurate models for the X-ray emission from shocks in supernova remnants.
MAPS (Multi-frequency Angular Power Spectrum) extracts two-point statistical information from Epoch of Reionization (EoR) signals observed in three dimensions, with two directions on the sky and the wavelength (or frequency) constituting the third dimension. Rather than assume that the signal has the same statistical properties in all three directions, as the spherically averaged power spectrum (SAPS) does, MAPS does not make these assumptions, making it more natural for radio interferometric observations than SAPS.
The visualization tool MARDIGRAS (Mass-Radius DIaGRAm with Sliders) enables simple and intuitive manipulation of mass-radius relationships (also known as iso-composition curves) using interactive sliders. It infers composition based on mass and radius (and other parameters). As a result, it requires use of actual measurements of mass and radius; values that are upper/lower limits, derived from empirical mass-radius relations, or are somewhat controversial should not be used. MARDIGRAS screen captures can be used for general scientific communication but are not of suitable quality for article publication.
Margarine computes marginal Bayesian statistics given a set of samples from an MCMC or nested sampling run. Specifically, the code calculates marginal Kullback-Leibler divergences and Bayesian dimensionalities using Masked Autoregressive Flows and Kernel Density Estimators to learn and sample posterior distributions of signal subspaces in high dimensional data models, and determines the properties of cosmological subspaces, such as their log-probability densities and how well constrained they are, independent of nuisance parameters. Margarine thus allows for direct and specific comparison of the constraining ability of different experimental approaches, which can in turn lead to improvements in experimental design.
MARGE (Machine learning Algorithm for Radiative transfer of Generated Exoplanets) generates exoplanet spectra across a defined parameter space, processes the output, and trains, validates, and tests machine learning models as a fast approximation to radiative transfer. It uses BART (ascl:1608.004) for spectra generation and modifies BART’s Bayesian sampler (MC3, ascl:1610.013) with a random uniform sampler to propose models within a defined parameter space. More generally, MARGE provides a framework for training neural network models to approximate a forward, deterministic process.
MARS, the standard analysis package of the MAGIC collaboration, is a ROOT-based code written in C++ that includes all the algorithms necessary to transform the raw data recorded by the MAGIC gamma-ray Cherenkov telescopes into information about the physics parameters of the observed targets. With the commissioning of the second MAGIC telescope situated close to MAGIC-I, MARS was upgraded to perform stereoscopic reconstruction of the detected atmospheric showers. The package provides methods for extracting the basic shower parameters, together with tools for background discrimination and for estimating gamma-ray source spectra.
MarsLux generates illumination maps of Mars from a Digital Terrain Model (DTM), permitting users to investigate in detail the illumination conditions on Mars based on its topography and the relative position of the Sun. MarsLux consists of two Python codes, SolaPar and MarsLux. SolaPar calculates the matrix of solar parameters for a single date or a range of dates. The MarsLux code generates the illumination maps using the same DTM and the files generated by SolaPar. The resulting illumination maps show areas that are fully illuminated, areas in total shadow, and areas with partial shade, and can be used for geomorphological studies to examine gullies, thermal weathering, or mass wasting processes as well as for producing energy budget maps for future exploration missions.
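The geometric heart of such a map is the cosine of the solar incidence angle, computed from DTM surface normals and the Sun's azimuth and elevation. The sketch below shows that step only (cast shadows from distant topography, which full illumination maps must also handle, are ignored), with made-up inputs:

    import numpy as np

    def illumination_map(dtm, pixel_size, sun_az_deg, sun_el_deg):
        """Cosine of the solar incidence angle per DTM cell (no cast shadows).
        dtm: 2D elevation array (m); pixel_size: cell size (m)."""
        dz_dy, dz_dx = np.gradient(dtm, pixel_size)
        az, el = np.radians(sun_az_deg), np.radians(sun_el_deg)
        # Sun unit vector (x east, y north, z up); surface normal (-zx, -zy, 1)
        sun = np.array([np.cos(el) * np.sin(az),
                        np.cos(el) * np.cos(az),
                        np.sin(el)])
        cos_i = (-dz_dx * sun[0] - dz_dy * sun[1] + sun[2]) \
                / np.sqrt(dz_dx**2 + dz_dy**2 + 1.0)
        return np.clip(cos_i, 0.0, None)   # zero where facing away from the Sun

    dtm = np.random.default_rng(0).random((100, 100)) * 50.0   # toy terrain
    illum = illumination_map(dtm, pixel_size=200.0,
                             sun_az_deg=135.0, sun_el_deg=20.0)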
MARTINI (Mock APERTIF-like Radio Telescope Interferometry of the Neutral ISM) creates synthetic resolved HI line observations (data cubes) of smoothed-particle hydrodynamics simulations of galaxies. The various aspects of the mock-observing process are divided logically into sub-modules handling the data cube, source, beam, noise, spectral model and SPH kernel. MARTINI is object-oriented: each sub-module provides a class (or classes) which can be configured as desired. For most sub-modules, base classes are provided to allow for straightforward customization. Instances of each sub-module class are then given as parameters to the Martini class. A mock observation is then constructed by calling a handful of functions to execute the desired steps in the mock-observing process.
Marvin searches, accesses, and visualizes data from the Mapping Nearby Galaxies at Apache Point Observatory (MaNGA) survey. Written in Python, it provides tools for easy and efficient interaction with the MaNGA data via local files, files retrieved from the Science Archive Server, or data directly grabbed from the database. The tools come mainly in the form of convenience functions and classes for interacting with the data. Also available is Marvin-web, a web app that offers an easily accessible interface for searching the MaNGA data and visually exploring individual MaNGA galaxies or the entire sample, along with a powerful query functionality that uses the API to query the MaNGA databases and return the search results to your Python session. Marvin-API is the critical link that allows Marvin-tools and Marvin-web to interact with the databases, which enables users to harness the statistical power of the MaNGA data set.
MARX (Model of AXAF Response to X-rays) is a suite of programs designed to enable the user to simulate the on-orbit performance of the Chandra satellite. MARX provides a detailed ray-trace simulation of how Chandra responds to a variety of astrophysical sources and can generate standard FITS events files and images as output. It contains models for the HRMA mirror system onboard Chandra as well as the HETG and LETG gratings and all focal plane detectors.
MARXS (Multi-Architecture-Raytrace-Xraymission-Simulator) simulates X-ray observatories. Primarily designed to simulate X-ray instruments on astronomical X-ray satellites and sounding rocket payloads, it can also be used to ray-trace experiments in the laboratory. MARXS performs polarization Monte-Carlo ray-trace simulations from a source (astronomical or lab) through a collection of optical elements such as mirrors, baffles, and gratings to a detector.
MARZ analyzes objects and produces high quality spectroscopic redshift measurements. Spectra not matched correctly by the automatic algorithm can be redshifted manually by cycling through automatic results, comparing manually against templates, or marking spectral features. The software has an intuitive interface and powerful automatic matching capabilities, can be run interactively or from the command line, and runs as a Web application. MARZ can be run on a local server; it is also available for use on a public server.
Mask galaxy is an automatic machine learning pipeline for detection, segmentation and morphological classification of galaxies. The model is based on the Mask R-CNN Deep Learning architecture. This model of instance segmentation also performs image segmentation at the pixel level, and has shown a Mean Average Precision (mAP) of 0.93 in morphological classification of spiral or elliptical galaxies.
maskfill inward extrapolates edge pixels just outside masked regions, using iterative median filtering and the full information contained in the edge pixels. This provides seamless transitions between masked pixels and good pixels, and allows high fidelity reconstruction of gaps in continuous narrow features. An image and a mask are the only required inputs.
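The idea can be sketched as follows (a simplified pass-by-pass version, not maskfill's exact implementation): in each pass, every masked pixel that touches valid data is replaced by the median of its valid neighbors, so the fill proceeds inward one shell at a time:

    import numpy as np

    def median_fill(image, mask, window=3):
        """Fill masked pixels inward with the median of valid neighbors.
        image: 2D array; mask: boolean array, True where pixels are bad."""
        filled = image.astype(float)
        valid = ~mask
        half = window // 2
        while not valid.all():
            new_valid = valid.copy()
            for y, x in zip(*np.where(~valid)):
                ys = slice(max(y - half, 0), y + half + 1)
                xs = slice(max(x - half, 0), x + half + 1)
                neighbors = filled[ys, xs][valid[ys, xs]]
                if neighbors.size:           # only edge pixels fill this pass
                    filled[y, x] = np.median(neighbors)
                    new_valid[y, x] = True
            if new_valid.sum() == valid.sum():
                break                        # unreachable masked region
            valid = new_valid
        return filled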
MasQU extracts polarization information in the CMB by reducing contamination from so-called "ambiguous modes" on a masked sky, which contain leakage from the larger E-mode signal, by utilizing derivative operators on the real-space Stokes Q and U parameters. In particular, the package can perform finite differences on masked, irregular grids and is applied to a semi-regular spherical pixellization, the HEALPix grid. The formalism reduces to the known finite-difference solutions in the case of a regular grid. On a masked sphere, the software achieves a considerable reduction in B-mode noise from limited sky coverage.
MASSCLEAN is a sophisticated and robust stellar cluster image and photometry simulation package. This package is able to create color-magnitude diagrams and standard FITS images in any of the traditional optical and near-infrared bands based on cluster characteristics input by the user, including but not limited to distance, age, mass, radius and extinction. At the limit of very distant, unresolved clusters, the integrated colors created in MASSCLEAN have been checked against those from other simple stellar population (SSP) models with consistent results. Because the algorithm populates the cluster with a discrete number of tenable stars, it can be used as part of a Monte Carlo method to derive the probabilistic range of characteristics (integrated colors, for example) consistent with a given cluster mass and age.
massconvert, written in Fortran, provides driver and fitting routines for converting halo mass definitions from one spherical overdensity to another assuming an NFW density profile. In surveys that probe ever lower cluster masses and temperatures, sample variance is generally comparable to or greater than shot noise and thus cannot be neglected in deriving precision cosmological constraints; massconvert offers an accurate fitting formula for the conversion between different definitions of halo mass.
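massconvert itself provides fitting formulas, but the underlying conversion can be sketched directly: given M_200 and a concentration, solve for the radius where the NFW enclosed mass matches the new overdensity threshold. The parameter values below are illustrative:

    import numpy as np
    from scipy.optimize import brentq

    def mu(x):
        """NFW enclosed-mass shape: ln(1 + x) - x / (1 + x)."""
        return np.log1p(x) - x / (1.0 + x)

    def convert_mass(m200, c200, delta_new, rho_ref):
        """Convert M_200 (relative to rho_ref) to M_{delta_new}, assuming NFW."""
        r200 = (3.0 * m200 / (4.0 * np.pi * 200.0 * rho_ref)) ** (1.0 / 3.0)
        rs = r200 / c200
        # Find r where the NFW mass equals the overdensity-threshold mass
        f = lambda r: (m200 * mu(r / rs) / mu(c200)
                       - 4.0 * np.pi * delta_new * rho_ref * r**3 / 3.0)
        r_new = brentq(f, 1e-2 * r200, 10.0 * r200)
        return 4.0 * np.pi * delta_new * rho_ref * r_new**3 / 3.0

    rho_crit = 2.775e11 * 0.7**2    # Msun / Mpc^3 for h = 0.7
    m500 = convert_mass(1e14, c200=5.0, delta_new=500.0, rho_ref=rho_crit)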
massmappy recovers convergence mass maps on the celestial sphere from weak lensing cosmic shear observations. It relies on SSHT (ascl:2207.034) and HEALPix (ascl:1107.018) to handle sampled data on the sphere. The spherical Kaiser-Squires estimator is implemented.
maszcal calibrates the observable-mass relation for galaxy clusters, with a focus on the thermal Sunyaev-Zeldovich signal's relation to mass. maszcal explicitly models baryonic matter density profiles, differing from most previous approaches that treat galaxy clusters as purely dark matter. To do this, it uses a generalized Navarro-Frenk-White (GNFW) density to represent the baryons, while using the more typical NFW profile to represent dark matter.
MATCH matches up items in two different lists, which can have two different systems of coordinates. The program allows the two sets of coordinates to be related by a linear, quadratic, or cubic transformation. MATCH was designed and written to work on lists of stars and other astronomical objects but can be applied to other types of data. In order to match two lists of N points, the main algorithm calls for O(N^6) operations; though not the most efficient choice, it does allow for arbitrary translation, rotation, and scaling.
A discrete Point Spread Function (PSF) is a sampled version of a continuous two-dimensional PSF; the shape information about the photon scattering pattern of a discrete PSF is typically encoded using a numerical table (matrix) or a FITS image file. MATPHOT shifts discrete PSFs within an observational model using a 21-pixel-wide damped sinc function, and computes position partial derivatives using a five-point numerical differentiation formula. MATPHOT achieves accurate and precise stellar photometry and astrometry of undersampled CCD observations by using supersampled discrete PSFs that are sampled two, three, or more times more finely than the observational data.
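A one-dimensional sketch of the damped-sinc shift follows; the Gaussian damping scale here is a hypothetical choice, not MATPHOT's documented value:

    import numpy as np

    def damped_sinc_kernel(shift, width=21, damp=3.25):
        """21-pixel sinc interpolation kernel, Gaussian-damped, for a
        fractional-pixel shift; `damp` is an illustrative damping scale."""
        x = np.arange(width) - width // 2 - shift
        return np.sinc(x) * np.exp(-((x / damp) ** 2))

    def shift_psf_1d(psf, shift):
        """Shift a sampled 1D PSF by a fractional pixel via convolution."""
        kernel = damped_sinc_kernel(shift)
        kernel /= kernel.sum()                # preserve total flux
        return np.convolve(psf, kernel, mode="same")

    psf = np.exp(-0.5 * ((np.arange(64) - 32) / 2.0) ** 2)  # toy Gaussian PSF
    shifted = shift_psf_1d(psf, 0.37)                       # 0.37-pixel shift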
The injection-recovery MATRIX (Multi-phAse Transits Recovery from Injected eXoplanets) Toolkit creates grids of scenarios with a set of periods, radii, and epochs of synthetic transiting exoplanet signals in a provided light curve. Typical injection-recovery executions consist of 2-dimensional scenarios, in which only one epoch (random or hardcoded) is used for each period and radius, which may reduce accuracy. MATRIX performs multi-phase analyses needing only a few parameters in a configuration file and running one line of code.
matvis simulates radio interferometric visibilities at the necessary scale with both CPU and GPU implementations. It is matrix-based and applicable to wide field-of-view instruments such as the Hydrogen Epoch of Reionization Array (HERA) and the Square Kilometre Array (SKA), as it does not make any approximations of the visibility integral (such as the flat-sky approximation). The only approximation made is that the sky is a collection of point sources, which is valid for sky models that intrinsically consist of point sources, but is an approximation for diffuse sky models. The matvis matrix-based algorithm is fast and scales well to large numbers of antennas. The code supports both CPU and GPU implementations as drop-in replacements for each other and also supports both dense and sparse sky models.
maxsmooth fits derivative constrained functions (DCF) such as Maximally Smooth Functions (MSFs) to data sets. MSFs are functions for which there are no zero crossings in derivatives of order m >= 2 within the domain of interest. They are designed to prevent the loss of signals when fitting out dominant smooth foregrounds or large magnitude signals that mask signals of interest. Here "smooth" means that the foregrounds follow power law structures and do not feature turning points in the band of interest. maxsmooth uses quadratic programming implemented with CVXOPT (ascl:2008.017) to fit data subject to a fixed linear constraint, Ga <= 0, where the product Ga is a matrix of derivatives. The code tests the constraint multiplied by a positive or negative sign and can test every available sign combination, but by default it implements a sign-navigating algorithm.
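For a polynomial DCF the fit is an ordinary quadratic program. The sketch below sets up one sign choice of the Ga <= 0 constraint for CVXOPT's qp solver; the basis, order, and data are illustrative rather than maxsmooth's defaults:

    import numpy as np
    from cvxopt import matrix, solvers

    solvers.options['show_progress'] = False

    x = np.linspace(1.0, 2.0, 100)
    y = 5.0 * x ** -2.5                   # toy smooth foreground
    N = 5                                 # polynomial order
    phi = np.vstack([x ** i for i in range(N)]).T   # design matrix

    def deriv_matrix(m):
        """Rows evaluate the m-th derivative of the polynomial at each x."""
        cols = []
        for i in range(N):
            c = np.prod(np.arange(i, i - m, -1)) if i >= m else 0.0
            cols.append(c * x ** max(i - m, 0))
        return np.vstack(cols).T

    G = np.vstack([deriv_matrix(m) for m in range(2, N)])  # Ga <= 0, one sign
    sol = solvers.qp(matrix(2.0 * phi.T @ phi), matrix(-2.0 * phi.T @ y),
                     matrix(G), matrix(np.zeros(G.shape[0])))
    a = np.array(sol["x"]).ravel()        # constrained least-squares coefficients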
Mayavi provides general-purpose 3D scientific visualizations. It offers easy interactive tools for data visualization that fit with the scientific user's workflow. Mayavi provides several entry points: a full-blown interactive application; a Python library with both a MATLAB-like interface focused on easy scripting and a feature-rich object hierarchy; widgets associated with these objects for assembling in a domain-specific application, and plugins that work with a general purpose application-building framework.
MAYONNAISE (Morphological Analysis Yielding separated Objects iN Near infrAred usIng Sources Estimation), or MAYO for short, is a pipeline for exoplanet and disk high-contrast imaging from ADI datasets. The pipeline is mostly automated; the package also loads the data and injects synthetic data if needed. MAYONNAISE parameters are written in a json file called parameters_algo.json and placed in a working_directory.
MBASC (Multi-Band AGN-SFG Classifier) classifies sources as Active Galactic Nuclei (AGNs) and Star Forming Galaxies (SFGs). The algorithm is based on the light gradient-boosting machine ML technique. MBASC can use a wide range of multi-wavelength data and redshifts to predict a classification for sources.
Mbb_emcee fits modified blackbodies to photometry data using an affine invariant MCMC. It has a large number of options which, for example, allow computation of the IR luminosity or dust mass as part of the fit. Carrying out a fit produces an HDF5 output file containing the results, which can either be read directly or read back into an mbb_results object for analysis. Upper and lower limits can be imposed, as well as Gaussian priors on the model parameters; these additions are useful for analyzing poorly constrained data. In addition to the standard Python packages scipy, numpy, and cython, mbb_emcee requires emcee (ascl:1303.002), Astropy (ascl:1304.002), h5py, and, for unit tests, nose.
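The underlying model is a greybody, S_nu proportional to (1 - e^(-tau)) B_nu(T) with tau = (nu/nu0)^beta, which reduces to nu^beta B_nu(T) when optically thin. A sketch with illustrative parameter values:

    import numpy as np

    H, K, C = 6.626e-34, 1.381e-23, 2.998e8   # SI constants

    def modified_blackbody(wave_um, T, beta, lambda0_um=200.0, norm=1.0):
        """Greybody: norm * (1 - exp(-tau)) * B_nu(T), tau = (nu / nu0)^beta."""
        nu = C / (wave_um * 1e-6)
        nu0 = C / (lambda0_um * 1e-6)
        tau = (nu / nu0) ** beta
        b_nu = 2.0 * H * nu**3 / C**2 / np.expm1(H * nu / (K * T))
        return norm * -np.expm1(-tau) * b_nu   # -expm1(-tau) = 1 - e^-tau

    wave = np.array([250.0, 350.0, 500.0, 850.0])  # microns
    model = modified_blackbody(wave, T=35.0, beta=1.8)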
Magnification bias estimation estimates the magnification bias for a galaxy sample with a complex photometric selection, using SDSS BOSS as the worked example. The code works for the CMASS and the LOWZ, z1 and z3 samples. A template for applying the approach to other surveys is included; requirements include a galaxy catalog that provides the magnitudes used for photometric selection and the exact conditions used for that selection.
MOLSCAT, which supersedes MOLSCAT version 14 (ascl:1206.004), performs non-reactive quantum scattering calculations for atomic and molecular collisions using coupled-channel methods. Simple atom-molecule and molecule-molecule collision types are coded internally and additional ones may be handled with plug-in routines. Plug-in routines may include external magnetic, electric or photon fields (and combinations of them).
The package also includes BOUND, which performs calculations of bound-state energies in weakly bound atomic and molecular systems using coupled-channel methods, and FIELD, a development of BOUND that locates values of external fields at which a bound state exists with a specified energy. Though the three programs have different applications, they use closely related methods, share many subroutines, and are released with a single code base.
MBProj2 obtains thermodynamic profiles of galaxy clusters. It forward-models cluster X-ray surface brightness profiles in multiple bands, optionally assuming hydrostatic equilibrium. The code is a set of Python classes the user can use or extend. When modelling a cluster assuming hydrostatic equilibrium, the user chooses a form for the density profile (e.g. binning or a beta model), the metallicity profile, and the dark matter profile (e.g. NFW). If hydrostatic equilibrium is not assumed, a temperature profile model is used instead of the dark matter profile. The code uses the emcee Markov Chain Monte Carlo code (ascl:1303.002) to sample the model parameters, using these to produce chains of thermodynamic profiles.
MC-SPAM (Monte-Carlo Synthetic-Photometry/Atmosphere-Model) generates limb-darkening coefficients from models that are comparable to transit photometry; it extends the original SPAM algorithm by Howarth (2011) by taking into consideration the uncertainty on the stellar and transit parameters of the system under analysis.
MC3 (Multi-core Markov-chain Monte Carlo) is a Bayesian statistics tool that can be executed from the shell prompt or interactively through the Python interpreter with single- or multiple-CPU parallel computing. It offers Markov-chain Monte Carlo (MCMC) posterior-distribution sampling for several algorithms, Levenberg-Marquardt least-squares optimization, and uniform non-informative, Jeffreys non-informative, or Gaussian-informative priors. MC3 can share the same value among multiple parameters and fix the value of parameters to constant values, and offers Gelman-Rubin convergence testing and correlated-noise estimation with time-averaging or wavelet-based likelihood estimation methods.
MC3D is a 3D continuum radiative transfer code; it is based on the Monte-Carlo method and solves the radiative transfer problem self-consistently. It is designed for the simulation of dust temperatures in arbitrary geometric configurations and the resulting observables: spectral energy distributions, wavelength-dependent images, and polarization maps. The main objective is the investigation of "dust-dominated" astrophysical systems such as young stellar objects surrounded by an optically thick circumstellar disk and an optically thin(ner) envelope, debris disks around more evolved stars, asymptotic giant branch stars, the dust component of the interstellar medium, and active galactic nuclei.
MCAL calculates high precision metallicities and effective temperatures for M dwarfs; the method behaves properly down to R = 40 000 and S/N = 25, and results were validated against a sample of stars in common with SOPHIE high resolution spectra.
MCALF (Multi-Component Atmospheric Line Fitting) accurately constrains velocity information from spectral imaging observations using machine learning techniques. It is useful for solar physicists trying to extract line-of-sight (LOS) Doppler velocity information from spectral imaging observations (Stokes I measurements) of the Sun. A toolkit is provided that can be used to define a spectral model optimized for a particular dataset. MCALF is particularly suited for extracting velocity information from spectral imaging observations where the individual spectra can contain multiple spectral components. Such multiple components are typically present when active solar phenomenon occur within an isolated region of the solar disk. Spectra within such a region will often have a large emission component superimposed on top of the underlying absorption spectral profile from the quiescent solar atmosphere.
MCCD (Multi-CCD) generates a Point Spread Function (PSF) model based on observations of stars in the field of view. After defining the MCCD model parameters and running and validating the training, the model can recover the PSF at any position in the field of view. Written in Python, MCCD also calculates various statistics and can plot a random test star and its model reconstruction.
McFine performs complex, multi-component hyperfine spectra fitting in astronomical data. It turns line intensities into gas conditions using a fully automated Bayesian method. Written in Python, the code uses Markov chain Monte Carlo (MCMC) to characterize model degeneracies. It handles local thermodynamic equilibrium (LTE) and radiative-transfer (RT) models and can fit individual spectra and data cubes; given a data cube, it can also use the neighboring information to attempt a better fit. McFine also fits the minimum number of distinct components to avoid overfitting.
mcfit computes integral transforms, inverse transforms without analytic inversion, and integral kernels as derivatives. It can also transform an input array along any axis, output the matrix form, and is easily extensible to other kernels.
MCFOST is a 3D continuum and line radiative transfer code based on a hybrid Monte Carlo and ray-tracing method. It is mainly designed to study the circumstellar environment of young stellar objects, but has been used for a wide range of astrophysical problems. The calculations are done exactly within the limitations of the Monte Carlo noise and machine precision, i.e., no approximations are used in the calculations. The code has been strongly optimized for speed.
MCFOST is primarily designed to study protoplanetary disks. The code can reproduce most of the observations of disks, including SEDs, scattered light images, IR and mm visibilities, and atomic and molecular line maps. As the Monte Carlo method is generic, any complex structure can be handled by MCFOST and its use can be extended to other astrophysical objects; for instance, calculations have successfully been performed on infalling envelopes and AGB stars. MCFOST also includes a non-LTE line transfer module, and NLTE level populations are obtained via iterations between Monte Carlo radiative transfer calculations and statistical equilibrium.
McLuster is an open source code that can be used either to set up initial conditions for N-body computations or to generate artificial star clusters for direct investigation. There are two different versions of the code: a basic version for generating all kinds of unevolved clusters (in the following called mcluster) and one for setting up evolved stellar populations at a given age. The former is completely contained in the C file main.c. The latter (dubbed mcluster_sse) is more complex and requires additional FORTRAN routines, namely the Single-Star Evolution (SSE) routines by Hurley, Pols & Tout (ascl:1303.015) that are provided with the McLuster code.
Monte Carlo Merger Analysis Code (MCMAC) aids in the study of merging clusters. It takes observed priors on each subcluster's mass, radial velocity, and projected separation, draws randomly from those priors, and uses them in an analytic model to get posterior PDFs for merger dynamic properties of interest (e.g., collision velocity, time since collision).
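The Monte Carlo pattern is simple: draw from the observed priors, push each draw through the analytic model, and summarize the outputs. In this sketch the analytic model is a trivial stand-in (energy conservation for two point masses falling from rest), not MCMAC's actual dynamical model, and all priors are made up:

    import numpy as np

    G = 4.301e-9                     # Mpc Msun^-1 (km/s)^2
    rng = np.random.default_rng(7)
    n = 100_000

    # Draws from observed priors on the subcluster masses and separation
    m1 = rng.normal(5e14, 1e14, n)            # Msun
    m2 = rng.normal(2e14, 5e13, n)            # Msun
    d = np.abs(rng.normal(1.0, 0.2, n))       # Mpc

    # Stand-in analytic model: free fall from rest at 3 Mpc separation
    ok = d < 3.0
    v_coll = np.sqrt(2.0 * G * (m1[ok] + m2[ok]) * (1.0 / d[ok] - 1.0 / 3.0))

    print(np.percentile(v_coll, [16, 50, 84]))  # posterior summary (km/s)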
MCMCDiagnostics contains two diagnostics, written in Julia, for Markov Chain Monte Carlo. The first is potential_scale_reduction(chains...), which estimates the potential scale reduction factor, also known as Rhat, for multiple scalar chains. The second, effective_sample_size(chain), calculates the effective sample size for scalar chains. These diagnostics are intended as building blocks for use by other libraries.
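The first diagnostic implements the standard Gelman-Rubin statistic; a language-neutral sketch of that formula (in Python here, not the package's Julia code) is:

    import numpy as np

    def potential_scale_reduction(*chains):
        """Gelman-Rubin Rhat for several equal-length scalar chains;
        values near 1 suggest the chains sample the same distribution."""
        chains = [np.asarray(c, dtype=float) for c in chains]
        n = len(chains[0])
        means = np.array([c.mean() for c in chains])
        B = n * means.var(ddof=1)                       # between-chain variance
        W = np.mean([c.var(ddof=1) for c in chains])    # within-chain variance
        var_hat = (n - 1) / n * W + B / n
        return np.sqrt(var_hat / W)

    rng = np.random.default_rng(0)
    print(potential_scale_reduction(rng.normal(size=1000),
                                    rng.normal(size=1000)))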
MCMCI (Markov chain Monte Carlo + isochrones) characterizes a whole exoplanetary system directly by modeling the star and its planets simultaneously. The code, written in Fortran, uses light curves and basic stellar parameters with a transit analysis algorithm that interacts with stellar evolutionary models, thus using both model-dependent and empirical age indicators to characterize the system.
MCMole3D (Monte-Carlo MOlecular Line Emission) simulates the 3D molecular cloud emission in the Milky Way. In particular, it can simulate both the unpolarized and polarized emission coming from the first rotational line of Carbon Monoxide (CO, J=1-0). MCMole3D seeks to compare the simulated emission with that observed by full sky surveys from the Planck satellite.
The McGill Planar Hydrogen Atmosphere Code (McPHAC) v1.1 calculates the hydrostatic equilibrium structure and emergent spectrum of an unmagnetized hydrogen atmosphere in the plane-parallel approximation at surface gravities appropriate for neutron stars. McPHAC incorporates several improvements over previous codes for which tabulated model spectra are available: (1) Thomson scattering is treated anisotropically, which is shown to result in a 0.2%-3% correction in the emergent spectral flux across the 0.1-5 keV passband; (2) the McPHAC source code is made available to the community, allowing it to be scrutinized and modified by other researchers wishing to study or extend its capabilities; and (3) the numerical uncertainty resulting from the discrete and iterative solution is studied as a function of photon energy, indicating that McPHAC is capable of producing spectra with numerical uncertainties <0.01%. The accuracy of the spectra may at present be limited to ~1%, but McPHAC enables researchers to study the impact of uncertain inputs and additional physical effects, thereby supporting future efforts to reduce those inaccuracies. Comparison of McPHAC results with spectra from one of the previous model atmosphere codes (NSA) shows agreement to ≲1% near the peaks of the emergent spectra. However, in the Wien tail a significant deficit of flux in the spectra of the previous model is revealed, determined to be due to the previous work not considering large enough optical depths at the highest photon frequencies. The deficit is most significant for spectra with T_eff < 10^5.6 K, though even there it may not be of much practical importance for most observations.
MCPM extracts K2 photometry in dense stellar regions; the code is a modification and extension of the K2-CPM package (ascl:2107.024), which was developed for less-crowded fields. MCPM uses the pixel response function together with accurate astrometric grids, combining signals from a few pixels and simultaneously fitting an astrophysical model, to produce more precise extracted K2 photometry.
MCRaT (Monte Carlo Radiation Transfer) analyzes the radiation signature expected from astrophysical outflows. MCRaT injects photons into a FLASH (ascl:1010.082) simulation and individually propagates and Compton scatters the photons through the fluid until the end of the simulation. This process of injection and propagation repeats a user-specified number of times until there are no more photons to be injected. Users can then construct light curves and spectra from the MCRaT results. The hydrodynamic simulations used with this version of MCRaT must be in 2D; however, the photon propagation and scattering is done in 3D by assuming cylindrical symmetry. Additionally, MCRaT uses the full Klein–Nishina cross section including the effects of polarization, which can be fully simulated in the code. MCRaT works with FLASH hydrodynamic simulations and PLUTO (ascl:1010.045) AMR simulations, in both 2D spherical (r, θ) and 2D Cartesian ((x,y) and (r,z)) coordinates.
MCRGNet (Morphological Classification of Radio Galaxy Network) classifies radio galaxies of different morphologies. It is based on a convolutional neural network (CNN), which is trained and applied under a three-step framework: (1) pretraining the network, unsupervised, with unlabeled samples; (2) fine-tuning the pretrained network parameters, supervised, with labeled samples; and (3) classifying a new radio galaxy with the trained network. The code uses a dichotomous tree classifier composed of cascaded CNN-based subclassifiers.
McScatter illustrates a method of combining stellar dynamics with stellar evolution. The method is intended for elaborate applications, especially the dynamical evolution of rich star clusters. The dynamics is based on binary scattering in a multi-mass field of stars with uniform density and velocity dispersion, using the scattering cross section of Giersz (MNRAS, 2001, 324, 218-30).
MCSED models the optical, near-infrared and infrared spectral energy distribution (SED) of galactic systems. Its modularity and options make it flexible, able to address the varying physical properties of galaxies across cosmic time and environment, and easy to adapt to changes in our understanding of stellar evolution, the details of mass loss, and the products of binary evolution through the substitution or addition of new datasets or algorithms. MCSED is built to fit a galaxy's full SED, from the far-UV to the far-IR. Among other physical processes, it can model continuum emission from stars, continuum and line emission from ionized gas, attenuation from dust, and mid- and far-IR emission from dust and polycyclic aromatic hydrocarbons (PAHs). MCSED performs its calculations by creating a complex stellar population (CSP) out of a linear combination of simple stellar populations (SSPs) using an efficient Markov chain Monte Carlo algorithm. It is very quick and takes advantage of parallel processing.
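The CSP-from-SSPs idea can be illustrated as follows (a minimal sketch, not MCSED's actual implementation; the SSP grid, weights, and photometry arrays are hypothetical):

    import numpy as np

    def csp_flux(ssp_grid, weights):
        # ssp_grid: (n_ssp, n_bands) model fluxes; weights: (n_ssp,), >= 0
        return weights @ ssp_grid

    def chi2(weights, ssp_grid, obs_flux, obs_err):
        # goodness of fit of the weighted SSP combination to the observed SED
        model = csp_flux(ssp_grid, weights)
        return np.sum(((obs_flux - model) / obs_err) ** 2)

An MCMC sampler would then explore the weights together with parameters such as dust attenuation and ionized-gas emission.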
Spearman's rank correlation test is commonly used in astronomy to discern whether two variables are correlated. Unlike most other quantities quoted in the astronomical literature, the Spearman's rank correlation coefficient is generally quoted with no attempt to estimate the error on its value. This code implements a number of Monte Carlo based methods to estimate the uncertainty on the Spearman's rank correlation coefficient.
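One such Monte Carlo approach is bootstrap resampling of the data pairs (a minimal sketch of the general technique, not necessarily the exact set of methods this code implements):

    import numpy as np
    from scipy.stats import spearmanr

    def spearman_bootstrap(x, y, n_boot=10000, seed=0):
        # resample (x, y) pairs with replacement and collect the rank
        # correlation of each realization; its spread estimates the error
        rng = np.random.default_rng(seed)
        n = len(x)
        rhos = np.empty(n_boot)
        for i in range(n_boot):
            idx = rng.integers(0, n, n)
            rho, _ = spearmanr(x[idx], y[idx])
            rhos[i] = rho
        return np.mean(rhos), np.std(rhos)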
ME(SSY)**2 stands for "Monte-carlo Experiments with Spherically SYmmetric Stellar SYstems." This code simulates the long-term evolution of spherical clusters of stars; it was devised specifically to treat dense galactic nuclei. It is based on the pioneering Monte Carlo scheme proposed by Hénon in the 1970s and includes all relevant physical ingredients (two-body relaxation, stellar mass spectrum, collisions, tidal disruption, etc.). It is essentially a Monte Carlo resolution of the Fokker-Planck equation. It can cope with any stellar mass spectrum or velocity distribution. Being a particle-based method, it also allows one to take stellar collisions into account in a very realistic way. This code, featuring the most important physical processes, allows million-particle simulations spanning a Hubble time in a few CPU days on standard personal computers, and provides a wealth of data rivaled only by N-body simulations. The current version of the software requires routines from "Numerical Recipes in Fortran 77" (http://www.nrbook.com/a/bookfpdf.php).
This site offers a collection of codes and fundamental references on mean motion resonances.
Meanoffset performs astronomical image alignment. The code uses the means of the rows and columns of an original image for alignment, finding the optimal offset corresponding to the maximum similarity by comparing different offsets between images. Similarity is evaluated by the standard deviation of the quotient of the two profiles divided by its mean. The code is fast and robust.
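A minimal sketch of such an offset search (hypothetical code, not Meanoffset itself; the coefficient-of-variation similarity below is one reading of the description):

    import numpy as np

    def best_offset(profile_a, profile_b, max_shift=50):
        # profile_a, profile_b: equal-length 1D arrays of row (or column) means
        best, best_score = 0, np.inf
        n = len(profile_a)
        for s in range(-max_shift, max_shift + 1):
            a = profile_a[max(0, s):n + min(0, s)]
            b = profile_b[max(0, -s):n + min(0, -s)]
            q = a / b                          # quotient of overlapping parts
            score = np.std(q) / np.mean(q)     # lower score = more similar
            if score < best_score:
                best, best_score = s, score
        return best

    # dy = best_offset(img1.mean(axis=1), img2.mean(axis=1))  # row offset
    # dx = best_offset(img1.mean(axis=0), img2.mean(axis=0))  # column offset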
measure_extinction measures extinction due to dust absorbing photons or scattering photons out of the line-of-sight. Extinction applies to the case of a star seen behind a foreground screen of dust. This package provides the tools to measure dust extinction curves using observations of two effectively identical stars, differing only in that one is seen through more dust than the other.
The Mechanic package is a numerical framework for dynamical astronomy, designed to help in massive numerical simulations through efficient task management and unified data storage. The code is built on top of the Message Passing Interface (MPI) and Hierarchical Data Format (HDF5) standards and uses the Task Farm approach to manage numerical tasks. It relies on a core-module approach: the numerical problem implemented in the user-supplied module is separated from the host code (core), which handles basic setup, data storage, and communication between nodes in a computing pool. It has been tested on large CPU clusters as well as desktop computers. Mechanic may be used for computing dynamical maps, data optimization, or numerical integration.
We describe an automated method for assigning the most probable physical parameters to the components of an eclipsing binary, using only its photometric light curve and combined colors. With traditional methods, one attempts to optimize a multi-parameter model over many iterations so as to minimize the chi-squared value. We suggest an alternative method, in which one selects pairs of coeval stars from a set of theoretical stellar models and compares their simulated light curves and combined colors with the observations. This approach greatly reduces the parameter space over which one needs to search and allows one to estimate the components' masses, radii and absolute magnitudes without spectroscopic data. We have implemented this method in an automated program using published theoretical isochrones and limb-darkening coefficients. Since it is easy to automate, this method lends itself to systematic analyses of datasets consisting of photometric time series of large numbers of stars, such as those produced by OGLE, MACHO, TrES, HAT, and many other surveys.
MeerCRAB (MeerLICHT Classification of Real and Bogus Transients using Deep Learning) filters out false detections of transients from true astrophysical sources in the transient detection pipeline of the MeerLICHT telescope. It uses a deep learning model based on a convolutional neural network.
The Medium Energy Gamma-ray Astronomy library (MEGAlib) simulates, calibrates, and analyzes data of hard X-ray and gamma-ray detectors, with a specialization on Compton telescopes. The library comprises all necessary data analysis steps for these telescopes, from simulation/measurements via calibration, event reconstruction to image reconstruction.
MEGAlib contains a geometry and detector description tool for the detailed modeling of different detector types and characteristics, and provides an easy-to-use simulation program based on Geant4 (ascl:1010.079). For different Compton telescope detector types (electron tracking, multiple Compton, or time-of-flight based), specialized Compton event reconstruction algorithms are implemented in different approaches (Chi-square and Bayesian). The high-level data analysis tools calculate response matrices, perform image deconvolution (specialized in list-mode-likelihood-based Compton image reconstruction), determine detector resolutions and sensitivities, retrieve spectra, and determine polarization modulations.
MegaLUT is a simple and fast method to correct ellipticity measurements of galaxies for distortion by the instrumental and atmospheric point spread function (PSF), in view of weak-lensing shear measurements. The method classifies galaxies and associated PSFs according to measured shape parameters and builds a lookup table of ellipticity corrections by supervised learning. Applied to the GREAT10 image analysis challenge, the method obtains the highly competitive quality factor of Q = 142, without any power spectrum denoising or training. Of particular interest is the efficiency of the method, with a processing time below 3 ms per galaxy on an ordinary CPU.
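The lookup-table idea can be illustrated as follows (a simplified, hypothetical sketch; MegaLUT bins in several shape parameters, not a single one):

    import numpy as np

    def build_lut(meas_e, true_e, bin_edges):
        # learn the mean ellipticity correction per bin from a training set
        # with known true shapes (e.g., image simulations)
        idx = np.digitize(meas_e, bin_edges)
        lut = np.zeros(len(bin_edges) + 1)
        for b in range(len(bin_edges) + 1):
            sel = idx == b
            if sel.any():
                lut[b] = np.mean(true_e[sel] - meas_e[sel])
        return lut

    def correct(meas_e, bin_edges, lut):
        # apply the learned per-bin correction to new measurements
        return meas_e + lut[np.digitize(meas_e, bin_edges)]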
megaman is a scalable manifold learning package implemented in Python. It has a front-end API designed to be familiar to scikit-learn users, but harnesses the C++ Fast Library for Approximate Nearest Neighbors (FLANN) and the Locally Optimal Block Preconditioned Conjugate Gradient (LOBPCG) method for sparse symmetric positive definite (SSPD) systems to scale manifold learning algorithms to large data sets. It is designed for researchers and as such caches intermediate steps and indices to allow for fast re-computation with new parameters.
Menura simulates the interaction between a fully turbulent solar wind and various bodies of the solar system using a novel two-step approach. It is an advanced numerical tool for self-consistent modeling that bridges planetary science and plasma physics. Menura is built around a hybrid particle-in-cell solver, treating electrons as a charge-neutralizing fluid and ions as massive particles. It iteratively solves the particles' dynamics, gathers particle moments at the nodes of a grid, at which the magnetic field is also computed, and then solves Maxwell's equations. The solver uses the popular Current Advance Method (CAM).
MEPSA (Multiple Excess Peak Search Algorithm) identifies peaks within a uniformly sampled time series affected by uncorrelated Gaussian noise. MEPSA scans the time series at different timescales by comparing a given peak candidate with a variable number of adjacent bins. While originally conceived for the analysis of gamma-ray burst (GRB) light curves, it can readily be applied to other astrophysical transient phenomena whose activity is recorded through different surveys. MEPSA's high flexibility permits the mask of excess patterns it uses to be tailored and optimized without modifying the code.
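The core criterion can be sketched as follows (a simplified, hypothetical single-timescale version; MEPSA itself rebins over several timescales and applies a tunable pattern mask rather than this fixed rule):

    import numpy as np

    def find_peaks(flux, sigma, n_adj=4, k=5.0):
        # flag bin i as a peak candidate if it exceeds the mean of n_adj
        # bins on each side by more than k times the Gaussian noise sigma
        peaks = []
        for i in range(n_adj, len(flux) - n_adj):
            neighbors = np.r_[flux[i - n_adj:i], flux[i + 1:i + 1 + n_adj]]
            if flux[i] - neighbors.mean() > k * sigma:
                peaks.append(i)
        return peaks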
MeqTrees is a software package for implementing Measurement Equations. This makes it uniquely suited for simulation and calibration of radioastronomical data, especially that involving new radiotelescopes and observational regimes. MeqTrees is implemented as a Python-based front-end called the meqbrowser and an efficient C++-based computational back-end called the meqserver. Numerical models are defined on the front-end via a Python-based Tree Definition Language (TDL), then rapidly executed on the back-end. The use of TDL facilitates an extremely short turn-around time for experimentation with new ideas, helped by unprecedented visualization capabilities for all final and intermediate results. A flexible data model and a number of important optimizations in the back-end ensure that the numerical performance is comparable to that of hand-written code.
MeqTrees includes a highly capable FITS viewer and sky model manager called Tigger, which can also work as a standalone tool.
MERA works with large 3D AMR/uniform-grid and N-body particle data sets from astrophysical simulations, such as those produced by the hydrodynamic code RAMSES (ascl:1011.007), and is written entirely in the Julia language. The package provides essential functions for efficient and memory-light data loading and analysis. The core of MERA is a database framework.
Mercury-T calculates the evolution of semi-major axis, eccentricity, inclination, rotation period and obliquity of the planets as well as the rotation period evolution of the host body; it is based on the N-body code Mercury (Chambers 1999, ascl:1201.008). It is flexible, allowing computation of the tidal evolution of systems orbiting any non-evolving object (if its mass, radius, dissipation factor and rotation period are known), but also evolving brown dwarfs (BDs) of mass between 0.01 and 0.08 M⊙, an evolving M-dwarf of 0.1 M⊙, an evolving Sun-like star, and an evolving Jupiter.
Mercury is a new general-purpose software package for carrying out orbital integrations for problems in solar-system dynamics. Suitable applications include studying the long-term stability of the planetary system, investigating the orbital evolution of comets, asteroids or meteoroids, and simulating planetary accretion. Mercury is designed to be versatile and easy to use, accepting initial conditions in either Cartesian coordinates or Keplerian elements in "cometary" or "asteroidal" format, with different epochs of osculation for different objects. Output from an integration consists of osculating elements, written in a machine-independent compressed format, which allows the results of a calculation performed on one platform to be transferred (e.g. via FTP) and decoded on another.
During an integration, Mercury monitors and records details of close encounters, sungrazing events, ejections and collisions between objects. The effects of non-gravitational forces on comets can also be modeled. The package supports integrations using a mixed-variable symplectic routine, the Bulirsch-Stoer method, and a hybrid code for planetary accretion calculations.
Merger Trees uses a Monte Carlo algorithm to generate merger trees describing the formation history of dark matter haloes; the algorithm is implemented in Fortran. The algorithm is a modification of the algorithm of Cole et al. used in the GALFORM semi-analytic galaxy formation model (ascl:1510.005) based on the Extended Press–Schechter theory. It should be applicable to hierarchical models with a wide range of power spectra and cosmological models. It is tuned to be in accurate agreement with the conditional mass functions found in the analysis of merger trees extracted from the Λ cold dark matter Millennium N-body simulation. The code should be a useful tool for semi-analytic models of galaxy formation and for modelling hierarchical structure formation in general.
Stellar physics and evolution calculations enable a broad range of research in astrophysics. Modules for Experiments in Stellar Astrophysics (MESA) is a suite of open source libraries for a wide range of applications in computational stellar astrophysics. A newly designed 1-D stellar evolution module, MESA star, combines many of the numerical and physics modules for simulations of a wide range of stellar evolution scenarios, from very low mass to massive stars, including advanced evolutionary phases. MESA star solves the fully coupled structure and composition equations simultaneously. It uses adaptive mesh refinement and sophisticated timestep controls, and supports shared-memory parallelism based on OpenMP. Independently usable modules provide equation of state, opacity, nuclear reaction rates, and atmosphere boundary conditions. Each module is constructed as a separate Fortran 95 library with its own public interface. Examples include comparisons to other codes; evolutionary tracks of very low mass stars, brown dwarfs, and gas giant planets; the complete evolution of a 1 M⊙ star from the pre-main sequence to a cooling white dwarf; the solar sound speed profile; the evolution of intermediate-mass stars through the thermal pulses on the He-shell burning AGB phase; the interior structure of slowly pulsating B stars and Beta Cepheids; evolutionary tracks of massive stars from the pre-main sequence to the onset of core collapse; stars undergoing Roche lobe overflow; and accretion onto a neutron star.
MeshLab processes and edits 3D triangular meshes. It includes tools for editing, cleaning, healing, inspecting, rendering, texturing and converting meshes, and offers features for processing raw data produced by 3D digitization tools and devices and for preparing models for 3D printing.
Meso-NH is the non-hydrostatic mesoscale atmospheric model of the French research community jointly developed by the Laboratoire d'Aérologie (UMR 5560 UPS/CNRS) and by CNRM (UMR 3589 CNRS/Météo-France). Meso-NH incorporates a non-hydrostatic system of equations for dealing with scales ranging from large (synoptic) to small (large eddy) scales while calculating budgets and has a complete set of physical parameterizations for the representation of clouds and precipitation. It is coupled to the surface model SURFEX for representation of surface atmosphere interactions by considering different surface types (vegetation, city, ocean, lake) and allows a multi-scale approach through a grid-nesting technique. Meso-NH is versatile, vectorized, parallelized, and operates in 1D, 2D or 3D; it is coupled with a chemistry module (including gas-phase, aerosol, and aqua-phase components) and a lightning module, and has observation operators that compare model output directly with satellite observations, radar, lidar and GPS.
MESS is a Monte Carlo simulation IDL code which uses either the results of the statistical analysis of the properties of discovered planets or the results of planet formation theories to build synthetic planet populations fully described in terms of frequency, orbital elements and physical properties. These populations can then be used to test the consistency of their properties with the observed population of planets given different detection techniques, or to predict the expected number of planets for future surveys. It can be used to probe the physical and orbital properties of a putative companion within the circumstellar disk of a given star and to constrain the orbital distribution properties of a potential planet population around the members of the TW Hydrae association. Finally, used in its predictive mode, the code has been employed to investigate the synergy of future space- and ground-based telescope instrumentation and to identify the mass-period parameter space that will be probed in future surveys for giant and rocky planets. A Python version of this code, Exo-DMC (ascl:2010.008), is available.
MeSsI performs an automatic classification between merging and relaxed clusters. The method was calibrated using mock catalogues constructed from the Millennium simulation, and performs the classification using machine learning techniques, namely random forests for classification and a mixture of Gaussians for substructure identification.
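A rough scikit-learn sketch of this two-part scheme (hypothetical synthetic data standing in for the mock-calibrated features; not MeSsI's actual pipeline):

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(0)

    # hypothetical training set: per-cluster features with
    # merging (1) / relaxed (0) labels from mock catalogues
    X_train = rng.normal(size=(200, 5))
    y_train = rng.integers(0, 2, 200)

    clf = RandomForestClassifier(n_estimators=500, random_state=0)
    clf.fit(X_train, y_train)
    p_merging = clf.predict_proba(rng.normal(size=(10, 5)))[:, 1]

    # substructure: fit a two-component Gaussian mixture to member
    # phase-space coordinates (hypothetical array)
    members = rng.normal(size=(300, 3))
    labels = GaussianMixture(n_components=2, random_state=0).fit_predict(members)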
The Meudon PDR code computes the atomic and molecular structure of interstellar clouds. It can be used to study the physics and chemistry of diffuse clouds, photodissociation regions (PDRs), dark clouds, or circumstellar regions. The model computes the thermal balance of a stationary plane-parallel slab of gas and dust illuminated by a radiation field and takes into account heating processes such as the photoelectric effect on dust, chemistry, cosmic rays, etc. and cooling resulting from infrared and millimeter emission of the abundant species. Chemistry is solved for any number of species and reactions. Once abundances of atoms and molecules and level excitation of the most important species have been computed at each point, line intensities and column densities can be deduced.
MG-MAMPOSSt extends the MAMPOSSt code (ascl:2203.020), which performs Bayesian fits of models of mass and velocity anisotropy profiles to the distribution of tracers in projected phase space, to handle modified gravity models and constrain their parameters. It implements two distinct types of gravity modifications: general chameleon (including $f(\mathcal{R})$ models) and beyond-Horndeski gravity (Vainshtein screening). MG-MAMPOSSt efficiently explores the parameter space either by computing the likelihood over a multi-dimensional grid of points or by performing a simple Metropolis-Hastings MCMC. The code requires a Fortran90 (or later) compiler and makes use of the getdist package (ascl:1910.018) to plot the marginalized distributions in the MCMC mode.
MG-PICOLA is a modified version of L-PICOLA (ascl:1507.004) that extends the COLA approach for simulating cosmological structure formation to theories that exhibit scale-dependent growth. It can compute matter power spectra (CDM and total) and redshift-space multipole power spectra P0, P2, and P4, and can perform halo finding on the fly.
MGB (Marxist Ghost Buster) attacks spectral classification by using an interactive comparison with spectral libraries. It allows the user to move along the two traditional dimensions of spectral classification (spectral subtype and luminosity classification) plus the two additional ones of rotation index and spectral peculiarities. Double-lined spectroscopic binaries can also be fitted using a combination of two standards. The code includes OB2500 v2.0, a standard grid of blue-violet R ~ 2500 spectra of O stars from the Galactic O-Star Spectroscopic Survey, but other grids can be added to MGB.
CAMB is a public Fortran 90 code written by Antony Lewis and Anthony Challinor for evaluating cosmological observables. MGCAMB is a modified version of CAMB in which the linearized Einstein equations of General Relativity (GR) are modified. MGCAMB can also be used in CosmoMC to fit different modified-gravity (MG) models to data.
mgcnn is a Convolutional Neural Network (CNN) architecture for classifying standard and modified gravity (MG) cosmological models based on the weak-lensing convergence maps they produce. It is implemented in Keras using TensorFlow as the backend. The code offers three options for the noise flag, which correspond to noise standard deviations, and additional options for the number of training iterations and epochs. Confusion matrices and evaluation metrics (loss function and validation accuracy) are saved as numpy arrays in the generated output/ directory after each iteration.
MGCosmoPop implements a hierarchical Bayesian inference method for constraining the background cosmological history, in particular the Hubble constant, together with modified gravitational-wave propagation and binary black holes population models (mass, redshift and spin distributions) with gravitational-wave data. It includes support for loading and analyzing data from the GWTC-3 catalog as well as for generating injections to evaluate selection effects, and features a module to run in parallel on clusters.
MGE_FIT_SECTORS performs Multi-Gaussian Expansion (MGE) fits to galaxy images. The MGE parameterizations are useful in the construction of realistic dynamical models of galaxies, PSF deconvolution of images, the correction and estimation of dust absorption effects, and galaxy photometry. The algorithm is well suited for use with multiple-resolution images (e.g. Hubble Space Telescope (HST) and ground-based images).
We have developed MGGPOD, a user-friendly suite of Monte Carlo codes built around the widely used GEANT (Version 3.21) package. The MGGPOD Monte Carlo suite and documentation are publicly available for download. MGGPOD is an ideal tool for supporting the various stages of gamma-ray astronomy missions, ranging from the design, development, and performance prediction through calibration and response generation to data reduction. In particular, MGGPOD is capable of simulating ab initio the physical processes relevant for the production of instrumental backgrounds. These include the build-up and delayed decay of radioactive isotopes as well as the prompt de-excitation of excited nuclei, both of which give rise to a plethora of instrumental gamma-ray background lines in addition to continuum backgrounds.
MGHalofit is a modified gravity extension of the fitting formula for the matter power spectrum of HALOFIT and its improvement by Takahashi et al. MGHalofit is implemented in MGCAMB, which is based on CAMB. MGHalofit calculates the nonlinear matter power spectrum P(k) for the Hu-Sawicki model. Comparing MGHalofit predictions at various redshifts (z<=1) to f(R) simulations, the accuracy on P(k) is 6% at k<1 h/Mpc and 12% at 1<k<10 h/Mpc.
MGPT (Modified Gravity Perturbation Theory) computes 2-point statistics for the LCDM model, DGP, and Hu-Sawicki f(R) gravity. Written in C, the code can be easily modified to include other models. Specifically, it computes the SPT matter power spectrum, the SPT power spectrum of Lagrangian-biased tracers, and the CLPT matter correlation function. MGPT also computes the CLPT correlation function of Lagrangian-biased tracers and a set of Q and R functions from which other statistics, such as the leading-order bispectrum, can be constructed.
The 2-D wavelet transformation code MGwave detects kinematic moving groups in astronomical data; it can also investigate underdensities, which can eventually provide further information about the Milky Way's non-axisymmetric features. The code creates a histogram of the input data, then performs the wavelet transformation at the specified scales, returning the wavelet coefficients across the entire histogram in addition to information about the detected extrema. MGwave can also run Monte Carlo simulations to propagate uncertainties: it runs the wavelet transformation on simulated data (drawn from Gaussian distributions) many times and tracks the percentage of the simulations in which a given extremum is detected. This quantifies whether a detected overdensity or underdensity is robust to variations of the data within the provided errors.
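The uncertainty propagation step amounts to something like the following (a minimal, hypothetical sketch; the detect callback stands in for the full histogram-plus-wavelet detection):

    import numpy as np

    def detection_rate(x, y, x_err, y_err, detect, n_mc=1000, seed=0):
        # detect(xs, ys) -> bool: is the feature recovered in one realization?
        rng = np.random.default_rng(seed)
        hits = 0
        for _ in range(n_mc):
            xs = rng.normal(x, x_err)   # perturb each point within its error
            ys = rng.normal(y, y_err)
            hits += bool(detect(xs, ys))
        return hits / n_mc              # fraction of realizations with detection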
mhealpy extends the functionalities of the HEALPix (ascl:1107.018) wrapper healpy (ascl:2008.022) to handle single- and multi-resolution maps (a.k.a. multi-order coverage or MOC maps). In addition to creating and analyzing MOC maps, it supports arithmetic operations, adaptive grids, resampling of existing multi-resolution maps, and plotting, among other functions, and reads and writes FITS, which enables sharing spatial information for multiwavelength and multimessenger analyses.
MHF is a Dark Matter halo finder based on the refinement grids of MLAPM. The grid structure of MLAPM adaptively refines around high-density regions with an automated refinement algorithm, thus naturally "surrounding" the Dark Matter halos, as they are simply manifestations of over-densities within (and exterior to) the underlying host halo. Using this grid structure, MHF restructures the hierarchy of nested isolated MLAPM grids into a "grid tree". The densest cell at the end of a tree branch marks the center of a prospective Dark Matter halo. All gravitationally bound particles about this center are collected to obtain the final halo catalog. MHF automatically finds halos within halos within halos.
MIA+EWS is a package of two data reduction tools for MIDI data which use power-spectrum analysis or the information contained in the spectrally dispersed fringe measurements to estimate the correlated flux and the visibility as a function of wavelength in the N-band. MIA (MIDI Interactive Analysis) uses a fast Fourier transform to calculate the Fourier amplitudes of the fringe packets, from which it derives the correlated flux and visibility. EWS (Expert Work-Station) is a collection of IDL tools that apply coherent visibility analysis to reduce MIDI data. The EWS package allows the user to control and examine almost every aspect of MIDI data and its reduction. The usual data products are the correlated fluxes, total fluxes, and differential phase.
michi2 fits combinations of arbitrary numbers of libraries/components to given observational data. Written in C++ and Python, this chi-square fitting tool can fit a galaxy's spectral energy distribution (SED) with stellar, active galactic nucleus, dust and radio SED templates, and fit a galaxy's spectral line energy distribution (SLED) with one or more gas components using radiative transfer LVG model grid libraries.
michi2 first samples the high-dimensional parameter space (N1*N2*N3*..., where Ni is the number of independent templates in library/component i) in an optimized way, a few thousand to tens of thousands of times, computing the chi-square against the input observational data; it then uses Python scripts to analyze the chi-square distribution and derive the best-fit, median, and lower and upper 1-sigma values for each parameter in each library/component. This tool is useful for fitting larger numbers of templates and arbitrary combinations of libraries/components, including imposing constraints from one library/component on another.
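In outline, the sampling stage resembles the following (a much-simplified, hypothetical sketch; michi2's actual sampling is optimized and its component scalings are constrained differently):

    import numpy as np

    def sample_chi2(libraries, obs, err, n_draws=5000, seed=0):
        # libraries: list of (N_i, n_bands) template arrays;
        # obs, err: observed fluxes and uncertainties, (n_bands,)
        rng = np.random.default_rng(seed)
        results = []
        for _ in range(n_draws):
            # pick one template from each library at random
            picks = [lib[rng.integers(len(lib))] for lib in libraries]
            A = np.vstack(picks).T / err[:, None]          # design matrix
            coef, *_ = np.linalg.lstsq(A, obs / err, rcond=None)
            chi2 = np.sum((A @ coef - obs / err) ** 2)
            results.append((chi2, coef))
        return results   # analyze the chi-square distribution afterwards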
Occultation and microlensing are different limits of the same phenomenon of one body passing in front of another body. We derive a general exact analytic expression which describes both microlensing and occultation in the case of spherical bodies with a source of uniform brightness and a non-relativistic foreground body. We also compute numerically the case of a source with quadratic limb-darkening. In the limit that the gravitational deflection angle is comparable to the angular size of the foreground body, both microlensing and occultation occur as the objects align. Such events may be used to constrain the size ratio of the lens and source stars, the limb-darkening coefficients of the source star, and the surface gravity of the lens star (if the lens and source distances are known). Applications of these results to microlensing during transits in binaries and giant-star microlensing are discussed. These results unify the microlensing and occultation limits and should be useful for rapid model fitting of microlensing, eclipse, and "microccultation" events.
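For orientation, in the pure microlensing limit (a foreground body of negligible angular size) the general expression must reduce to the standard point-lens magnification of a uniform point source, $A(u) = \frac{u^2 + 2}{u\sqrt{u^2 + 4}}$, where $u$ is the lens-source separation in units of the Einstein radius; in the pure occultation limit, the flux deficit is simply the fraction of the source area covered. (This standard limiting formula is quoted here for context and is not specific to this code.)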
micrOMEGAs calculates the properties of cold dark matter in a generic model of particle physics. First developed to compute the relic density of dark matter, the code also computes the rates for dark matter direct and indirect detection. It provides the mass spectrum, cross-sections, relic density, and exotic fluxes of gamma rays, positrons, and antiprotons. The propagation of charged particles in the Galactic halo is handled by a module that allows the user to easily modify the propagation parameters. The cross-sections for both spin-dependent and spin-independent interactions of WIMPs with protons are computed automatically, as are the rates for WIMP scattering on nuclei in a large detector and the annihilation cross-sections of the dark matter candidate at zero velocity, relevant for indirect detection.
midIR_sensitivity is IDL code that calculates the sensitivity of a ground-based mid-infrared instrument for astronomy. The code was written for the Phase A study of METIS (http://www.strw.leidenuniv.nl/metis), the Mid-Infrared E-ELT Imager and Spectrograph for the 42-m European Extremely Large Telescope. The model uses a detailed set of input parameters for site characteristics and atmospheric profiles, optical design, and thermal background. The code and all input parameters are highly tailored to the particular design parameters of the E-ELT and METIS; however, the program is structured in such a way that the parameters can easily be adjusted for a different system, or alternative input files used.
The Markwardt IDL Library contains routines for curve fitting and function minimization, including MPFIT (ascl:1208.019), statistical tests, and non-linear optimization (TNMIN); graphics programs, including plotting three-dimensional data as a cube and fixed- or variable-width histograms; adaptive numerical integration (Quadpack), Chebyshev approximation and interpolation, and other mathematical tools; many ephemeris and timing routines; and array and set operations, such as computing the fast product of a large array, efficiently inserting or deleting elements in an array, and performing set operations on numbers and strings; as well as many other useful and varied routines.
Miex calculates Mie scattering coefficients and efficiency factors for broad grain size distributions and a very wide wavelength range (λ ≈ 10^-10 to 10^-2 m) of the interacting radiation, and incorporates standard solutions of the scattering amplitude functions. The code handles arbitrary size parameters, and single scattering by particle ensembles is calculated by proper averaging of the respective parameters.
This triggering code calculates the correlation function between two astrophysical data catalogs using the Landy-Szalay estimator generalized for heterogeneous datasets (Landy & Szalay 1993; Bradshaw et al. 2011), or the auto-correlation function of one dataset. It assumes that one catalog has positional information as well as an object size (effective radius), and the other only positional information.
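In its cross-correlation form, the Landy-Szalay estimator reads, per separation bin (the standard formula, sketched here with pair counts assumed pre-normalized by the total number of pairs):

    def landy_szalay_cross(d1d2, d1r2, d2r1, r1r2):
        # d1d2: data1-data2 pair count; d1r2, d2r1: data-random pair counts;
        # r1r2: random-random pair count; returns w(theta) for this bin
        return (d1d2 - d1r2 - d2r1 + r1r2) / r1r2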
MillCgs clusters galaxies from the semi-analytic models run on top of the Millennium Simulation to identify Compact Groups. MillCgs uses a machine learning clustering algorithm to find the groups, then runs analytics to filter out the groups that do not fit the user-specified criteria. The package downloads the data, processes it, and then creates graphs of the data.
millennium-tap-query is a simple wrapper for the Python package requests to deal with connections to the Millennium TAP Web Client. With this tool you can perform basic or advanced queries to the Millennium Simulation database and download the data products. millennium-tap-query is similar to the TAP query tool in the German Astrophysical Virtual Observatory (GAVO) VOtables package.
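A generic TAP synchronous query with requests looks roughly like this (the endpoint URL and table name below are placeholders, not the actual Millennium TAP service parameters):

    import requests

    TAP_SYNC = "https://example.org/tap/sync"   # placeholder endpoint

    def tap_query(adql):
        # standard TAP sync parameters (REQUEST/LANG/FORMAT/QUERY)
        resp = requests.post(TAP_SYNC, data={
            "REQUEST": "doQuery",
            "LANG": "ADQL",
            "FORMAT": "votable",
            "QUERY": adql,
        })
        resp.raise_for_status()
        return resp.text                        # VOTable XML as text

    # tap_query("SELECT TOP 10 * FROM halos")   # hypothetical table name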
The millisearch.for code was used to generate a new search for the gravitational lens effects of a significant cosmological density of supermassive compact objects (SCOs) on gamma-ray bursts. No signal attributable to millilensing was found. We inspected the timing data of 774 BATSE-triggered GRBs for evidence of millilensing: repeated peaks similar in light-curve shape and spectra. Our null detection leads us to conclude that, in all candidate universes simulated, Ω_SCO < 0.1 is favored for 10^5 < M_SCO/M⊙ < 10^9, while in some universes and mass ranges the density limits are as much as 10 times lower. Therefore, a cosmologically significant population of SCOs near globular cluster mass neither came out of the primordial universe, nor condensed at recombination.
miluphcuda is the CUDA port of the original miluph code; it runs on single Nvidia GPUs with compute capability 5.0 and higher and provides fast and efficient computation. The code can be used for hydrodynamical simulations and collision and impact physics, and features self-gravity via Barnes-Hut trees and porosity models such as P-alpha and epsilon-alpha. It can model solid bodies, including ductile and brittle materials, as well as non-viscous fluids, granular media, and porous continua.
Min-CaLM performs automated mineral compositional analysis on debris disk spectra. The user inputs the debris disk spectrum, and using Min-CaLM's built-in mineralogical library, Min-CaLM calculates the relative mineral abundances within the disk. To do this calculation, Min-CaLM converts the debris disk spectrum and the mineralogical library spectra into a system of linear equations, which it then solves using non-negative least-squares minimization. The code comes with a GitHub tutorial on how to use the Min-CaLM package.
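The linear-algebra step corresponds to something like the following (a minimal sketch using SciPy's NNLS, not Min-CaLM's actual code; the array shapes are assumptions):

    import numpy as np
    from scipy.optimize import nnls

    def mineral_abundances(M, spectrum):
        # M: (n_wavelengths, n_minerals) library matrix whose columns are
        # mineral spectra; spectrum: (n_wavelengths,) observed disk spectrum
        a, residual = nnls(M, spectrum)        # solve M @ a ≈ spectrum, a >= 0
        return a / a.sum(), residual           # relative abundances, residual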