ASCL.net

Astrophysics Source Code Library

Making codes discoverable since 1999

Browsing Codes

Results 1501-1600 of 2032 (2002 ASCL, 30 submitted)

[ascl:1108.016] RADMC: A 2-D Continuum Radiative Transfer Tool

RADMC is a 2-D Monte-Carlo code for dust continuum radiative transfer in circumstellar disks and envelopes. It is based on the method of Bjorkman & Wood (ApJ 2001, 554, 615), but with several modifications to produce smoother results with fewer photon packages.

[ascl:1811.015] radon: Streak detection using the Fast Radon Transform

radon performs a Fast Radon Transform (FRT) on image data for streak detection. The software finds short streaks and multiple streaks, convolves the images with a given PSF, tracks the best S/N results, and finds an automatic threshold. It also calculates the streak parameters in the input image. radon has a simulator that can make multiple streaks of different intensities and coordinates, and can simulate random streaks with parameters chosen uniformly in a user-defined range.
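
A minimal sketch of the underlying idea, not of this package's FRT implementation: a bright streak can be located by finding the peak of an ordinary Radon transform, here using scikit-image (assumed installed) on a synthetic image.

import numpy as np
from skimage.transform import radon as radon_transform

rng = np.random.default_rng(0)
img = rng.normal(0.0, 1.0, (128, 128))        # background noise
rows = np.arange(128)
img[rows, 20 + rows // 2] += 5.0              # synthetic streak

theta = np.linspace(0.0, 180.0, 181)
sino = radon_transform(img, theta=theta, circle=False)   # (offset, angle) map
off, ang = np.unravel_index(np.argmax(sino), sino.shape)
snr = (sino[off, ang] - sino.mean()) / sino.std()
print(f"streak angle ~ {theta[ang]:.1f} deg, S/N ~ {snr:.1f}")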

[ascl:9910.009] RADPACK: A RADical compression analysis PACKage for fitting to the CMB

The RADPACK package, written in IDL, contains both data and software. The data are the constraints on the cosmic microwave background (CMB) angular power spectrum from all published data as of 9/99. A unique aspect of this compilation is that the non-Gaussianity of the uncertainties has been characterized. The most important program in the package, chisq.pro, calculates chi^2 for an input power spectrum according to the offset log-normal form of Bond, Jaffe and Knox (astro-ph/9808264). chisq.pro also outputs files that are useful for examining the residuals (the difference between the predictions of the model and the data). An sm macro is included for plotting the residuals and a histogram of the residuals. The histogram is actually of the 'whitened' residuals, a linear combination of the residuals which leaves them uncorrelated and with unit variance. The expectation is that the whitened residuals will be distributed as a Gaussian with unit variance.

[ascl:1801.012] RadVel: General toolkit for modeling Radial Velocities

RadVel models Keplerian orbits in radial velocity (RV) time series. The code is written in Python with a fast Kepler's equation solver written in C. It provides a framework for fitting RVs using maximum a posteriori optimization and computing robust confidence intervals by sampling the posterior probability density via Markov Chain Monte Carlo (MCMC). RadVel can perform Bayesian model comparison and produces publication quality plots and LaTeX tables.
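
For illustration, the Keplerian model that such codes fit can be written down in a few lines; this sketch uses plain numpy and is not RadVel's API (RadVel's own solver is the compiled C routine mentioned above).

import numpy as np

def solve_kepler(M, e, tol=1e-10):
    # Solve Kepler's equation M = E - e*sin(E) by Newton iteration.
    E = M.copy()
    for _ in range(100):
        dE = (E - e * np.sin(E) - M) / (1.0 - e * np.cos(E))
        E -= dE
        if np.max(np.abs(dE)) < tol:
            break
    return E

def rv_keplerian(t, P, K, e, omega, tp, gamma=0.0):
    # v(t) = K * [cos(nu + omega) + e*cos(omega)] + gamma
    M = np.mod(2.0 * np.pi * (t - tp) / P, 2.0 * np.pi)
    E = solve_kepler(M, e)
    nu = 2.0 * np.arctan2(np.sqrt(1.0 + e) * np.sin(E / 2.0),
                          np.sqrt(1.0 - e) * np.cos(E / 2.0))
    return K * (np.cos(nu + omega) + e * np.cos(omega)) + gamma

t = np.linspace(0.0, 30.0, 200)   # days
print(rv_keplerian(t, P=10.0, K=5.0, e=0.3, omega=0.4, tp=2.0)[:3])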

[ascl:1902.008] Radynversion: Solar atmospheric properties during a solar flare

Radynversion infers solar atmospheric properties during a solar flare. The code is based on an Invertible Neural Network (INN) that is trained to learn an approximate bijective mapping between the atmospheric properties of electron density, temperature, and bulk velocity (all as a function of altitude), and the observed Hα and Ca II λ8542 line profiles. As information is lost in the forward process of radiation transfer, this information is injected back into the model during the inverse process by means of a latent space; the training allows this latent space to be filled using an n-dimensional unit Gaussian distribution, where n is the dimensionality of the latent space. The model is trained on simulations produced by RADYN, a 1D non-equilibrium radiation hydrodynamic code with a detailed treatment of optically thick radiation that does not include magnetic effects.

[ascl:1411.010] Raga: Monte Carlo simulations of gravitational dynamics of non-spherical stellar systems

Raga (Relaxation in Any Geometry) is a Monte Carlo simulation method for gravitational dynamics of non-spherical stellar systems. It is based on the SMILE software (ascl:1308.001) for orbit analysis. It can simulate stellar systems with a much smaller number of particles N than the number of stars in the actual system, represent an arbitrary non-spherical potential with a basis-set or spline spherical-harmonic expansion with the coefficients of expansion computed from particle trajectories, and compute particle trajectories independently and in parallel using a high-accuracy adaptive-timestep integrator. Raga can also model two-body relaxation by local (position-dependent) velocity diffusion coefficients (as in Spitzer's Monte Carlo formulation) and adjust the magnitude of relaxation to the actual number of stars in the target system, and model the effect of a central massive black hole.

[ascl:1710.013] Ramses-GPU: Second order MUSCL-Hancock finite volume fluid solver

RamsesGPU is a reimplementation of RAMSES (ascl:1011.007) that drops the adaptive mesh refinement (AMR) features in order to optimize 3D uniform-grid algorithms for modern graphics processing units (GPUs), providing an efficient software package for astrophysics applications that do not need AMR but do require a very large number of integration time steps. RamsesGPU provides a very efficient C++/CUDA/MPI implementation of a second-order MUSCL-Hancock finite volume fluid solver for compressible hydrodynamics, as well as a magnetohydrodynamics solver based on the constrained transport technique. Other useful modules include static gravity, dissipative terms (viscosity, resistivity), and a forcing source term for turbulence studies; special care was taken to enhance parallel input/output performance by using state-of-the-art libraries such as HDF5 and Parallel-NetCDF.

[ascl:1011.007] RAMSES: A new N-body and hydrodynamical code

A new N-body and hydrodynamical code, called RAMSES, is presented. It has been designed to study structure formation in the universe with high spatial resolution. The code is based on the Adaptive Mesh Refinement (AMR) technique, with a tree-based data structure allowing recursive grid refinements on a cell-by-cell basis. The N-body solver is very similar to the one developed for the ART code (Kravtsov et al. 97), with minor differences in the exact implementation. The hydrodynamical solver is based on a second-order Godunov method, a modern shock-capturing scheme known to compute the thermal history of the fluid component accurately. The accuracy of the code is carefully estimated using various test cases, from pure gas dynamical tests to cosmological ones. The specific refinement strategy used in cosmological simulations is described, and potential spurious effects associated with shock-wave propagation in the resulting AMR grid are discussed and found to be negligible. Results obtained in a large N-body and hydrodynamical simulation of structure formation in a low-density LCDM universe are finally reported, with 256^3 particles and 4.1 x 10^7 cells in the AMR grid, reaching a formal resolution of 8192^3. A convergence analysis of different quantities, such as the dark matter density power spectrum, the gas pressure power spectrum, and individual halo temperature profiles, shows that numerical results are converging down to the actual resolution limit of the code and are well reproduced by recent analytical predictions in the framework of the halo model.

[ascl:1803.015] RAPTOR: Imaging code for relativistic plasmas in strong gravity

RAPTOR produces accurate images, animations, and spectra of relativistic plasmas in strong gravity by numerically integrating the equations of motion of light rays and performing time-dependent radiative transfer calculations along the rays. The code is compatible with any analytical or numerical spacetime, is hardware-agnostic, and may be compiled and run on both GPUs and CPUs. RAPTOR is useful for studying accretion models of supermassive black holes, performing time-dependent radiative transfer through general relativistic magneto-hydrodynamical (GRMHD) simulations and investigating the expected observational differences between the so-called fast-light and slow-light paradigms.

[ascl:1904.014] rate: Reliable Analytic Thermochemical Equilibrium

rate computes thermochemical-equilibrium abundances for an H-C-N-O system with known pressure, temperature, and elemental abundances. The output abundances are H2O, CH4, CO, CO2, NH3, C2H2, C2H4, HCN, N2, H2, H, and He.

[ascl:0008.002] RATRAN: Radiative Transfer and Molecular Excitation in One and Two Dimensions

RATRAN is a numerical method and computer code to calculate the radiative transfer and excitation of molecular lines. The approach is based on the Monte Carlo method, and incorporates elements from Accelerated Lambda Iteration. It combines the flexibility of the former with the speed and accuracy of the latter. Convergence problems known to plague Monte Carlo methods at large optical depth (>100) are avoided by separating local contributions to the radiation field from the overall transfer problem. The random nature of the Monte Carlo method serves to verify the independence of the solution to the angular, spatial, and frequency sampling of the radiation field. This allows the method to be used in a wide variety of astrophysical problems without specific adaptations. Moreover, the code can be applied to all atoms or molecules for which collisional rate coefficients are available and any axially symmetric source model. Continuum emission and absorption by dust is explicitly taken into account but scattering is neglected. We expect this program to be an important tool in analyzing data from present and future infrared and (sub-)millimeter telescopes.

[ascl:1105.009] Ray Tracing Codes: run_tau, run_raypath, and ray_kernel

Time-distance helioseismology aims to measure and interpret the travel times of waves propagating between two points located on the solar surface. The travel times are then inverted to infer sub-surface properties that are encoded in the measurements. The trajectory of the waves generally follows that of the infinite-frequency ray path, although they are sensitive to perturbations off of this path. Finite-frequency sensitivity kernels are thus needed to give more accurate inversion results.

Ray tracing codes calculate travel time kernels for a ray. There are three main codes: one calculates the group time as a function of distance; one calculates the ray paths as well as the phase and group times along the path; and one calculates the ray kernels for the sound speed squared.

[ascl:1411.006] RC3 mosaicking pipeline: Creating mosaics for the RC3 Catalogue

The RC3 mosaicking pipeline creates color composite images and scientifically calibrated FITS mosaics for all the RC3 galaxies that lie within the SDSS footprint, in all SDSS imaging bands, and in the B, R, and IR bands from photographic plates taken by the Digitized Palomar Observatory Sky Survey (DPOSS). The pipeline uses SExtractor (ascl:1010.064) for source extraction and STIFF (ascl:1110.006) to generate color images. The mosaicking program first applies a recursive positional-update algorithm to correct the positional inaccuracy inherent in the RC3 catalog, then conducts the mosaicking procedure using the Astropy (ascl:1304.002) wrapper to IPAC's Montage (ascl:1010.036) software. The program is generalized into a pipeline that can be easily extended to future survey data or other source catalogs; an online interface is available at http://lcdm.astro.illinois.edu/data/rc3/search.html.

[ascl:1408.017] RDGEN: Routines for data handling, display, and adjusting

RDGEN is a collection of routines for data handling, display, and adjusting, with a facility to help set up files for use with VPFIT (ascl:1408.015); it is included in the VPFIT distribution file. It is useful for setting region boundaries and initial guesses for VPFIT, displaying the accumulated results, examining particular redshift systems and fits to them by eye, testing that the error array is a true reflection of the rms scatter in the data, comparing spectra, and generally examining and even modifying the data.

[ascl:1506.007] REALMAF: Magnetic power spectra from Faraday rotation maps

REALMAF is a maximum-a-posteriori code to measure magnetic power spectra from Faraday rotation data. It uses a sophisticated model for the magnetic autocorrelation in real space, thus alleviating the need for simplifying assumptions in the processing. REALMAF treats the divergence relation of the magnetic field with a multiplicative factor in Fourier space, which allows modeling the magnetic autocorrelation as a spherically symmetric function.

[ascl:1107.009] REAS3: Modeling Radio Emission from Cosmic Ray Air Showers

The freely available Monte Carlo code REAS for modelling radio emission from cosmic ray air showers has evolved to include the full complexity of air shower physics. REAS3 improves the calculation of the emission contributions, which was not fully consistent in earlier versions of REAS, by incorporating the missing radio emission due to the variation of the number of charged particles during the air shower evolution using an "end-point formalism". With the inclusion of these emission contributions, the structure of the simulated radio pulses changes from unipolar to bipolar, and the azimuthal emission pattern becomes nearly symmetric. Remaining asymmetries can be explained by radio emission due to the variation of the net charge excess in air showers, which is automatically taken into account in the new implementation. REAS3 constitutes the first self-consistent time-domain implementation based on single particle emission taking the full complexity of air shower physics into account, and is freely available for all interested users. REAS3 has been superseded by CoREAS (ascl:1406.003).

[ascl:1110.016] REBOUND: Multi-purpose N-body code for collisional dynamics

REBOUND is a multi-purpose N-body code which is freely available under an open-source license. It was designed for collisional dynamics such as planetary rings but can also solve the classical N-body problem. It is highly modular and can be customized easily to work on a wide variety of different problems in astrophysics and beyond.

REBOUND comes with three symplectic integrators: leap-frog, the symplectic epicycle integrator (SEI) and a Wisdom-Holman mapping (WH). It supports open, periodic and shearing-sheet boundary conditions. REBOUND can use a Barnes-Hut tree to calculate both self-gravity and collisions. These modules are fully parallelized with MPI as well as OpenMP. The former makes use of a static domain decomposition and a distributed essential tree. Two new collision detection modules based on a plane-sweep algorithm are also implemented. The performance of the plane-sweep algorithm is superior to a tree code for simulations in which one dimension is much longer than the other two and in simulations which are quasi-two dimensional with less than one million particles.
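
Current releases of REBOUND also ship a Python interface; a minimal sketch of a two-planet integration with the Wisdom-Holman-style integrator, assuming the rebound Python package is installed (parameter values are arbitrary):

import rebound

sim = rebound.Simulation()
sim.integrator = "whfast"          # Wisdom-Holman style mapping
sim.add(m=1.0)                     # central star (code units, G = 1)
sim.add(m=1e-3, a=1.0, e=0.05)     # Jupiter-mass planet
sim.add(m=3e-6, a=1.6, e=0.01)     # Earth-mass planet
sim.move_to_com()

sim.integrate(100.0 * 2.0 * 3.141592653589793)   # roughly 100 inner orbits
for i in (1, 2):
    p = sim.particles[i]
    print(p.x, p.y, p.z)           # final barycentric positions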

[ascl:1106.026] RECFAST: Calculate the Recombination History of the Universe

RECFAST calculates the recombination of H, HeI, and HeII in the early Universe; this involves a line-by-line treatment of each atomic level. It differs in comparison to previous calculations in two major ways: firstly, the ionization fraction x_e is approximately 10% smaller for redshifts <~800, due to non-equilibrium processes in the excited states of H, and secondly, HeI recombination is much slower than previously thought, and is delayed until just before H recombines. RECFAST enables fast computation of the ionization history (and quantities such as the power spectrum of CMB anisotropies which depend on it) for arbitrary cosmologies.

[ascl:1507.017] REDSPEC: NIRSPEC data reduction

REDSPEC is an IDL-based reduction package designed with NIRSPEC in mind, though it can be used to reduce data from other spectrographs as well. REDSPEC accomplishes spatial rectification by summing an A+B pair of a calibration star to produce an image with two spectra; the image is remapped on the basis of polynomial fits to the spectral traces and calculation of gaussian centroids to define their separation, producing straight spectral traces with respect to the detector rows. The raw images are remapped onto a coordinate system with uniform intervals in spatial extent along the slit and in wavelength along the dispersion axis.

[ascl:1508.003] REDUCEME: Long-slit spectroscopic data reduction and analysis

The astronomical data reduction package REDUCEME reduces and analyzes long-slit spectroscopic data. The package uses an unformatted FORTRAN raw data format, so FITS files must be transformed to REDUCEME format; the reverse operation (from REDUCEME to FITS format) is also available. The package is a set of programs written in FORTRAN 77 and includes shell scripts (using the C shell syntax) to perform routine tasks; it can be extended by the inclusion of external programs. REDUCEME uses PGPLOT (ascl:1103.002) for line plots and images, and a subset of subroutines, called BUTTON, enables the user to communicate interactively with the image display employing graphic buttons. One advantage of using REDUCEME is that for each image an associated error image can also be processed throughout the reduction process, allowing for careful control of error propagation.

[ascl:1401.004] Reflex: Graphical workflow engine for data reduction

Reflex provides an easy and flexible way to reduce VLT/VLTI science data using the ESO pipelines. It allows graphically specifying the sequence in which the data reduction steps are executed, including conditional stops, loops and conditional branches. It eases inspection of the intermediate and final data products and allows repetition of selected processing steps to optimize the data reduction. The data organization necessary to reduce the data is built into the system and is fully automatic; advanced users can plug their own modules and steps into the data reduction sequence. Reflex supports the development of data reduction workflows based on the ESO Common Pipeline Library. Reflex is based on the concept of a scientific workflow, whereby the data reduction cascade is rendered graphically and data seamlessly flow from one processing step to the next. It is distributed with a number of complete test datasets so users can immediately start experimenting and familiarize themselves with the system.

[ascl:1206.001] RegiStax: Alignment, stacking and processing of images

RegiStax is software for alignment, stacking, and processing of images; it was released over 10 years ago and continues to be developed and improved. The current version is RegiStax 6, which supports the following formats: AVI, SER, RFL (RegiStax Framelist), BMP, JPG, TIF, and FIT. This version has a shorter and simpler processing sequence than its predecessor, and a separate optimization step is no longer necessary because the new image alignment method optimizes directly. The interface of RegiStax 6 has been simplified to be more uniform in appearance and functionality, and RegiStax 6 now uses multi-core processing, allowing the user to have multiple cores (a maximum of 4 is recommended) working simultaneously during alignment and stacking.

[ascl:1404.012] RegPT: Regularized cosmological power spectrum

RegPT computes the power spectrum in flat wCDM-class models based on the RegPT treatment when provided with either a transfer function or a matter power spectrum. It gives multiple-redshift outputs for the power spectrum and optionally provides correlation function data. The Fortran code has two major options for power spectrum calculations: -fast, which quickly computes the power spectrum at the two-loop level (typically in a few seconds) using a pre-computed data set of PT kernels for fiducial cosmological models, and -direct, in which the code first applies the fast method and then follows the regularized expression for the power spectrum to directly evaluate the multi-dimensional integrals. The output gives the directly calculated power spectrum and the difference between the fast and direct results. The code also gives the data set of PT diagrams necessary for power spectrum calculations, from which the power spectrum can be constructed.

[ascl:1505.021] relline: Relativistic line profiles calculation

relline calculates relativistic line profiles; it is compatible with the common X-ray data analysis software XSPEC (ascl:9910.005) and ISIS (ascl:1302.002). The two basic forms are an additive line model (RELLINE) and a convolution model to calculate relativistic smearing (RELCONV).

[ascl:1904.008] repack: Repack and compress line-transition data

repack re-packs and compresses line-transition data for radiative-transfer calculations. It identifies the strong lines that dominate the spectrum from the large majority of weaker lines, returning a binary line-by-line (LBL) file with the strong-line information (wavenumber, Elow, gf, and isotope ID), and an ASCII file with the combined contribution of the weaker lines compressed into a continuum extinction coefficient (in cm-1 amagat-1) as a function of wavenumber and temperature.
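
A toy illustration of the strong/weak split (not repack's actual selection criterion or file formats): keep the strongest lines individually and histogram the rest into a coarse continuum in wavenumber.

import numpy as np

rng = np.random.default_rng(1)
wn = rng.uniform(1000.0, 2000.0, 100000)            # line wavenumbers (cm-1)
strength = 10.0 ** rng.normal(-25.0, 2.0, wn.size)  # toy line strengths

cut = np.quantile(strength, 0.99)                   # keep the strongest 1%
strong = strength >= cut

edges = np.linspace(1000.0, 2000.0, 501)            # 2 cm-1 continuum bins
weak_continuum, _ = np.histogram(wn[~strong], bins=edges,
                                 weights=strength[~strong])

print(strong.sum(), "strong lines kept;",
      (~strong).sum(), "weak lines folded into", edges.size - 1, "bins")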

[ascl:1612.022] REPS: REscaled Power Spectra for initial conditions with massive neutrinos

REPS (REscaled Power Spectra) provides accurate, one-percent level, numerical simulations of the initial conditions for massive neutrino cosmologies, rescaling the late-time linear power spectra to the simulation initial redshift.

[ascl:1809.016] RequiSim: Variance weighted overlap calculator

RequiSim computes the Variance Weighted Overlap, which is a measure of the bias on the lensing signal from power spectrum modelling bias for any non-linear model. It assumes that the bias on the power spectrum is Gaussian, with a covariance described by a user-provided knowledge matrix. The data from the Euclid wide-field survey are included.

[ascl:1505.028] RESOLVE: Bayesian algorithm for aperture synthesis imaging in radio astronomy

RESOLVE is a Bayesian inference algorithm for image reconstruction in radio interferometry. It is optimized for extended and diffuse sources. Features include parameter-free Bayesian reconstruction of radio continuum data with a focus on extended and weak diffuse sources, reconstruction with uncertainty propagation dependent on measurement noise, and estimation of the spatial correlation structure of the radio astronomical source. RESOLVE provides full support for measurement sets and includes a simulation tool (if uv-coverage is provided).

[ascl:1907.023] REVOLVER: REal-space VOid Locations from suVEy Reconstruction

REVOLVER reconstructs real space positions from redshift-space tracer data by subtracting RSD through FFT-based reconstruction (optional) and applies void-finding algorithms to create a catalogue of voids in these tracers. The tracers are normally galaxies from a redshift survey but could also be halos or dark matter particles from a simulation box. Two void-finding routines are provided. The first is based on ZOBOV (ascl:1304.005) and uses Voronoi tessellation of the tracer field to estimate the local density, followed by a watershed void-finding step. The second is a voxel-based method, which uses a particle-mesh interpolation to estimate the tracer density, and then uses a similar watershed algorithm. Input data files can be in FITS format, or ASCII- or NPY-formatted data arrays.

[ascl:1710.002] rfpipe: Radio interferometric transient search pipeline

rfpipe supports Python-based analysis of radio interferometric data (especially from the Very Large Array) and searches for fast radio transients. It extends the rtpipe library (ascl:1706.002) with new approaches to parallelization, acceleration, and more portable data products. rfpipe can run in standalone mode or in a cluster environment.

[ascl:1711.006] RGW: Goodman-Weare Affine-Invariant Sampling

RGW is a lightweight R-language implementation of the affine-invariant Markov Chain Monte Carlo sampling method of Goodman & Weare (2010). The implementation is based on the description of the Python package emcee (ascl:1303.002).
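
For reference, the stretch move that RGW (and emcee) implement is compact enough to sketch directly; this is an illustrative Python version of the Goodman & Weare (2010) update, not the R package's interface.

import numpy as np

def stretch_move_sampler(log_prob, p0, nsteps, a=2.0, seed=0):
    rng = np.random.default_rng(seed)
    walkers = np.array(p0, dtype=float)              # shape (nwalkers, ndim)
    nw, ndim = walkers.shape
    lp = np.array([log_prob(w) for w in walkers])
    chain = np.empty((nsteps, nw, ndim))
    for step in range(nsteps):
        for k in range(nw):
            j = rng.integers(nw - 1)                 # pick a walker other than k
            if j >= k:
                j += 1
            z = (1.0 + (a - 1.0) * rng.random()) ** 2 / a   # g(z) ~ 1/sqrt(z)
            prop = walkers[j] + z * (walkers[k] - walkers[j])
            lp_prop = log_prob(prop)
            if np.log(rng.random()) < (ndim - 1) * np.log(z) + lp_prop - lp[k]:
                walkers[k], lp[k] = prop, lp_prop
        chain[step] = walkers
    return chain

# Example: sample a 2-D unit Gaussian with 20 walkers.
start = np.random.default_rng(1).normal(size=(20, 2))
chain = stretch_move_sampler(lambda x: -0.5 * np.sum(x**2), start, nsteps=500)
print(chain.reshape(-1, 2).std(axis=0))              # close to 1 in each dimension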

[ascl:1502.001] RH 1.5D: Polarized multi-level radiative transfer with partial frequency redistribution

RH 1.5D performs Zeeman multi-level non-local thermodynamical equilibrium calculations with partial frequency redistribution for an arbitrary number of chemical species. Derived from the RH code and written in C, it calculates spectra from 3D, 2D, or 1D atmospheric models on a column-by-column basis (or 1.5D). It includes optimization features to speed up or improve convergence, which are particularly useful in dynamic models of chromospheres. While one should be aware of its limitations, the 1.5D or column-by-column calculation of spectra is a good approximation in many cases, and generally allows for faster convergence and more flexible methods of improving convergence. RH 1.5D scales well to at least tens of thousands of CPU cores.

[ascl:1611.009] RHOCUBE: 3D density distributions modeling code

RHOCUBE models 3D density distributions on a discrete Cartesian grid and their integrated 2D maps. It can be used for a range of applications, including modeling the electron number density in LBV shells and computing the emission measure. The RHOCUBE Python package provides several 3D density distributions, including a powerlaw shell, truncated Gaussian shell, constant-density torus, dual cones, and spiralling helical tubes, and can accept additional distributions. RHOCUBE provides convenient methods for shifts and rotations in 3D, and if necessary, an arbitrary number of density distributions can be combined into the same model cube and the integration ∫ dz performed through the joint density field.
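
A minimal sketch of the idea (plain numpy, not RHOCUBE's API): evaluate a truncated Gaussian shell on a Cartesian grid and integrate along z to get the projected map.

import numpy as np

n = 129
x = np.linspace(-2.0, 2.0, n)
dz = x[1] - x[0]
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
r = np.sqrt(X**2 + Y**2 + Z**2)

r0, sigma = 1.0, 0.15                        # shell radius and thickness
rho = np.exp(-0.5 * ((r - r0) / sigma) ** 2)
rho[r > 1.8] = 0.0                           # truncate the shell

image = rho.sum(axis=2) * dz                 # integral of rho dz along the sightline
print(image.shape, image.max())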

[ascl:1410.005] RICH: Numerical simulation of compressible hydrodynamics on a moving Voronoi mesh

RICH (Racah Institute Computational Hydrodynamics) is a 2D hydrodynamic code based on Godunov's method. The code, largely based on AREPO, acts on an unstructured moving mesh. It differs from AREPO in the interpolation and time advancement schemes as well as in a novel parallelization scheme based on Voronoi tessellation. In many cases a moving mesh gives better results than a static mesh, though not universally: for example, where matter moves one way and a sound wave travels the other way (such that the wave is not moving relative to the grid), a static mesh gives better results than a moving mesh. RICH is designed in an object-oriented, user-friendly way that facilitates incorporation of new algorithms and physical processes.

[ascl:1811.009] RLOS: Time-resolved imaging of model astrophysical jets

RLOS (Relativistic Line Of Sight) uses hydrocode output data, such as that from PLUTO (ascl:1010.045), to create synthetic images depicting what a model relativistic astrophysical jet looks like to a stationary observer. The approximate time-delayed imaging algorithm used is implemented within existing line-of-sight code. The software has the potential to study a variety of dynamical astrophysical phenomena in collaboration with other imaging and simulation tools.

[ascl:1708.011] RM-CLEAN: RM spectra cleaner

RM-CLEAN reads in dirty Q and U cubes, generates the RMTF based on the frequencies given in an ASCII file, and cleans the RM spectra following the algorithm given by Brentjens (2007). The output cubes contain the clean model components and the CLEANed RM spectra. The input cubes must be reordered with mode=312, and the output cubes will have the same ordering and thus must be reordered after being written to disk. RM-CLEAN runs as a MIRIAD (ascl:1106.007) task and a Python wrapper is included with the code.
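
Schematically, the cleaning step is a 1-D Hogbom-style loop; this simplified real-valued sketch (the actual task works on complex Q + iU spectra and the RMTF derived from the observed frequencies) illustrates it.

import numpy as np

def clean_1d(dirty, rmtf, gain=0.1, niter=500, cutoff=0.01):
    # dirty: dirty spectrum; rmtf: response centred at len(rmtf)//2.
    residual = dirty.astype(float).copy()
    model = np.zeros_like(residual)
    half = len(rmtf) // 2
    for _ in range(niter):
        peak = np.argmax(np.abs(residual))
        if np.abs(residual[peak]) < cutoff:
            break
        comp = gain * residual[peak]
        model[peak] += comp
        lo, hi = peak - half, peak - half + len(rmtf)
        s = slice(max(lo, 0), min(hi, len(residual)))
        r0 = max(0, -lo)
        r1 = len(rmtf) - max(0, hi - len(residual))
        residual[s] -= comp * rmtf[r0:r1]
    return model, residual

phi = np.linspace(-100.0, 100.0, 401)
rmtf = np.sinc(phi / 20.0)                       # toy rotation measure transfer function
dirty = 2.0 * np.roll(rmtf, 60) + 0.8 * np.roll(rmtf, -30)
model, resid = clean_1d(dirty, rmtf)
print(model.nonzero()[0], np.abs(resid).max())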

[ascl:1806.024] RMextract: Ionospheric Faraday Rotation calculator

RMextract calculates Ionospheric Faraday Rotation for a given epoch, location, and line of sight. This Python code extracts TEC, vTEC, the Earth magnetic field, and Rotation Measures from GPS and WMM data for radio interferometry observations.

[ascl:1409.011] rmfit: Forward-folding spectral analysis software

rmfit uses a forward-folding technique to obtain the best-fit parameters for a chosen model given user-selected source and background time intervals from data files containing observed count rates and a corresponding detector response matrix. rmfit displays lightcurves and spectra using a graphical interface that enables user-defined integrated or time-resolved spectral fits and binning in either time or energy. Originally developed for the analysis of BATSE Gamma-Ray Burst (GRB) spectroscopy, rmfit is a tool for the spectroscopy of transient sources; it accommodates Fermi GBM and LAT data as well as Swift BAT data.

[ascl:1403.011] RMHB: Hierarchical Reverberation Mapping

RMHB is a hierarchical Bayesian code for reverberation mapping (RM) that combines results of a sparsely sampled broad line region (BLR) light curve and a large sample of active galactic nuclei (AGN) to infer properties of the AGN sample. The key idea of RM is to measure the time lag τ between variations in the continuum emission from the accretion disc and the subsequent response of the BLR. The measurement of τ is typically used to estimate the physical size of the BLR and is combined with other measurements to estimate the black hole mass MBH. A major difficulty with RM campaigns is the large amount of data needed to measure τ. RMHB allows a clear interpretation of a posterior distribution for hyperparameters describing the sample of AGN.

[ascl:1104.008] Rmodel: Determining Stellar Population Parameters

This program determines stellar population parameters (e.g., age, metallicity, IMF slope), using a pair of line-strength indices as input, through interpolation in SSP model predictions. Both linear and bivariate fits are computed to perform the interpolation.
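
A hedged sketch of the bivariate-interpolation step (toy model grid, not Rmodel's SSP predictions): map an observed pair of line-strength indices back to age and metallicity with scipy.

import numpy as np
from scipy.interpolate import griddata

# Toy SSP grid: two indices tabulated as linear functions of (age, Z).
age, Z = np.meshgrid(np.linspace(1.0, 14.0, 40), np.linspace(-2.0, 0.3, 30))
index1 = 2.0 + 0.15 * age + 0.8 * Z        # stand-in for a Balmer-type index
index2 = 1.0 + 0.05 * age + 1.5 * Z        # stand-in for a metal-line index
pts = np.column_stack([index1.ravel(), index2.ravel()])

obs = np.array([[3.0, 1.2]])               # observed index pair
age_fit = griddata(pts, age.ravel(), obs, method="linear")
Z_fit = griddata(pts, Z.ravel(), obs, method="linear")
print(age_fit, Z_fit)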

[ascl:1603.008] ROBAST: ROOT-based ray-tracing library for cosmic-ray telescopes

ROBAST (ROOT-based simulator for ray tracing) is a non-sequential ray-tracing simulation library developed for wide use in optical simulations of gamma-ray and cosmic-ray telescopes. The library is written in C++ and fully utilizes the geometry library of the ROOT analysis framework, and can build the complex optics geometries typically used in cosmic ray experiments and ground-based gamma-ray telescopes.

[ascl:1808.011] Robbie: Radio transients and variables detection workflow

Robbie automates cataloging sources, finding variables, and identifying transients in the image domain. It works in a batch processing paradigm with a modular design so components can be swapped out or upgraded to adapt to different input data while retaining a consistent and coherent methodological approach. Robbie is based on commonly used and open software, including AegeanTools (ascl:1212.009) and STILTS/TOPCAT (ascl:1101.010).

[ascl:1502.023] ROBOSPECT: Width fitting program

ROBOSPECT, written in C, automatically measures and deblends line equivalent widths for absorption and emission spectra. ROBOSPECT should not be used for stars with spectra in which there is no discernible continuum over large wavelength regions, nor for the most carbon-enhanced stars for which spectral synthesis would be favored. Although ROBOSPECT was designed for metal-poor stars, it is capable of fitting absorption and emission features in a variety of astronomical sources.
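
For illustration only (not ROBOSPECT's continuum or deblending machinery), a single absorption line on a flat continuum can be fit and its equivalent width measured as follows.

import numpy as np
from scipy.optimize import curve_fit

def line_model(wav, cont, depth, centre, sigma):
    return cont * (1.0 - depth * np.exp(-0.5 * ((wav - centre) / sigma) ** 2))

rng = np.random.default_rng(2)
wav = np.linspace(5880.0, 5900.0, 400)     # Angstroms
flux = line_model(wav, 1.0, 0.4, 5889.95, 0.15) + rng.normal(0.0, 0.01, wav.size)

popt, pcov = curve_fit(line_model, wav, flux, p0=[1.0, 0.3, 5890.0, 0.2])
cont, depth, centre, sigma = popt
ew = depth * sigma * np.sqrt(2.0 * np.pi)  # Gaussian equivalent width (Angstroms)
print(f"EW = {1000.0 * ew:.1f} mA at {centre:.2f} A")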

[ascl:1201.002] Roche: Visualization and analysis tool for Roche-lobe geometry of evolving binaries

Roche is a visualization and analysis tool for drawing the Roche-lobe geometry of evolving binaries. Roche can be used as a standalone program reading data from the command line or from a file generated by SeBa (ascl:1201.003). Eventually Roche will be able to read data from any other binary evolution program. Roche requires Starlab (ascl:1010.076) version 4.1.1 or later and the pgplot (ascl:1103.002) libraries. Roche creates a series of images, based on the SeBa output file SeBa.data, displaying the evolutionary state of a binary.

[ascl:1210.008] Rockstar: Phase-space halo finder

Rockstar (Robust Overdensity Calculation using K-Space Topologically Adaptive Refinement) identifies dark matter halos, substructure, and tidal features. The approach is based on adaptive hierarchical refinement of friends-of-friends groups in six phase-space dimensions and one time dimension, which allows for robust (grid-independent, shape-independent, and noise-resilient) tracking of substructure. Our method is massively parallel (up to 10^5 CPUs) and runs on the largest current simulations (>10^10 particles) with high efficiency (10 CPU hours and 60 gigabytes of memory required per billion particles analyzed). Rockstar offers significant improvement in substructure recovery as compared to several other halo finders.

[ascl:1712.009] RODRIGUES: RATT Online Deconvolved Radio Image Generation Using Esoteric Software

RODRIGUES (RATT Online Deconvolved Radio Image Generation Using Esoteric Software) is a web-based radio telescope simulation and reduction tool. From a technical perspective it is a web-based parameterized Docker container scheduler with a result-set viewer.

[ascl:1907.028] ROHSA: Separation of diffuse sources in hyper-spectral data

ROHSA (Regularized Optimization for Hyper-Spectral Analysis) reveals the statistical properties of interstellar gas through atomic and molecular lines. It uses a Gaussian decomposition algorithm based on a multi-resolution process from coarse to fine grid to decompose any kind of hyper-spectral observations into a sum of coherent Gaussians. Optimization is performed on the whole data cube at once to obtain a solution with spatially smooth parameters.

[ascl:1902.006] RPFITS: Routines for reading and writing RPFITS files

The RPFITS data file format records synthesis visibility data obtained from the Australia Telescope Compact Array (ATCA) at Narrabri, NSW. It is also used for single-dish spectral line data obtained from Parkes and Mopra, including Parkes multibeam data. RPFITS superficially resembles random group FITS, but differs in important respects, making it incompatible with standard FITS software such as FITSIO (ascl:1010.001) and FTOOLS (ascl:9912.002) and, in particular, it precludes the use of fv (ascl:1205.005). The RPFITS Fortran library contains routines for reading and writing RPFITS files. A header file, RPFITS.h, is provided to facilitate usage by C and C++ applications. Also included is rpfhdr, a utility for viewing RPFITS headers (it also works for standard FITS), and rpfex for extracting selected scans from an RPFITS file.

[ascl:1905.015] rPICARD: Radboud PIpeline for the Calibration of high Angular Resolution Data

rPICARD (Radboud PIpeline for the Calibration of high Angular Resolution Data) reduces data from different VLBI arrays, including high-frequency and low-sensitivity arrays, and supports continuum, polarization, and phase-referencing observations. Built on the CASA (ascl:1107.013) framework, it uses CASA for CLEAN imaging and self-calibration, and can be run non-interactively after only a few non-default input parameters are set. rPICARD delivers high-quality calibrated data and large bandwidth data can be processed within reasonable computing times.

[ascl:1808.002] rsigma: Resonant disturbance

rsigma calculates the resonant disturbing function, R(sigma), for a massless particle in an arbitrary orbit perturbed by a planet in circular orbit. This function defines the strength of the resonance (its semi-amplitude) and the location of the stable equilibrium points (the minima). It depends on the variable sigma called critical angle and on the particle's orbital elements a, e, i and the argument of the perihelion. R(sigma) is numerically calculated and the code is valid for arbitrary eccentricities and inclinations, including retrograde orbits.

[ascl:1607.015] RT1D: 1D code for Rayleigh-Taylor instability

The parallel one-dimensional moving-mesh hydrodynamics code RT1D reproduces the multidimensional dynamics from Rayleigh-Taylor instability in supernova remnants.

[ascl:1706.002] rtpipe: Searching for Fast Radio Transients in Interferometric Data

rtpipe (real-time pipeline) analyzes radio interferometric data with an emphasis on searching for transient or variable astrophysical sources. The package combines single-dish concepts such as dedispersion and filters with interferometric concepts, including images and the uv-plane. In contrast to time-domain data recorded with large single-dish telescopes, visibilities from interferometers can precisely localize sources anywhere in the entire field of view. rtpipe opens interferometers to the study of the fast transient sky, including sources like pulsars, stellar flares, rotating radio transients, and fast radio bursts. Key portions of the search pipeline, such as image generation and dedispersion, have been accelerated. That, in combination with its multi-threaded, multi-node design, makes rtpipe capable of searching millisecond timescale data in real time on small compute clusters.
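
One core step, incoherent dedispersion, is simple to sketch (illustrative numpy only, not rtpipe code): shift each frequency channel by the dispersion delay and sum.

import numpy as np

def dedisperse(data, freqs_ghz, dm, dt_s):
    # data: (nchan, ntime) dynamic spectrum; align all channels to the
    # arrival time at the highest frequency for dispersion measure dm.
    f_ref = freqs_ghz.max()
    delays_s = 4.148808e-3 * dm * (freqs_ghz**-2 - f_ref**-2)
    shifts = np.round(delays_s / dt_s).astype(int)
    out = np.zeros_like(data)
    for i, s in enumerate(shifts):
        out[i] = np.roll(data[i], -s)
    return out

nchan, ntime, dt = 64, 512, 0.005
freqs = np.linspace(1.2, 1.5, nchan)                 # GHz
data = np.random.default_rng(3).normal(0.0, 1.0, (nchan, ntime))
true_dm = 100.0                                      # pc cm^-3
delays = np.round(4.148808e-3 * true_dm * (freqs**-2 - freqs.max()**-2) / dt).astype(int)
data[np.arange(nchan), 100 + delays] += 10.0         # inject a dispersed pulse

ts = dedisperse(data, freqs, true_dm, dt).sum(axis=0)
print("pulse recovered at sample", ts.argmax())      # ~100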

[ascl:1802.011] runDM: Running couplings of Dark Matter to the Standard Model

runDM calculates the running of the couplings of Dark Matter (DM) to the Standard Model (SM) in simplified models with vector mediators. By specifying the mass of the mediator and the couplings of the mediator to SM fields at high energy, the code can calculate the couplings at low energy, taking into account the mixing of all dimension-6 operators. runDM can also extract the operator coefficients relevant for direct detection, namely low energy couplings to up, down and strange quarks and to protons and neutrons.

[ascl:1406.007] RV: Radial Components of Observer's Velocity

The RV program produces a report listing the components, in a given direction, of the observer's velocity on a given date. This allows an observed radial velocity to be referred to an appropriate standard of rest -- typically either the Sun or an LSR.

As a secondary function, RV computes light time components to the Sun, thus allowing the times of phenomena observed from a terrestrial observatory to be referred to a heliocentric frame of reference. (N.B. It will of course, in addition, be necessary to express the observations in the appropriate timescale as well as applying light time corrections; in particular, it is likely that an observed UTC will need to be converted to TDB as well as being corrected to the Sun.)

RV is distributed with the Starlink software collection (ascl:1110.012) and uses SLALIB (ascl:1403.025).
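
A comparable observer-velocity correction can also be computed with Astropy (shown only as an illustrative alternative; the site and target below are arbitrary, and this is not part of the RV program):

from astropy.coordinates import SkyCoord, EarthLocation
from astropy.time import Time
import astropy.units as u

target = SkyCoord(ra=10.68 * u.deg, dec=41.27 * u.deg)   # roughly M31
site = EarthLocation(lat=28.76 * u.deg, lon=-17.88 * u.deg, height=2326 * u.m)
t = Time("2019-07-04T02:00:00")

# Component of the observer's velocity towards the target:
v_bary = target.radial_velocity_correction(kind="barycentric",
                                           obstime=t, location=site)
v_helio = target.radial_velocity_correction(kind="heliocentric",
                                            obstime=t, location=site)
print(v_bary.to(u.km / u.s), v_helio.to(u.km / u.s))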

[ascl:1505.020] rvfit: Radial velocity curves fitting for binary stars or exoplanets

rvfit, developed in IDL 7.0, fits non-precessing Keplerian radial velocity (RV) curves for double-line and single-line binary stars or exoplanets. It fits a simple Keplerian model to the observed RV and computes the seven parameters (six for a single-line system) from the model. Some parameters can be fixed beforehand if they are known, for instance if photometric observations are available. The fit is done using an Adaptive Simulated Annealing algorithm optimized for this specific task; Simulated Annealing methods are powerful heuristic algorithms for minimizing functions in multiparametric spaces.

[ascl:1210.031] RVLIN: Fitting Keplerian curves to radial velocity data

The RVLIN package for IDL is a set of routines that quickly fits an arbitrary number of Keplerian curves to radial velocity data. It can handle data from multiple telescopes (i.e., it solves for the offset), constraints on P, e, and time of periastron passage, and can incorporate transit timing data. The code handles fixed periods and circular orbits in combination, as well as transit time constraints, including for multiple transiting planets.

[ascl:9912.003] RVSAO 2.0: Digital Redshifts and Radial Velocities

RVSAO is a set of programs to obtain redshifts and radial velocities from digital spectra. RVSAO operates in the IRAF (Tody 1986, 1993) environment. The heart of the system is xcsao, which implements the cross-correlation method, and is a direct descendant of the system built by Tonry and Davis (1979). emsao uses intelligent heuristics to search for emission lines in spectra, then fits them to obtain a redshift. sumspec shifts and sums spectra to build templates for cross-correlation. linespec builds synthetic spectra given a list of spectral lines. bcvcorr corrects velocities for the motion of the earth. We discuss in detail the parameters necessary to run xcsao and emsao properly. We discuss the reliability and error associated with xcsao derived redshifts. We develop an internal error estimator, and we show how large, stable surveys can be used to develop more accurate error estimators. We develop a new methodology for building spectral templates for galaxy redshifts. We show how to obtain correlation velocities using emission line templates. Emission line correlations are substantially more efficient than the previous standard technique, automated emission line fitting. We compare the use of RVSAO with new methods, which use Singular Value Decomposition and chi^2 fitting techniques.
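
A toy sketch of the cross-correlation idea behind xcsao (plain numpy, not RVSAO itself): on a grid uniform in ln(lambda) a redshift is a pure shift, so the lag of the correlation peak gives cz.

import numpy as np

c = 299792.458                                       # km/s
n = 4096
loglam = np.linspace(np.log(3800.0), np.log(7000.0), n)
dv = (loglam[1] - loglam[0]) * c                     # km/s per pixel

def template(grid):
    lam = np.exp(grid)
    spec = np.ones_like(lam)
    for line in (3933.7, 4304.4, 5175.3, 5894.0):    # Ca K, G band, Mg b, Na D
        spec -= 0.5 * np.exp(-0.5 * ((lam - line) / 2.0) ** 2)
    return spec - spec.mean()

z_true = 0.0123
tmpl = template(loglam)
galaxy = template(loglam - np.log(1.0 + z_true))     # redshifted copy
galaxy += np.random.default_rng(4).normal(0.0, 0.02, n)

xc = np.correlate(galaxy, tmpl, mode="full")
lag = np.argmax(xc) - (n - 1)
print(f"cz ~ {lag * dv:.0f} km/s (true {z_true * c:.0f} km/s)")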

[ascl:1907.013] RVSpecFit: Radial velocity and stellar atmospheric parameter fitting

RVSpecFit determines radial velocities and stellar atmospheric parameters from spectra by direct pixel fitting with interpolated stellar templates. The code does not require spectrum normalization and can deal with non-flux-calibrated spectra. RVSpecFit is able to fit multiple spectra simultaneously.

[ascl:1606.008] s2: Object oriented wrapper for functions on the sphere

The s2 package can represent any arbitrary function defined on the sphere. Both real space map and harmonic space spherical harmonic representations are supported. Basic sky representations have been extended to simulate full sky noise distributions and Gaussian cosmic microwave background realisations. Support for the representation and convolution of beams is also provided. The code requires HEALPix (ascl:1107.018) and CFITSIO (ascl:1010.001).

[ascl:1110.013] S2HAT: Scalable Spherical Harmonic Transform Library

Many problems in astronomy and astrophysics require a computation of the spherical harmonic transforms. This is in particular the case whenever data to be analyzed are distributed over the sphere or a set of corresponding mock data sets has to be generated. In many of those contexts, rapidly improving resolutions of both the data and simulations puts increasingly bigger emphasis on our ability to calculate the transforms quickly and reliably.

The scalable spherical harmonic transform library S2HAT consists of a set of flexible, massively parallel, and scalable routines for calculating diverse (scalar, spin-weighted, etc.) spherical harmonic transforms for a class of isolatitude sky grids or pixelizations. The library routines implement the standard algorithm with complexity O(n^(3/2)), where n is the number of pixels/grid points on the sphere; however, owing to their efficient parallelization and advanced numerical implementation, they achieve very competitive performance and near-perfect scalability. S2HAT is written in Fortran 90 with a C interface. This software is a derivative of the spherical harmonic transforms included in the HEALPix package and is based on both the serial and MPI routines of its version 2.01; however, since version 2.5 this software is fully independent of HEALPix and can be compiled and run without the HEALPix library.
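
The scalar transform such libraries compute can be illustrated with healpy, the HEALPix Python wrapper (used here only for illustration; S2HAT itself is the parallel Fortran/C library described above):

import numpy as np
import healpy as hp

nside, lmax = 64, 128
cl = 1.0 / (np.arange(lmax + 1) + 10.0) ** 2   # toy angular power spectrum
m = hp.synfast(cl, nside, lmax=lmax)           # Gaussian random sky map

alm = hp.map2alm(m, lmax=lmax)                 # analysis: map -> a_lm
m_back = hp.alm2map(alm, nside, lmax=lmax)     # synthesis: a_lm -> map
print(np.max(np.abs(m - m_back)) / np.std(m))  # small quadrature residual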

[ascl:1211.001] S2LET: Fast wavelet analysis on the sphere

S2LET provides high performance routines for fast wavelet analysis of signals on the sphere. It uses the SSHT code built on the MW sampling theorem to perform exact spherical harmonic transforms on the sphere. The resulting wavelet transform implemented in S2LET is theoretically exact, i.e. a band-limited signal can be recovered from its wavelet coefficients exactly and the wavelet coefficients capture all the information. S2LET also supports the HEALPix sampling scheme, in which case the transforms are not theoretically exact but achieve good numerical accuracy. The core routines of S2LET are written in C and have interfaces in Matlab, IDL and Java. Real signals can be written to and read from FITS files and plotted as Mollweide projections.

[ascl:1103.003] S2PLOT: Three-dimensional (3D) Plotting Library

We present a new, three-dimensional (3D) plotting library with advanced features, and support for standard and enhanced display devices. The library - S2PLOT - is written in C and can be used by C, C++ and FORTRAN programs on GNU/Linux and Apple/OSX systems. S2PLOT draws objects in a 3D (x,y,z) Cartesian space and the user interactively controls how this space is rendered at run time. With a PGPLOT inspired interface, S2PLOT provides astronomers with elegant techniques for displaying and exploring 3D data sets directly from their program code, and the potential to use stereoscopic and dome display devices. The S2PLOT architecture supports dynamic geometry and can be used to plot time-evolving data sets, such as might be produced by simulation codes. In this paper, we introduce S2PLOT to the astronomical community, describe its potential applications, and present some example uses of the library.

[ascl:1111.003] Saada: A Generator of Astronomical Database

Saada transforms a set of heterogeneous FITS files or VOTables of various categories (images, tables, spectra, etc.) into a powerful database deployed on the Web. Databases are located on your host and stay independent of any external server. This job doesn't require writing code. Saada can mix data of various categories in multiple collections. Data collections can be linked to each other, making relevant browsing paths and allowing data-mining oriented queries. Saada supports four VO services (spectra, images, sources, and TAP). Data collections can be published immediately after the deployment of the Web interface.

[ascl:1306.001] SAC: Sheffield Advanced Code

The Sheffield Advanced Code (SAC) is a fully non-linear MHD code designed for simulations of linear and non-linear wave propagation in gravitationally strongly stratified magnetized plasma. It was developed primarily for the forward modelling of helioseismological processes and for the coupling processes in the solar interior, photosphere, and corona; it is built on the well-known VAC platform that allows robust simulation of the macroscopic processes in gravitationally stratified (non-)magnetized plasmas. The code has no limitations of simulation length in time imposed by complications originating from the upper boundary, nor does it require implementation of special procedures to treat the upper boundaries. SAC inherited its modular structure from VAC, thereby allowing modification to easily add new physics.

[submitted] Sacc: Save All Correlations and Covariances

SACC (Save All Correlations and Covariances) is a format and reference library for general storage of summary statistic measurements for the Dark Energy Science Collaboration (DESC) within and from the Large Synoptic Survey Telescope (LSST) project.

[ascl:1601.006] SAGE: Semi-Analytic Galaxy Evolution

SAGE (Semi-Analytic Galaxy Evolution) models galaxy formation in a cosmological context. SAGE has been rebuilt to be modular and customizable. The model runs on any dark matter cosmological N-body simulation whose trees are organized in a supported format and contain a minimum set of basic halo properties.

[ascl:1203.011] SALT2: Spectral Adaptive Lightcurve Template

SALT (Spectral Adaptive Lightcurve Template) is a package for Type Ia Supernovae light curve fitting. Its main purpose is to provide a distance estimator but it can also be used for photometric redshifts, and spectroscopic + photometric identification. This code is also known by the name snfit.

[ascl:1407.006] SAMI: Sydney-AAO Multi-object Integral field spectrograph pipeline

The SAMI (Sydney-AAO Multi-object Integral field spectrograph) pipeline reduces data from the Sydney-AAO Multi-object Integral field spectrograph (SAMI) for the SAMI Galaxy Survey. The Python code organizes SAMI data and, along with the AAO 2dfdr package, carries out all steps in the data reduction, from raw data to fully calibrated datacubes. The principal steps are: data management, use of 2dfdr to produce row-stacked spectra, flux calibration, correction for telluric absorption, removal of atmospheric dispersion, alignment of dithered exposures, and drizzling onto a regular output grid. Variance and covariance information is tracked throughout the pipeline. Some quality control routines are also included.

[ascl:1504.011] samiDB: A Prototype Data Archive for Big Science Exploration

samiDB is an archive, database, and query engine to serve the spectra, spectral hypercubes, and high-level science products that make up the SAMI Galaxy Survey. Based on the versatile Hierarchical Data Format (HDF5), samiDB does not depend on relational database structures and hence lightens the setup and maintenance load imposed on science teams by metadata tables. The code, written in Python, covers the ingestion, querying, and exporting of data as well as the automatic setup of an HTML schema browser. samiDB serves as a maintenance-light data archive for Big Science and can be adopted and adapted by science teams that lack the means to hire professional archivists to set up the data back end for their projects.

[ascl:1605.015] SAND: Automated VLBI imaging and analyzing pipeline

Search And Non-Destroy (SAND) is a VLBI data reduction pipeline composed of a set of Python programs based on the AIPS interface provided by ObitTalk. It is designed for the massive data reduction of multi-epoch VLBI monitoring research. It can automatically investigate calibrated visibility data, search for all radio emission above a given noise floor, and do model fitting either on the CLEANed image or directly on the uv data. It then digests the model-fitting results, intelligently identifies the multi-epoch jet component correspondence, and recognizes linear or non-linear proper motion patterns. The outputs include a CLEANed image catalogue with polarization maps, an animation cube, proper motion fits, and core light curves. For uncalibrated data, the user can easily add inline modules to do calibration and self-calibration in a batch for a specific array.

[ascl:0003.002] SAOImage DS9: A utility for displaying astronomical images in the X11 window environment

SAOImage DS9 is an astronomical imaging and data visualization application. DS9 supports FITS images and binary tables, multiple frame buffers, region manipulation, and many scale algorithms and colormaps. It provides for easy communication with external analysis tasks and is highly configurable and extensible via XPA and SAMP. DS9 is a stand-alone application. It requires no installation or support files. Versions of DS9 currently exist for Solaris, Linux, MacOSX, and Windows. All versions and platforms support a consistent set of GUI and functional capabilities. DS9 supports advanced features such as multiple frame buffers, mosaic images, tiling, blinking, geometric markers, colormap manipulation, scaling, arbitrary zoom, rotation, pan, and a variety of coordinate systems. DS9 also supports FTP and HTTP access. The GUI for DS9 is user configurable. GUI elements such as the coordinate display, panner, magnifier, horizontal and vertical graphs, button bar, and colorbar can be configured via menus or the command line. DS9 is a Tk/Tcl application which utilizes the SAOTk widget set. It also incorporates the X Public Access (XPA) mechanism to allow external processes to access and control its data, GUI functions, and algorithms.

[ascl:1210.029] Sapporo: N-body simulation library for GPUs

Sapporo mimics the behavior of GRAPE hardware and uses the GPU to perform high-precision gravitational N-body simulations. It makes use of CUDA and therefore only works on NVIDIA GPUs. N-body codes currently running on GRAPE-6 can switch to Sapporo by a simple relinking of the library. Sapporo's precision is comparable to that of GRAPE-6, even though internally the GPU hardware is limited to single-precision arithmetic. This limitation is effectively overcome by emulating double precision for calculating the distance between particles.

[ascl:1907.005] SARA-PPD: Preconditioned primal-dual algorithm for radio-interferometric imaging

SARA-PPD is a proof of concept MATLAB implementation of an acceleration strategy for a recently proposed primal-dual distributed algorithm. The algorithm optimizes resolution by accounting for the correct noise statistics, leverages natural weighting in the definition of the minimization problem for image reconstruction, and optimizes sensitivity by enabling accelerated convergence through a preconditioning strategy incorporating sampling density information. This algorithm offers efficient processing of large-scale data sets that will be acquired by next generation radio-interferometers such as the Square Kilometer Array.

[ascl:1904.020] SARAH: SUSY and non-SUSY model builder and analyzer

SARAH builds and analyzes SUSY and non-SUSY models. It calculates all vertices, mass matrices, tadpole equations, one-loop corrections for tadpoles and self-energies, and two-loop RGEs for a given model. SARAH writes model files for a variety of other software packages for dark matter studies, includes many SUSY and non-SUSY models, and makes implementing new models efficient and straightforward. Written in Mathematica, SARAH can also use output from Vevacious (ascl:1904.019) to check for the global minimum for a given model and parameter point.

[ascl:1404.004] SAS: Science Analysis System for XMM-Newton observatory

The Science Analysis System (SAS) is an extensive suite of software tasks developed to process the data collected by the XMM-Newton Observatory. The SAS extracts standard (spectra, light curves) and/or customized science products, and allows reproductions of the reduction pipelines run to get the PPS products from the ODFs files. SAS includes a powerful and extensive suite of FITS file manipulation packages based on the Data Access Layer library.

[ascl:1707.002] SASRST: Semi-Analytic Solutions for 1-D Radiative Shock Tubes

SASRST, a small collection of Python scripts, attempts to reproduce the semi-analytical one-dimensional equilibrium and non-equilibrium radiative shock tube solutions of Lowrie & Rauenzahn (2007) and Lowrie & Edwards (2008), respectively. The included code calculates the solution for a given set of input parameters and also plots the results using Matplotlib. This software was written to provide validation for numerical radiative shock tube solutions produced by a radiation hydrodynamics code.

[ascl:1309.005] SATMC: SED Analysis Through Monte Carlo

SATMC is a general purpose, MCMC-based SED fitting code written for IDL and Python. Following Bayesian statistics and Monte Carlo Markov Chain algorithms, SATMC derives the best fit parameter values and returns the sampling of parameter space used to construct confidence intervals and parameter-parameter confidence contours. The fitting may cover any range of wavelengths. The code is designed to incorporate any models (and potential priors) of the user's choice. The user guide lists all the relevant details for including observations, models and usage under both IDL and Python.

[ascl:1601.012] SavGolFilterCov: Savitzky Golay filter for data with error covariance

A Savitzky–Golay filter is often applied to smooth data without greatly distorting the signal; however, almost all data inherently comes with noise, and the noise properties can differ from point to point. This Python script improves upon the traditional Savitzky-Golay filter by accounting for error covariance in the data. The inputs and arguments are modeled after scipy.signal.savgol_filter.
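
For reference, the uniform-noise baseline being generalized is scipy's own filter (real scipy API; the covariance-aware version is the script described above and is not shown here):

import numpy as np
from scipy.signal import savgol_filter

rng = np.random.default_rng(5)
x = np.linspace(0.0, 4.0 * np.pi, 200)
y = np.sin(x) + rng.normal(0.0, 0.2, x.size)

y_smooth = savgol_filter(y, window_length=21, polyorder=3)
print(np.std(y_smooth - np.sin(x)))   # well below the 0.2 noise level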

[ascl:1904.015] SBGAT: Small Bodies Geophysical Analysis Tool

SBGAT (Small Body Geophysical Analysis Tool) generates simulated data originating from small-body shape models, combined with advanced shape-modification properties. It uses polyhedral shape models, from which mass properties such as volume, center of mass, and inertia can be computed, along with synthetic observations such as lightcurves and radar; these models can also be used within dynamical models, such as spherical harmonics and polyhedron gravity modeling. SBGAT can generate spherical harmonics expansions from constant-density polyhedra (and export them to JSON) and evaluate the spherical harmonics expansions. It can also generate YORP coefficients, multi-threaded Polyhedron Gravity Model gravity and potential evaluations, and synthetic lightcurve and radar observations for single/primary asteroids.

SBGAT has two distinct packages: SBGAT Core, a dynamic library that contains the data structure and algorithm backbone of SBGAT, and SBGAT Gui, which wraps the former inside a VTK/Qt user interface to facilitate user/data interaction. SBGAT Core can be used without the SBGAT Gui wrapper.

[ascl:1907.014] sbpy: Small-body planetary astronomy

sbpy, an Astropy affiliated package, supplements the functionality provided by Astropy (ascl:1304.002) with functions and methods that are frequently used for planetary astronomy, with a clear focus on asteroids and comets. It offers access tools for various databases of orbital and physical data, spectroscopy analysis tools and models, photometry models for resolved and unresolved observations, ephemerides services, and other tools useful for small-body planetary astronomy.
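
A brief sketch of the kind of database access sbpy provides, querying an ephemerides service through its data classes; field names and keyword arguments may differ between sbpy releases, so treat the details as assumptions.

from sbpy.data import Ephem

# query JPL Horizons for the current ephemeris of comet 2P/Encke
eph = Ephem.from_horizons('2P')
print(eph['RA'], eph['DEC'])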

[ascl:1010.063] SCAMP: Automatic Astrometric and Photometric Calibration

Astrometric and photometric calibrations have remained the most tiresome step in the reduction of large imaging surveys. SCAMP has been written to address this problem. The program efficiently computes accurate astrometric and photometric solutions for any arbitrary sequence of FITS images in a completely automatic way. SCAMP is released under the GNU General Public License.

[ascl:1209.012] Scanamorphos: Maps from scan observations made with bolometer arrays

Scanamorphos is an IDL program to build maps from scan observations made with bolometer arrays. Scanamorphos can post-process scan observations performed with the Herschel photometer arrays. This post-processing mainly consists of subtracting the total low-frequency noise (both its thermal and non-thermal components), masking cosmic ray hit residuals, and projecting the data onto a map. Although it was developed for Herschel, it is also applicable with minimal adjustment to scan observations made with other bolometer arrays provided they entail sufficient redundancy; it was successfully applied to P-Artemis, an instrument operating on the APEX telescope. Scanamorphos does not assume any particular noise model and does not apply any Fourier-space filtering to the data. It is an empirical tool using only the redundancy built into the observations, taking advantage of the fact that each portion of the sky is sampled multiple times by multiple bolometers. The user can optionally visualize and control results at each intermediate step, but the processing is fully automated.

[ascl:1803.003] scarlet: Source separation in multi-band images by Constrained Matrix Factorization

SCARLET performs source separation (aka "deblending") on multi-band images. It is geared towards optical astronomy, where scenes are composed of stars and galaxies, but it is straightforward to apply it to other imaging data. Separation is achieved through a constrained matrix factorization, which models each source with a Spectral Energy Distribution (SED) and a non-parametric morphology, or multiple such components per source. The code performs forced photometry (with PSF matching if needed) using an optimal weight function given by the signal-to-noise weighted morphology across bands. The approach works well if the sources in the scene have different colors and can be further strengthened by imposing various additional constraints/priors on each source. Because of its generic utility, this package provides a stand-alone implementation that contains the core components of the source separation algorithm. However, the development of this package is part of the LSST Science Pipeline; the meas_deblender package contains a wrapper to implement the algorithms here for the LSST stack.
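
A toy illustration (not scarlet's API or constraint set) of the underlying matrix-factorization idea: model a multi-band image cube Y (bands x pixels) as A @ S, where each column of A is a source SED and each row of S a non-negative morphology.

import numpy as np

def nmf_deblend(Y, n_sources, n_iter=500, eps=1e-9):
    """Multiplicative-update NMF enforcing non-negativity on SEDs and morphologies."""
    n_bands, n_pix = Y.shape
    rng = np.random.default_rng(0)
    A = rng.random((n_bands, n_sources))   # per-source SEDs
    S = rng.random((n_sources, n_pix))     # per-source morphologies
    for _ in range(n_iter):
        S *= (A.T @ Y) / (A.T @ A @ S + eps)
        A *= (Y @ S.T) / (A @ S @ S.T + eps)
    return A, S

# two synthetic blended "sources" in a 3-band, 100-pixel scene
truth_A = np.array([[1.0, 0.2], [0.5, 0.5], [0.2, 1.0]])
truth_S = np.abs(np.random.default_rng(1).normal(size=(2, 100)))
A, S = nmf_deblend(truth_A @ truth_S, n_sources=2)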

[ascl:1505.008] SCEPtER: Stellar CharactEristics Pisa Estimation gRid

SCEPtER (Stellar CharactEristics Pisa Estimation gRid) estimates the stellar mass and radius given a set of observable quantities; the results are obtained by adopting a maximum likelihood technique over a grid of pre-computed stellar models. The code is quite flexible since different observables can be used, depending on their availability, as well as different grids of models.
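
A minimal sketch (not SCEPtER itself) of the grid-based maximum-likelihood idea: compare the observed quantities with every model in a pre-computed grid and keep the mass and radius of the best-matching model. The grid columns and values below are hypothetical.

import numpy as np

# hypothetical grid: columns = (Teff [K], [Fe/H], nu_max [uHz], mass [Msun], radius [Rsun])
grid = np.array([
    [5700.0,  0.00, 3100.0, 1.00, 1.00],
    [5500.0, -0.10, 3400.0, 0.92, 0.95],
    [6000.0,  0.05, 2800.0, 1.10, 1.08],
])

obs = np.array([5750.0, 0.02, 3050.0])    # observed Teff, [Fe/H], nu_max
sigma = np.array([70.0, 0.05, 60.0])      # observational uncertainties

# maximum likelihood with Gaussian errors = minimum chi-squared over the grid
chi2 = np.sum(((grid[:, :3] - obs) / sigma) ** 2, axis=1)
best = grid[np.argmin(chi2)]
print("estimated mass, radius:", best[3], best[4])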

[ascl:1907.001] schwimmbad: Parallel processing pools interface

schwimmbad provides a uniform interface to parallel processing pools and enables switching easily between local development (e.g., serial processing or with multiprocessing) and deployment on a cluster or supercomputer (via, e.g., MPI or JobLib). The utilities provided by schwimmbad require that tasks or data be “chunked” and that code can be “mapped” onto the chunked tasks.
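
A short usage sketch following schwimmbad's documented pattern: the same worker/tasks code runs serially, with multiprocessing, or under MPI depending on which pool is chosen.

from schwimmbad import choose_pool

def worker(task):
    # each "chunk" of work is a single task value here
    return task ** 2

def main(pool):
    tasks = range(100)
    results = pool.map(worker, tasks)
    pool.close()
    print(sum(results))

if __name__ == "__main__":
    # serial pool when mpi=False and processes=1; switch flags to scale up
    pool = choose_pool(mpi=False, processes=1)
    main(pool)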

[ascl:1311.001] SciDB: Open Source DMAS for Scientific Research

SciDB is a Data Management and Analytics Software System (DMAS) optimized for managing big data and for big analytics. SciDB is organized around multidimensional array storage, a generalization of relational tables, and is designed to scale to petabytes and beyond. Complex analytics are simplified with SciDB because arrays and vectors are first-class objects with built-in optimized operations. Spatial operators and time-series analysis are easy to express. Interfaces to common scientific tools such as R, as well as to programming languages such as C++ and Python, are provided.

[ascl:1609.006] SCIMES: Spectral Clustering for Interstellar Molecular Emission Segmentation

SCIMES identifies relevant molecular gas structures within dendrograms of emission using the spectral clustering paradigm. It is useful for decomposing objects in complex environments imaged at high resolution.

[ascl:1601.003] SCOUSE: Semi-automated multi-COmponent Universal Spectral-line fitting Engine

The Semi-automated multi-COmponent Universal Spectral-line fitting Engine (SCOUSE) is a spectral line fitting algorithm that fits Gaussian profiles to spectral line emission. It identifies the spatial area over which to fit the data and generates a grid of spectral averaging areas (SAAs). The spatially averaged spectra are fitted according to user-provided tolerance levels, and the best fit is selected using the Akaike Information Criterion, which weights the chi-squared value of a best-fitting solution according to the number of free parameters. A more detailed inspection of the spectra can be performed to improve the fit through an iterative process, after which SCOUSE integrates the new solutions into the solution file.
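
A small sketch (not SCOUSE itself) of the Akaike Information Criterion step: for chi-squared fitting with Gaussian errors, AIC = chi2 + 2k penalizes extra free parameters, and the candidate fit with the lowest AIC is kept. The candidate fits below are hypothetical.

def aic(chisq, n_free_params):
    # AIC for a chi-squared fit, up to an additive constant
    return chisq + 2 * n_free_params

# hypothetical fits with 1, 2 and 3 Gaussian components (3 parameters each)
candidates = [
    {"ncomp": 1, "chisq": 310.0},
    {"ncomp": 2, "chisq": 180.0},
    {"ncomp": 3, "chisq": 176.0},
]
best = min(candidates, key=lambda c: aic(c["chisq"], 3 * c["ncomp"]))
print("selected number of components:", best["ncomp"])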

[ascl:1210.012] SearchCal: The JMMC Evolutive Search Calibrator Tool

SearchCal builds an evolutive catalog of stars suitable as calibrators within any given user-defined angular distance and magnitude around a scientific target. SearchCal can select suitable bright calibration stars (V ≤ 10; K ≤ 5.0) for obtaining the ultimate precision of current interferometric instruments such as the VLTI, as well as faint calibration stars up to K ~ 15 around the scientific target. Star catalogs available at the CDS are searched via web requests and provide the astrometric and photometric information useful for selecting calibrators. Missing photometry is computed with an accuracy of about 0.1 mag. The stellar angular diameter is estimated with a precision of about 10% through newly determined surface-brightness versus color-index relations based on the I, J, H and K magnitudes. For each star the squared visibility is computed taking into account the central wavelength and the maximum baseline of the predicted observations.
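
A worked sketch of that last step under the standard uniform-disk model: a calibrator of angular diameter theta observed on baseline B at wavelength lambda has squared visibility V^2 = [2 J1(x)/x]^2 with x = pi * theta * B / lambda. This is the textbook relation, not necessarily SearchCal's exact implementation.

import numpy as np
from scipy.special import j1

def squared_visibility(theta_mas, baseline_m, wavelength_m):
    # convert milliarcseconds to radians, then evaluate the uniform-disk model
    theta_rad = theta_mas * np.pi / (180.0 * 3600.0 * 1000.0)
    x = np.pi * theta_rad * baseline_m / wavelength_m
    return (2.0 * j1(x) / x) ** 2

# example: a 1 mas calibrator on a 130 m baseline in the K band (2.2 micron)
print(squared_visibility(1.0, 130.0, 2.2e-6))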

[ascl:1201.003] SeBa: Stellar and binary evolution

The stellar and binary evolution package SeBa is fully integrated into the kira integrator, although it can also be used as a stand-alone module for non-dynamical applications. Due to the interaction between stellar evolution and stellar dynamics, it is difficult to solve for the evolution of both systems in a completely self-consistent way. The trajectories of stars are computed using a block timestep scheme, as described earlier. Stellar and binary evolution is updated at fixed intervals (every 1/64 of a crossing time, typically a few thousand years). Any feedback between the two systems may thus experience a delay of at most one timestep. Internal evolution time steps may differ for each star and binary, and depend on binary period, perturbations due to neighbors, and the evolutionary state of the star. Time steps in this treatment vary from several milliseconds up to (at most) a million years.
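
A schematic sketch (not SeBa itself) of the coupling described above: dynamics advances with its own timesteps, while stellar and binary evolution is updated at fixed intervals of 1/64 of a crossing time, so any feedback between the two systems is delayed by at most one such interval. All numbers are placeholders.

crossing_time = 2.0e5                 # yr, hypothetical cluster crossing time
evo_interval = crossing_time / 64.0   # fixed stellar-evolution update interval

t, t_end, next_evo = 0.0, 1.0e6, 0.0
while t < t_end:
    dt = 100.0                        # stand-in for a block-timestep dynamics step
    t += dt
    if t >= next_evo:
        # update stellar and binary evolution here (masses, radii, orbits, ...)
        next_evo += evo_interval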

[ascl:1101.001] Second-order Tight-coupling Code

Prior to recombination, photons, electrons, and atomic nuclei scattered rapidly and behaved almost like a single tightly coupled photon-baryon plasma. In order to solve the cosmological perturbation equations during that time, Cosmic Microwave Background (CMB) codes use the so-called tight-coupling approximation, in which the problematic terms (i.e., the source of the stiffness) are expanded in inverse powers of the Thomson opacity. Most codes keep only the terms linear in the inverse Thomson opacity. We have developed a second-order tight-coupling code to test the validity of the usual first-order tight-coupling codes. It is based on the publicly available code CAMB.

[ascl:1901.008] SEDobs: Observational spectral energy distribution simulation

SEDobs uses state-of-the-art theoretical galaxy SEDs (spectral energy distributions) to create simulated observations of distant galaxies. It uses BC03 and M05 theoretical models and allows the user to configure the simulated observations that are needed. For a given simulated galaxy, the user is able to simulate multi-spectral and multi-photometric observations.

[ascl:1905.026] SEDPY: Modules for storing and operating on astronomical source spectral energy distribution

SEDPY performs a variety of tasks for astronomical spectral energy distributions. It can generate synthetic photometry through any filter, provides detailed modeling of extinction curves, and offers basic aperture photometry algorithms. SEDPY can also store and interpolate model SEDs, convolve absolute or apparent fluxes, and calculate rest-frame magnitudes.
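
A generic sketch of the synthetic-photometry step (not necessarily SEDPY's own API): the synthetic flux through a filter is the transmission-weighted mean of the spectrum over wavelength. The toy top-hat filter and spectrum below are assumptions for illustration.

import numpy as np

def synthetic_flux(wave, flux, filt_wave, filt_trans):
    """Transmission-weighted mean flux of a spectrum through a filter curve."""
    trans = np.interp(wave, filt_wave, filt_trans, left=0.0, right=0.0)
    return np.trapz(trans * flux, wave) / np.trapz(trans, wave)

wave = np.linspace(4000.0, 9000.0, 2000)          # wavelength grid [Angstrom]
flux = 1e-17 * (wave / 5500.0) ** -2              # toy power-law spectrum
r_like = synthetic_flux(wave, flux,
                        filt_wave=np.array([5500.0, 5501.0, 6900.0, 6901.0]),
                        filt_trans=np.array([0.0, 0.9, 0.9, 0.0]))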

[ascl:1607.020] SEEK: Signal Extraction and Emission Kartographer

SEEK (Signal Extraction and Emission Kartographer) processes time-ordered data from single dish radio telescopes or from the simulation pipeline HIDE (ascl:1607.019), removes artifacts from Radio Frequency Interference (RFI), automatically applies flux calibration, and recovers the astronomical radio signal. Together with its companion code HIDE, it provides end-to-end simulation and processing of radio survey data.

[ascl:1411.007] segueSelect: SDSS/SEGUE selection function modelling

The Python package segueSelect automatically models the SDSS/SEGUE selection fraction -- the fraction of stars with good spectra -- as a continuous function of apparent magnitude for each plate. The selection function can be determined for any desired sample cuts in signal-to-noise ratio, u-g, r-i, and E(B-V). The package requires Pyfits (ascl:1207.009) and, for coordinate transformations, galpy (ascl:1411.008). It can calculate the KS probability that the spectroscopic sample was drawn from the underlying photometric sample with the model selection function, plot the cumulative distribution function in r-band apparent magnitude of the spectroscopic sample and of the photometric sample combined with the selection-function model for each plate, and, if galpy is installed, transform velocities into the Galactic coordinate frame. The code can also determine the selection function for SEGUE K stars.
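
A minimal sketch (not segueSelect's API) of the KS check described above: compare the r-band magnitude distribution of a "spectroscopic" sample against a photometric sample re-weighted by a model selection function. The selection function and magnitudes below are made up for illustration.

import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)
photometric_r = rng.uniform(14.5, 20.2, size=5000)   # toy photometric sample

def selection_fraction(r):
    # hypothetical selection function: fraction selected falls linearly with r
    return np.clip(1.0 - 0.12 * (r - 14.5), 0.05, 1.0)

# "spectroscopic" sample: photometric stars that pass the selection
spec_r = photometric_r[rng.random(photometric_r.size) < selection_fraction(photometric_r)]

# model prediction: resample the photometric sample with selection-function weights
w = selection_fraction(photometric_r)
model_r = rng.choice(photometric_r, size=spec_r.size, p=w / w.sum())

stat, p_value = ks_2samp(spec_r, model_r)
print("KS probability:", p_value)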

[ascl:1504.009] Self-lensing binary code with Markov chain

This self-lensing binary code with Markov chain was used to analyze the self-lensing binary system KOI-3278. It includes the MCMC modeling and the key figures.

[ascl:1807.026] SENR: Simple, Efficient Numerical Relativity

SENR (Simple, Efficient Numerical Relativity) provides the algorithmic framework that combines the C codes generated by NRPy+ (ascl:1807.025) into a functioning numerical relativity code. It is part of the numerical relativity code package SENR/NRPy+. The package extends previous implementations of the BSSN reference-metric formulation to a much broader class of curvilinear coordinate systems, making it suitable for modeling physical configurations with approximate or exact symmetries, such as modeling black hole dynamics.

[ascl:1811.004] SEP: Source Extraction and Photometry

SEP (Source Extraction and Photometry) makes the core algorithms of Source Extractor (ascl:1010.064) available as a library of standalone functions and classes. These operate directly on in-memory arrays (no FITS files or configuration files). The code is derived from the Source Extractor code base (written in C) and aims to produce results compatible with Source Extractor whenever possible. SEP consists of a C library with no dependencies outside the standard library and a Python module that wraps the C library in a Pythonic API. The Python wrapper operates on NumPy arrays with NumPy as its only dependency. It is generated using Cython.

From Source Extractor, SEP includes background estimation, image segmentation (including on-the-fly filtering and source deblending), aperture photometry in circular and elliptical apertures, and source measurements such as Kron radius, "windowed" position fitting, and half-light radius. It also adds the following features that are not available in Source Extractor: optimized matched filter for variable noise in source extraction; circular annulus and elliptical annulus aperture photometry functions; local background subtraction in shape consistent with aperture in aperture photometry functions; exact pixel overlap mode in all aperture photometry functions; and masking of elliptical regions on images.
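
A short usage sketch following SEP's documented Python interface: estimate the background, extract sources from the background-subtracted image, and perform circular aperture photometry at the detected positions.

import numpy as np
import sep

# toy flat image with Gaussian noise standing in for real data
data = np.random.normal(100.0, 5.0, size=(256, 256)).astype(np.float32)

bkg = sep.Background(data)        # spatially varying background estimate
data_sub = data - bkg             # background-subtracted image

objects = sep.extract(data_sub, thresh=1.5, err=bkg.globalrms)
flux, fluxerr, flag = sep.sum_circle(data_sub, objects['x'], objects['y'],
                                     r=3.0, err=bkg.globalrms)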

[ascl:1404.005] SER: Subpixel Event Repositioning Algorithms

Subpixel Event Repositioning (SER) techniques significantly improve the already unprecedented spatial resolution of Chandra X-ray imaging with the Advanced CCD Imaging Spectrometer (ACIS). Chandra CCD SER techniques are based on the premise that the impact position of events can be refined, based on the distribution of charge among affected CCD pixels. Unlike ACIS SER models that are restricted to corner split (3- and 4-pixel) events and assume that such events take place at the split pixel corners, this IDL code uses two-pixel splits as well, and incorporates more realistic estimates of photon impact positions.

[ascl:1102.010] SEREN: A SPH code for star and planet formation simulations

SEREN is an astrophysical Smoothed Particle Hydrodynamics code designed to investigate star and planet formation problems using self-gravitating hydrodynamics simulations of molecular clouds, star-forming cores, and protostellar disks.

SEREN is written in Fortran 95/2003 with a modular philosophy for adding features into the code. Each feature can be easily activated or deactivated by setting options in the Makefile before compiling the code. This has the added benefit of allowing unwanted features to be removed at the compilation stage, resulting in a smaller and faster executable program. SEREN is written with OpenMP directives to allow parallelization on shared-memory architectures.
