ASCL.net

Astrophysics Source Code Library

Making codes discoverable since 1999

Welcome to the ASCL

The Astrophysics Source Code Library (ASCL) is a free online registry for source codes of interest to astronomers and astrophysicists and lists codes that have been used in research that has appeared in, or been submitted to, peer-reviewed publications. The ASCL is indexed by the SAO/NASA Astrophysics Data System (ADS) and is citable by using the unique ascl ID assigned to each code. The ascl ID can be used to link to the code entry by prefacing the number with ascl.net (e.g., ascl.net/1201.001).


Most Recently Added Codes

2018 Apr 13

[submitted] DPPP: Default Pre-Processing Pipeline

DPPP (Default Pre-Processing Pipeline, also referred to as NDPPP) reads and writes radio-interferometric data in the form of Measurement Sets, mainly those created by the LOFAR telescope. It steps through the visibilities in time order and provides standard operations such as averaging, phase-shifting, and flagging bad stations. Between the steps in a pipeline, the data are not written to disk, making this tool well suited to operations where I/O dominates. More advanced procedures such as gain calibration are also included. Additional processing steps can be provided by loading a shared library; currently supported external steps are the AOFlagger (ascl:1010.017) and a bridge that enables loading Python steps.
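
The architecture described above, a chain of processing steps that hand visibility buffers to one another in memory rather than writing intermediate products to disk, can be illustrated with the minimal Python sketch below. This is a generic illustration of the pattern, not DPPP's actual C++ interface; the step names and classes are hypothetical.

import numpy as np

class Step:
    """Base class: process one chunk of visibilities and pass it on in memory."""
    def __init__(self, next_step=None):
        self.next_step = next_step

    def process(self, vis):
        vis = self.apply(vis)
        if self.next_step is not None:
            return self.next_step.process(vis)
        return vis

    def apply(self, vis):
        return vis

class Average(Step):
    """Average pairs of adjacent time samples (hypothetical averaging step)."""
    def apply(self, vis):
        return 0.5 * (vis[0::2] + vis[1::2])

class FlagAmplitude(Step):
    """Flag (zero out) samples above an amplitude threshold (hypothetical flagging step)."""
    def __init__(self, threshold, next_step=None):
        super().__init__(next_step)
        self.threshold = threshold

    def apply(self, vis):
        return np.where(np.abs(vis) > self.threshold, 0.0, vis)

# Chain the steps: flagging runs after averaging, with no intermediate disk I/O.
pipeline = Average(next_step=FlagAmplitude(threshold=5.0))
chunk = np.random.normal(size=64) + 1j * np.random.normal(size=64)
print(pipeline.process(chunk)[:4])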

2018 Apr 12

[submitted] ipole - semianalytic scheme for relativistic polarized radiative transport

ipole is a new publicly available ray-tracing code for covariant, polarized radiative transport. The code extends the ibothros scheme for covariant, unpolarized transport using two representations of the polarized radiation field: in the coordinate frame, it parallel transports the coherency tensor; in the frame of the plasma, it evolves the Stokes parameters under emission, absorption, and Faraday conversion. The transport step is implemented to be as spacetime- and coordinate-independent as possible. The emission, absorption, and Faraday conversion step is implemented using an analytic solution to the polarized transport equation with constant coefficients. As a result, ipole is stable, efficient, and produces a physically reasonable solution even for a step with high optical depth and Faraday depth. We show that the code matches analytic results in flat space and that it produces results that converge to those produced by Dexter's grtrans polarized transport code on a complicated model problem. We expect ipole will mainly find applications in modeling Event Horizon Telescope sources, but it may also be useful in other relativistic transport problems, such as modeling for the IXPE mission.
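
The stability at high optical depth comes from using the analytic solution of the transfer equation with constant coefficients over a step, rather than an explicit finite-difference update. The sketch below shows the idea for the simpler unpolarized case, where the constant-coefficient solution is I(ds) = S + (I0 - S) exp(-alpha ds) with source function S = j/alpha; it is a minimal illustration, not ipole's polarized scheme.

import numpy as np

def transfer_step(I0, j, alpha, ds):
    """Advance the specific intensity across one step of length ds, assuming the
    emission coefficient j and absorption coefficient alpha are constant over the
    step (unpolarized analogue of the constant-coefficient update):
        I(ds) = S + (I0 - S) * exp(-alpha * ds),  with  S = j / alpha.
    The exponential form remains stable even when alpha * ds >> 1."""
    if alpha == 0.0:
        return I0 + j * ds               # optically thin limit
    S = j / alpha                        # source function
    return S + (I0 - S) * np.exp(-alpha * ds)

# Example: a step with large optical depth relaxes I toward the source function.
print(transfer_step(I0=0.0, j=2.0, alpha=10.0, ds=5.0))   # ~0.2 = j/alpha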

2018 Apr 08

[submitted] ASTROPOP, the ASTROnomical Polarimetry and Photometry pipeline

ASTROPOP is an astronomical data reduction pipeline developed to handle standard image photometry and polarimetry data. It was primarily written to reduce data from the IAGPOL polarimeter installed at Observatório Pico dos Dias, Brazil, but owing to its modularity it can easily be used to reduce almost any CCD photometry and image polarimetry data. For photometry, the code performs source finding, aperture and PSF photometry, astrometric calibration using different automated and non-automated methods, and automated source identification and magnitude calibration based on online and local catalogs. For polarimetry, the code resolves linear and circular Stokes parameters produced by image beam-splitter or polarizer polarimeters. In addition to the modular functions, ready-to-use pipelines based on configuration files and header keys are also provided with the code. The code is written in pure Python, making it highly portable, and is distributed under the 3-clause BSD license.
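
As a point of reference for the aperture photometry step, the sketch below shows the basic operation in plain NumPy: sum the counts inside a circular aperture and subtract a sky level estimated from a surrounding annulus. This is an illustration of the technique only; the function name and interface are hypothetical and not ASTROPOP's API.

import numpy as np

def aperture_flux(image, x0, y0, r_ap, r_in, r_out):
    """Simple circular-aperture photometry: sum the counts within radius r_ap of
    (x0, y0) and subtract a per-pixel sky level estimated as the median of the
    annulus r_in < r <= r_out. (Illustrative only; not ASTROPOP's interface.)"""
    yy, xx = np.indices(image.shape)
    r = np.hypot(xx - x0, yy - y0)
    aperture = r <= r_ap
    annulus = (r > r_in) & (r <= r_out)
    sky = np.median(image[annulus])                 # per-pixel sky background
    return image[aperture].sum() - sky * aperture.sum()

# Example: a synthetic star on a flat background of 10 counts/pixel.
img = np.full((51, 51), 10.0)
img[25, 25] += 500.0
print(aperture_flux(img, 25, 25, r_ap=5, r_in=8, r_out=12))   # ~500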

2018 Apr 06

[submitted] Binary neutron star merger rate via the luminosity function of short gamma-ray bursts

The luminosity function of short gamma-ray bursts (GRBs) is modelled using the available catalogue data of all short GRBs (sGRBs) detected through October 2017. The luminosities are estimated via 'pseudo-redshifts' obtained from the 'Yonetoku correlation', assuming a standard delay distribution between the cosmic star formation rate and the production rate of their progenitors. While a simple power law is ruled out with high confidence, the data are fit well by both exponential cutoff power-law and broken power-law models. Using the derived parameters of these models, along with conservative values of the jet opening angles seen in afterglow observations, the true rate of short GRBs is derived. Assuming a short GRB is produced by each binary neutron star merger (BNSM), the rate of gravitational wave (GW) detections from these mergers is derived for past, present and future configurations of the GW detector networks. Stringent lower limits of 1.87 per year for the aLIGO-VIRGO configuration and 3.11 per year for the upcoming aLIGO-VIRGO-KAGRA-LIGO/India configuration are thus derived for the BNSM rate at 68% confidence. The BNSM rate calculated in this work and that independently inferred from the observation of the only confirmed BNSM observed to date are shown to be in mild tension; however, the scenario that all BNSMs produce sGRBs cannot be ruled out.
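
The pseudo-redshift step amounts to finding the redshift at which the luminosity implied by the observed flux, 4*pi*d_L(z)^2*F, matches the luminosity predicted by a Yonetoku-type relation L = A*[E_peak,obs*(1+z)]^q. The sketch below solves that condition numerically; the normalization A, index q, and the 355 keV pivot are placeholder values for illustration, not the correlation parameters fitted in this work.

import numpy as np
import astropy.units as u
from astropy.cosmology import Planck15 as cosmo
from scipy.optimize import brentq

def pseudo_redshift(flux, epeak_obs, A=1e51, q=2.0):
    """Solve for the 'pseudo-redshift' z at which the luminosity implied by the
    observed flux equals a Yonetoku-type relation
        L = A * [E_peak,obs * (1 + z) / 355 keV]**q.
    flux in erg/s/cm^2, epeak_obs in keV; A (erg/s), q, and the pivot energy are
    placeholder values, not the paper's fitted parameters."""
    def mismatch(z):
        d_l = cosmo.luminosity_distance(z).to(u.cm).value
        L_from_flux = 4.0 * np.pi * d_l**2 * flux
        L_from_yonetoku = A * (epeak_obs * (1.0 + z) / 355.0) ** q
        return np.log10(L_from_flux) - np.log10(L_from_yonetoku)
    return brentq(mismatch, 1e-3, 20.0)

# Example with illustrative burst properties.
print(pseudo_redshift(flux=1e-6, epeak_obs=500.0))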

[submitted] Luminosity function of long gamma-ray bursts using Swift and Fermi

I have used a sample of long gamma-ray bursts (GRBs) common to both Swift and Fermi to re-derive the parameters of the Yonetoku correlation. This allowed me to self-consistently estimate pseudo-redshifts of all the bursts with unknown redshifts. This is the first time such a large sample of GRBs from these two instruments is used, both individually and in conjunction, to model the long GRB luminosity function. The GRB formation rate is modelled as the product of the cosmic star formation rate and a GRB formation efficiency for a given stellar mass. An exponential cut-off power-law luminosity function fits the data reasonably well, with ν = 0.6 and Lb = 5.4 × 10^52 erg s^-1, and does not require a cosmological evolution. In the case of a broken power law, it is required to incorporate a sharp evolution of the break given by Lb ∼ 0.3 × 10^52 (1 + z)^2.90 erg s^-1, and the GRB formation efficiency (degenerate up to a beaming factor of GRBs) decreases with redshift as ∝ (1 + z)^-0.80. However, it is not possible to distinguish between the two models. The derived models are then used as templates to predict the distribution of GRBs detectable by the CZT Imager onboard AstroSat as a function of redshift and luminosity. This demonstrates that via a quick localization and redshift measurement of even a few CZT Imager GRBs, AstroSat will help in improving the statistics of GRBs, both typical and peculiar.
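
To make the quoted best-fit concrete, the sketch below evaluates an exponential cutoff power-law luminosity function of the assumed form phi(L) ∝ (L/Lb)^(-ν) exp(-L/Lb), using the quoted ν = 0.6 and Lb = 5.4 × 10^52 erg/s. The normalization range and the exact functional form are assumptions for illustration; the paper's parameterization may differ in detail.

import numpy as np
from scipy.integrate import quad

NU = 0.6                    # power-law index quoted in the abstract
L_B = 5.4e52                # cutoff luminosity (erg/s) quoted in the abstract
L_MIN, L_MAX = 1e49, 1e55   # assumed luminosity range for normalization

def phi_shape(L):
    """Assumed exponential cutoff power-law form: phi(L) ∝ (L/Lb)^(-nu) * exp(-L/Lb)."""
    return (L / L_B) ** (-NU) * np.exp(-L / L_B)

def integral(l_lo, l_hi):
    """Integrate phi_shape(L) dL in log-luminosity so quad stays well behaved
    over many decades of L."""
    f = lambda lnL: phi_shape(np.exp(lnL)) * np.exp(lnL)
    val, _ = quad(f, np.log(l_lo), np.log(l_hi))
    return val

norm = integral(L_MIN, L_MAX)
frac_bright = integral(L_B, L_MAX) / norm
print(f"fraction of bursts with L > Lb under this form: {frac_bright:.3f}")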

2018 Apr 05

[submitted] APPHi: An Automated Photometry Pipeline for High Cadence, Large Volume Data

The software package APPHi (Automated Photometry Pipeline for High Cadence, Large Volume Data) carries out aperture and differential photometry of the data to be produced by the TAOS-II project. It is computationally efficient and, as such, can also be used with other wide-field astronomical image data. The main features of APPHi are its computational speed, its capacity to work with large volumes of data, and its ability to handle both FITS and HDF5 formats. Because of the large number of stars the software has to handle in an enormous number of frames, it is optimized to automatically find the best values for the parameters needed to carry out the photometry, such as the mask size for the aperture, the size of the window for extracting a single star, and the count threshold for detecting a faint star. Using images with features similar to those that will be collected by TAOS-II, we compared its performance with IRAF and found that APPHi can obtain light curves of comparable quality in terms of SNR, but up to 300 times faster. In addition to its efficiency, the tests carried out show that APPHi, although intended to work with TAOS-II data, can also be used to analyze any set of astronomical images, being a robust and versatile tool for stellar aperture and differential photometry.
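
For readers unfamiliar with the differential photometry step, the sketch below shows its simplest form: divide the target's flux in each frame by the summed flux of comparison stars so that transparency and airmass variations shared by all stars in the field divide out. This is a generic illustration, not APPHi's implementation.

import numpy as np

def differential_light_curve(target_flux, comparison_fluxes):
    """Simplest differential photometry (illustrative, not APPHi's code):
    divide the target's flux per frame by the ensemble flux of the comparison
    stars, then normalize the ratio to its median.

    target_flux:        1-D array, one flux measurement per frame
    comparison_fluxes:  2-D array, shape (n_comparison_stars, n_frames)"""
    ensemble = comparison_fluxes.sum(axis=0)
    ratio = target_flux / ensemble
    return ratio / np.median(ratio)

# Example: 5 frames, 2 comparison stars, a shared 10% transparency dip in frame 4.
trend = np.array([1.0, 1.0, 1.0, 0.9, 1.0])
target = 1000.0 * trend
comps = np.vstack([800.0 * trend, 1200.0 * trend])
print(differential_light_curve(target, comps))   # flat: the shared dip divides out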

2018 Apr 03

[submitted] ASERA: A spectrum eye recognition assistant for quasar spectra

Spectral type recognition is an important and fundamental step in the data reduction of large sky survey projects, underpinning further scientific research such as parameter measurement and statistical work. Manually inspecting the low-quality spectra produced by a massive spectroscopic survey, for which the automatic pipeline may not provide confident classifications, turns out to be a huge job. To improve the efficiency and effectiveness of spectral classification, we developed a semi-automated toolkit named ASERA, A Spectrum Eye Recognition Assistant. The main purpose of ASERA is to help the user with quasar spectral recognition and redshift measurement. It can also be used to recognize various types of spectra of stars, galaxies and AGNs (Active Galactic Nuclei). It is an interactive tool that allows the user to visualize observed spectra, superimpose template spectra from the Sloan Digital Sky Survey (SDSS), and interactively access related spectral line information. It is an efficient and user-friendly toolkit for the accurate classification of spectra observed by LAMOST (the Large Sky Area Multi-Object Fiber Spectroscopic Telescope). The toolkit is available in two modes: a Java standalone application and a Java applet. ASERA provides functions such as wavelength and flux scale setting, zooming in and out, redshift estimation, and spectral line identification, which help the user improve spectral classification accuracy, especially for low-quality spectra, and reduce the labor of eyeball checks. The function and performance of the tool are demonstrated through the recognition of several quasar spectra and a late-type stellar spectrum from the LAMOST Pilot Survey. Its future expansion capabilities are discussed.
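
The core of the template-superposition step is shifting a rest-frame template to a trial redshift via lambda_obs = lambda_rest * (1 + z) and resampling it onto the observed wavelength grid. The NumPy sketch below illustrates that operation; it is a minimal stand-in for what ASERA (written in Java) does interactively, and the function name is hypothetical.

import numpy as np

def redshift_template(wave_rest, flux_template, z, wave_obs):
    """Shift a rest-frame template to redshift z and resample it onto the
    observed wavelength grid, using lambda_obs = lambda_rest * (1 + z).
    (Illustrative of the superposition step, not ASERA's Java code.)"""
    wave_shifted = wave_rest * (1.0 + z)
    return np.interp(wave_obs, wave_shifted, flux_template,
                     left=np.nan, right=np.nan)

# Example: place a rest-frame 1216 Å (Lyman-alpha) feature at z = 2.5.
wave_rest = np.linspace(1000.0, 3000.0, 2001)
flux = np.exp(-0.5 * ((wave_rest - 1216.0) / 5.0) ** 2)
wave_obs = np.linspace(3800.0, 9000.0, 2601)
shifted = redshift_template(wave_rest, flux, z=2.5, wave_obs=wave_obs)
print(wave_obs[np.nanargmax(shifted)])   # ~4256 Å = 1216 * (1 + 2.5)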

2018 Mar 31

[ascl:1803.015] RAPTOR: Imaging code for relativistic plasmas in strong gravity

RAPTOR produces accurate images, animations, and spectra of relativistic plasmas in strong gravity by numerically integrating the equations of motion of light rays and performing time-dependent radiative transfer calculations along the rays. The code is compatible with any analytical or numerical spacetime, is hardware-agnostic, and may be compiled and run on both GPUs and CPUs. RAPTOR is useful for studying accretion models of supermassive black holes, performing time-dependent radiative transfer through general relativistic magneto-hydrodynamical (GRMHD) simulations, and investigating the expected observational differences between the so-called fast-light and slow-light paradigms.
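
Numerically integrating a ray's equations of motion typically means stepping a system dy/dlambda = f(lambda, y) (positions and wavevector components) along the affine parameter. The sketch below shows a classical fourth-order Runge-Kutta step as one standard way to do this; RAPTOR's actual integrator and stepping strategy may differ, and the flat-space check here is purely illustrative.

import numpy as np

def rk4_step(f, y, lam, dlam):
    """One classical fourth-order Runge-Kutta step for dy/dlambda = f(lam, y)."""
    k1 = f(lam, y)
    k2 = f(lam + 0.5 * dlam, y + 0.5 * dlam * k1)
    k3 = f(lam + 0.5 * dlam, y + 0.5 * dlam * k2)
    k4 = f(lam + dlam, y + dlam * k3)
    return y + (dlam / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

# Trivial check in flat space: dx/dlambda = k, dk/dlambda = 0, so the ray is a
# straight line x = x0 + k * lambda.
def flat_space_rhs(lam, y):
    k = y[3:]
    return np.concatenate([k, np.zeros(3)])

y = np.array([0.0, 0.0, 0.0, 1.0, 0.5, 0.0])   # position, then wavevector
for _ in range(10):
    y = rk4_step(flat_space_rhs, y, 0.0, 0.1)
print(y[:3])   # -> [1.0, 0.5, 0.0]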

[ascl:1803.014] ExoCross: Spectra from molecular line lists

ExoCross generates spectra and thermodynamic properties from molecular line lists in ExoMol, HITRAN, or several other formats. The code is parallelized and also shows a high degree of vectorization; it works with line profiles such as Doppler, Lorentzian and Voigt and supports several broadening schemes. ExoCross is also capable of working with the recently proposed method of super-lines. It supports calculations of lifetimes, cooling functions, specific heats and other properties. ExoCross converts between different formats, such as HITRAN, ExoMol and Phoenix, and simulates non-LTE spectra using a simple two-temperature approach. Different electronic, vibronic or vibrational bands can be simulated separately using an efficient filtering scheme based on the quantum numbers.
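
Of the line profiles mentioned, the Voigt profile (the convolution of a Doppler Gaussian with a Lorentzian) is the least obvious to compute; a standard construction evaluates it as the real part of the Faddeeva function, as sketched below. This is the generic formula, not ExoCross's own (Fortran) routine.

import numpy as np
from scipy.special import wofz

def voigt_profile(nu, nu0, sigma_doppler, gamma_lorentz):
    """Voigt line profile: convolution of a Gaussian (Doppler, standard deviation
    sigma_doppler) and a Lorentzian (half-width gamma_lorentz), evaluated via the
    real part of the Faddeeva function w(z). Standard construction for illustration."""
    z = ((nu - nu0) + 1j * gamma_lorentz) / (sigma_doppler * np.sqrt(2.0))
    return np.real(wofz(z)) / (sigma_doppler * np.sqrt(2.0 * np.pi))

# Example: profile on a grid around line center; the area integrates to ~1.
nu = np.linspace(-50.0, 50.0, 5001)
prof = voigt_profile(nu, nu0=0.0, sigma_doppler=1.0, gamma_lorentz=0.5)
print(prof.sum() * (nu[1] - nu[0]))   # ~1.0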

[ascl:1803.013] optBINS: Optimal Binning for histograms

optBINS (optimal binning) determines the optimal number of bins in a uniform bin-width histogram by deriving the posterior probability for the number of bins in a piecewise-constant density model, after assigning a multinomial likelihood and a non-informative prior. The maximum of the posterior probability occurs where the prior probability and the joint likelihood are balanced. The interplay between these opposing factors effectively implements Occam's razor by selecting the simplest model that best describes the data.
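
A minimal sketch of this idea, written in a form consistent with the description above (piecewise-constant model, multinomial likelihood, non-informative prior, constant terms independent of the bin count dropped), is given below; the original optBINS code is distributed as MATLAB/IDL routines, so this Python version is only illustrative.

import numpy as np
from scipy.special import gammaln

def log_posterior_nbins(data, m):
    """Relative log posterior for a uniform bin-width histogram with m bins under
    a piecewise-constant density model with a multinomial likelihood and a
    non-informative prior (terms independent of m are dropped)."""
    counts, _ = np.histogram(data, bins=m)
    n = counts.sum()
    return (n * np.log(m)
            + gammaln(0.5 * m)
            - m * gammaln(0.5)
            - gammaln(n + 0.5 * m)
            + np.sum(gammaln(counts + 0.5)))

# Pick the number of bins that maximizes the posterior.
rng = np.random.default_rng(0)
data = rng.normal(size=1000)
m_grid = np.arange(2, 200)
best_m = m_grid[np.argmax([log_posterior_nbins(data, m) for m in m_grid])]
print(best_m)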