ASCL.net

Astrophysics Source Code Library

Making codes discoverable since 1999

Browsing Codes

[ascl:1507.013] K-Inpainting: Inpainting for Kepler

Inpainting is a technique for dealing with gaps in time series data, which occur frequently in asteroseismic data and may generate spurious peaks in the power spectrum, thus limiting the analysis of the data. The inpainting method, based on a sparsity prior, judiciously fills in the gaps, preserving the asteroseismic signal as far as possible. The method can be applied to both ground-based and space-based data. The inpainting technique improves the detection and estimation of oscillation modes, reduces the impact of the observational window function, and simplifies interpretation of the power spectrum. K-Inpainting can be used to study very long time series of many stars because its computation is very fast.
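
As a rough illustration of the idea only (K-Inpainting itself uses a multiscale discrete cosine transform and its own threshold schedule), a sparsity-prior gap filler can be sketched as an iterative hard-thresholding loop; the function name and the linear threshold decay below are illustrative assumptions:

    import numpy as np
    from scipy.fft import dct, idct

    def inpaint_gaps(y, observed, n_iter=100):
        """Fill gaps in a 1-D time series by iterative hard thresholding
        in the DCT domain (a simple sparsity prior)."""
        x = np.where(observed, y, 0.0)
        lam_max = np.abs(dct(x, norm='ortho')).max()
        for i in range(n_iter):
            lam = lam_max * (1.0 - i / n_iter)   # linearly decreasing threshold
            c = dct(x, norm='ortho')
            c[np.abs(c) < lam] = 0.0             # keep only significant coefficients
            x = idct(c, norm='ortho')
            x[observed] = y[observed]            # never alter the observed samples
        return x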

[ascl:1507.014] getsources: Multi-scale, multi-wavelength source extraction

getsources is a powerful multi-scale, multi-wavelength source extraction algorithm. It analyzes fine spatial decompositions of original images across a wide range of scales and across all wavebands, cleans those single-scale images of noise and background, and constructs wavelength-independent single-scale detection images that preserve information in both spatial and wavelength dimensions. getsources offers several advantages over other existing methods of source extraction, including the filtering out of irrelevant spatial scales to improve detectability, especially in crowded regions and for extended sources, the ability to combine data over all wavebands, and the full automation of the extraction process.

[ascl:1507.015] DALI: Derivative Approximation for LIkelihoods

DALI (Derivative Approximation for LIkelihoods) is a fast approximation of non-Gaussian likelihoods. It extends the Fisher Matrix in a straightforward way and allows for a wider range of posterior shapes. The code is written in C/C++.

[ascl:1507.016] Least Asymmetry: Centering Method

Least Asymmetry finds the center of a distribution of light in an image using the least asymmetry method; the code also includes center-of-light and Gaussian-fitting routines. All functions in Least Asymmetry are designed to take optional weights.
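
The center-of-light step is simply an intensity-weighted centroid; a minimal sketch (not the Least Asymmetry code itself, whose routines and weighting options differ) might look like:

    import numpy as np

    def center_of_light(image, weights=None):
        """Intensity-weighted centroid (x, y) of a 2-D image, with optional
        per-pixel weights."""
        w = image if weights is None else image * weights
        y, x = np.indices(image.shape)
        total = w.sum()
        return (x * w).sum() / total, (y * w).sum() / total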

[ascl:1507.017] REDSPEC: NIRSPEC data reduction

REDSPEC is an IDL-based reduction package designed with NIRSPEC in mind, though it can be used to reduce data from other spectrographs as well. REDSPEC accomplishes spatial rectification by summing an A+B pair of a calibration star to produce an image with two spectra; the image is remapped on the basis of polynomial fits to the spectral traces and calculation of Gaussian centroids to define their separation, producing straight spectral traces with respect to the detector rows. The raw images are remapped onto a coordinate system with uniform intervals in spatial extent along the slit and in wavelength along the dispersion axis.

[ascl:1507.018] pyro: Python-based tutorial for computational methods for hydrodynamics

pyro is a simple python-based tutorial on computational methods for hydrodynamics. It includes 2-d solvers for advection, compressible, incompressible, and low Mach number hydrodynamics, diffusion, and multigrid. It is written with ease of understanding in mind. An extensive set of notes that is part of the Open Astrophysics Bookshelf project provides details of the algorithms.

[ascl:1507.019] AstroStat: Statistical analysis tool

AstroStat performs statistical analysis on data and is compatible with Virtual Observatory (VO) standards. It accepts data in a variety of formats and performs various statistical tests using a menu-driven interface. Analyses, performed in R, include exploratory tests, visualizations, distribution fitting, correlation and causation, hypothesis testing, multivariate analysis, and clustering. AstroStat is available in two versions with an identical interface and features: as a web service that can be run using any standard browser and as an offline application.

[ascl:1507.020] IEHI: Ionization Equilibrium for Heavy Ions

IEHI, written in Fortran, outputs a simple "coronal" ionization equilibrium (i.e., collisional ionization and auto-ionization balanced by radiative and dielectronic recombination) for a plasma at a given electron temperature.
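
In such a "coronal" equilibrium, the population of each ionization stage is set by balancing ionization out of stage i against recombination back into it, n_i C_i(T) = n_(i+1) alpha_(i+1)(T). A toy sketch of chaining these ratios follows; the rate coefficients are placeholders, not IEHI's atomic data:

    import numpy as np

    def ion_fractions(ionization_rates, recombination_rates):
        """Relative ion populations from the coronal balance
        n_i * C_i = n_(i+1) * alpha_(i+1), normalized to sum to 1.
        ionization_rates[i] = C_i, recombination_rates[i] = alpha_(i+1)."""
        n = [1.0]
        for C, alpha in zip(ionization_rates, recombination_rates):
            n.append(n[-1] * C / alpha)
        n = np.array(n)
        return n / n.sum()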

[ascl:1508.001] HMcode: Halo-model matter power spectrum computation

HMcode computes the halo-model matter power spectrum. It is written in Fortran90 and has been designed to quickly (~0.5s for 200 k-values across 16 redshifts on a single core) produce matter spectra for a wide range of cosmological models. In testing it was shown to match spectra produced by the 'Coyote Emulator' to an accuracy of 5 per cent for k less than 10h Mpc^-1. However, it can also produce spectra well outside of the parameter space of the emulator.

[ascl:1508.002] NICOLE: NLTE Stokes Synthesis/Inversion Code

NICOLE, written in Fortran 90, seeks the model atmosphere that provides the best fit to the Stokes profiles (in a least-squares sense) of an arbitrary number of simultaneously-observed spectral lines from solar/stellar atmospheres. The inversion core used for the development of NICOLE is the LORIEN engine (the Lovely Reusable Inversion ENgine), which combines the SVD technique with the Levenberg-Marquardt minimization method to solve the inverse problem.

[ascl:1508.003] REDUCEME: Long-slit spectroscopic data reduction and analysis

The astronomical data reduction package REDUCEME reduces and analyzes long-slit spectroscopic data. The package uses the unformatted FORTRAN raw data format, so it requires that FITS files be transformed to the REDUCEME format; the reverse operation (from REDUCEME to FITS format) is also available. The package is a set of programs written in FORTRAN 77 and includes shell scripts (using the C shell syntax) to perform routine tasks; it can be extended by the inclusion of external programs. REDUCEME uses PGPLOT (ascl:1103.002) for line plots and images, and a subset of subroutines, called BUTTON, enables the user to communicate interactively with the image display employing graphic buttons. One advantage of using REDUCEME is that for each image an associated error image can also be processed throughout the reduction process, allowing careful control of error propagation.

[ascl:1508.004] FRELLED: FITS Realtime Explorer of Low Latency in Every Dimension

FRELLED (FITS Realtime Explorer of Low Latency in Every Dimension) creates 3D images in real time from 3D FITS files and is written in Python for the 3D graphics suite Blender. Users can interactively generate masks around regions of arbitrary geometry and use them to catalog sources, hide regions, and perform basic analysis (e.g., image statistics within the selected region, generating contour plots, querying NED and the SDSS). World coordinates are supported and multi-volume rendering is possible. FRELLED is designed for viewing HI data cubes and provides a number of tasks equivalent to commonly-used MIRIAD (ascl:1106.007) tasks (e.g., mbspect); however, many of its features are suitable for any type of data set. It also includes an n-body particle viewer able to display 3D vector information, as well as the ability to render time series movies of multiple FITS files and set up simple turntable rotation movies for single files.

[ascl:1508.005] ColorPro: PSF-corrected aperture-matched photometry

ColorPro automatically obtains robust colors across images of varied PSF. To correct for the flux lost in images with poorer PSF, the "detection image" is blurred to match the PSF of these other images, allowing observation of how much flux is lost. All photometry is performed in the highest resolution frame (images being aligned given WCS information in the FITS headers), and identical apertures are used in every image. Usually isophotal apertures are used, as determined by SExtractor (ascl:1010.064). Using SExSeg (ascl:1508.006), object aperture definitions can be pre-defined and object detections from different image filters can be combined automatically into a single comprehensive "segmentation map." After producing the final photometric catalog, ColorPro can automatically run BPZ (ascl:1108.011) to obtain Bayesian Photometric Redshifts.

[ascl:1508.006] SExSeg: SExtractor segmentation

SExSeg forces SExtractor (ascl:1010.064) to run using a pre-defined segmentation map (the definition of objects and their borders). The defined segments double as isophotal apertures. SExSeg alters the detection image based on a pre-defined segmentation map while preparing your "analysis image" by subtracting the background in a separate SExtractor run (using parameters you specify). SExtractor is then run in "double-image" mode with the altered detection image and background-subtracted analysis image.

[ascl:1508.007] TreeCorr: Two-point correlation functions

TreeCorr efficiently computes two-point correlation functions. It can compute correlations of regular number counts, weak lensing shears, or scalar quantities such as convergence or CMB temperature fluctuations. Two-point correlations may be auto-correlations or cross-correlations, including any combination of shear, kappa, and counts. Two-point functions can be done with correct curved-sky calculation using RA, Dec coordinates, on a Euclidean tangent plane, or in 3D using RA, Dec and a distance. The front end is written in Python, which can be used as a Python module or as a standalone executable using configuration files; the actual computation of the correlation functions is done in C++ using ball trees (similar to kd trees), making the calculation extremely efficient, and when available, OpenMP is used to run in parallel on multi-core machines.
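
For orientation, a typical shear-shear measurement with the Python front end looks roughly like the sketch below; the catalog arrays and binning choices are placeholder assumptions, and the TreeCorr documentation should be consulted for the full set of options:

    import numpy as np
    import treecorr

    # Placeholder catalog: random positions (deg) and small random shears
    rng = np.random.default_rng(0)
    ra, dec = rng.uniform(0, 10, 1000), rng.uniform(-5, 5, 1000)
    g1, g2 = 0.01 * rng.standard_normal((2, 1000))

    cat = treecorr.Catalog(ra=ra, dec=dec, g1=g1, g2=g2,
                           ra_units='deg', dec_units='deg')
    gg = treecorr.GGCorrelation(min_sep=1.0, max_sep=100.0, nbins=20,
                                sep_units='arcmin')
    gg.process(cat)              # ball-tree pair counting, parallelized where available
    xip, xim = gg.xip, gg.xim    # xi_+ and xi_- in each separation bin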

[ascl:1508.008] NGMIX: Gaussian mixture models for 2D images

NGMIX implements Gaussian mixture models for 2D images. Both the PSF profile and the galaxy are modeled using mixtures of Gaussians. Convolutions are thus performed analytically, resulting in fast model generation as compared to methods that perform the convolution in Fourier space. For the galaxy model, NGMIX supports exponential disks and de Vaucouleurs and Sérsic profiles; these are implemented approximately as a sum of Gaussians using the fits from Hogg & Lang (2013). Additionally, any number of Gaussians can be fit, either completely free or constrained to be cocentric and co-elliptical.
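
The analytic convolution rests on a general property of Gaussians: convolving two Gaussians adds their covariances, so for two mixtures one convolves every pair of components. A generic sketch of that bookkeeping (not the ngmix API) is:

    import numpy as np

    def convolve_mixtures(galaxy, psf):
        """Analytically convolve two 2-D Gaussian mixtures.
        Each component is a tuple (amplitude, mean (2,), covariance (2, 2))."""
        convolved = []
        for p, mu_g, cov_g in galaxy:
            for q, mu_p, cov_p in psf:
                convolved.append((p * q, mu_g + mu_p, cov_g + cov_p))
        return convolved

    # Example: a single-Gaussian galaxy convolved with a single-Gaussian PSF
    gal = [(1.0, np.zeros(2), np.diag([4.0, 2.0]))]
    psf = [(1.0, np.zeros(2), np.diag([1.0, 1.0]))]
    model = convolve_mixtures(gal, psf)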

[ascl:1508.009] Trilogy: FITS image conversion software

Trilogy automatically scales and combines FITS images to produce color or grayscale images using Python scripts. The user assigns images to each color channel (RGB) or a single image to grayscale luminosity. Trilogy determines the intensity scaling automatically and independently in each channel to display faint features without saturating bright features. Each channel's scaling is determined based on a sample of the image (or summed images) and two input parameters. One parameter sets the output luminosity of "the noise," currently determined as 1-sigma above the sigma-clipped mean. The other parameter sets what fraction of the data (if any) in the sample region should be allowed to saturate. Default values for these parameters (0.15% and 0.001%, respectively) work well, but the user is able to adjust them. The scaling is accomplished using the logarithmic function y = a log(kx + 1) clipped between 0 and 1, where a and k are constants determined based on the data and desired scaling parameters as described above.
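
The scaling step itself is compact; given the constants a and k (which Trilogy solves for from the noise-luminosity and saturation-fraction parameters), each channel is mapped as in the sketch below. This only restates the formula quoted above, not the Trilogy code that determines a and k:

    import numpy as np

    def log_scale(x, a, k):
        """Map channel data x to [0, 1] with y = a * log(k * x + 1), clipped."""
        y = a * np.log(k * x + 1.0)
        return np.clip(y, 0.0, 1.0)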

[ascl:1508.010] SHDOM: Spherical Harmonic Discrete Ordinate Method for atmospheric radiative transfer

The Spherical Harmonic Discrete Ordinate Method (SHDOM) radiative transfer model computes polarized monochromatic or spectral band radiative transfer in a one-, two-, or three-dimensional medium for collimated solar and/or thermal emission sources of radiation. The model is written in a variant of Fortran 77 and in Fortran 90 and requires a Fortran 90 compiler. Also included are programs for generating the optical property files input to SHDOM from physical properties of water cloud particles and aerosols.

[ascl:1509.001] XSHPipelineManager: Wrapper for the VLT/X-shooter Data Reduction Pipeline

XSHPipelineManager provides a framework for reducing spectroscopic observations taken by the X-shooter spectrograph at the Very Large Telescope. This Python code wraps recipes developed by the European Southern Observatory and runs the full X-shooter data reduction pipeline. The code offers full flexibility in terms of which data reduction recipes to include and which calibration files to use. During the data reduction chain, restart files are saved, making it possible to restart at any step in the chain.

[ascl:1509.002] Tempo: Pulsar timing data analysis

Tempo analyzes pulsar timing data. Pulse times of arrival (TOAs), pulsar model parameters, and coded instructions are read from one or more input files. The TOAs are fitted by a pulse timing model incorporating transformation to the solar-system barycenter, pulsar rotation and spin-down and, where necessary, one of several binary models. Program output includes parameter values and uncertainties, residual pulse arrival times, chi-squared statistics, and the covariance matrix of the model. In prediction mode, ephemerides of pulse phase behavior (in the form of polynomial expansions) are calculated from input timing models. Tempo is the basis for the Tempo2 (ascl:1210.015) code.
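
At its core a timing model predicts the rotational phase at each barycentric TOA, and the fit minimizes the phase residuals. A heavily simplified sketch of an isolated-pulsar spin-down model, omitting the barycentric transformation, binary models, and all of the bookkeeping Tempo actually performs, is:

    import numpy as np

    def phase_residuals(toas, f0, f1, pepoch):
        """Fractional-turn residuals for a simple spin-down model
        phi(t) = f0*(t - pepoch) + 0.5*f1*(t - pepoch)**2, with t in seconds."""
        dt = toas - pepoch
        phase = f0 * dt + 0.5 * f1 * dt**2
        return phase - np.round(phase)   # residual in turns; divide by f0 for seconds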

[ascl:1509.003] AFR (ASPFitsReader): A pulsar FITS file reader and analysis package

AFR, or ASPFitsReader, reduces, processes, and manipulates pulsar data, including calibration, template profile creation, and interactive excision of radio frequency interference from pulsar profile data. It also creates times-of-arrival compatible with Tempo (ascl:1509.002) and Tempo2 (ascl:1210.015) timing software.

[ascl:1509.004] FalconIC: Initial conditions generator for cosmological N-body simulations in Newtonian, Relativistic and Modified theories

FalconIC generates discrete particle positions, velocities, masses and pressures based on linear Boltzmann solutions that are computed by libraries such as CLASS and CAMB. FalconIC generates these initial conditions for any species included in the selection, including Baryons, Cold Dark Matter and Dark Energy fluids. Any species can be set in Eulerian (on a fixed grid) or Lagrangian (particle motion) representation, depending on the gauge and reality chosen. That is, for relativistic initial conditions in the synchronous comoving gauge, Dark Matter can only be described in an Eulerian representation. For all other choices (Relativistic in Longitudinal gauge, Newtonian with relativistic expansion rates, Newtonian without any notion of radiation), all species can be treated in all representations. The code also computes spectra. FalconIC is useful for comparative studies on initial conditions.

[ascl:1509.005] TRUVOT: True Background Technique for the Swift UVOT Grisms

TRUVOT decontaminates Swift UVOT grism spectra for transient objects. The technique makes use of template images in a process similar to image subtraction.

[ascl:1509.006] FARGO3D: Hydrodynamics/magnetohydrodynamics code

A successor of FARGO (ascl:1102.017), FARGO3D is a versatile HD/MHD code that runs on clusters of CPUs or GPUs, with special emphasis on protoplanetary disks. FARGO3D offers Cartesian, cylindrical or spherical geometry; 1-, 2- or 3-dimensional calculations; and orbital advection (aka FARGO) for HD and MHD calculations. As in FARGO, a simple Runge-Kutta N-body solver may be used to describe the orbital evolution of embedded point-like objects. There is no need to know CUDA; users can develop new functions in C and have them translated to CUDA automatically to run on GPUs.

[ascl:1509.007] pycola: N-body COLA method code

pycola is a multithreaded Python/Cython N-body code implementing the Comoving Lagrangian Acceleration (COLA) method in the temporal and spatial domains, which trades accuracy at small scales to gain computational speed without sacrificing accuracy at large scales. This is especially useful for cheaply generating large ensembles of accurate mock halo catalogs required to study galaxy clustering and weak lensing. The COLA method achieves its speed by calculating the large-scale dynamics exactly using LPT while letting the N-body code solve for the small scales, without requiring it to capture exactly the internal dynamics of halos.

[ascl:1509.008] GFARGO: FARGO for GPU

GFARGO is a GPU version of FARGO (ascl:1102.017). It is written in C and C for CUDA and runs only on NVIDIA’s graphics cards. Though it corresponds to the standard, isothermal version of FARGO, not all functionalities of the CPU version have been translated to CUDA. The code is available in single and double precision versions, the latter compatible with FERMI architectures. GFARGO can run on a graphics card connected to the display, allowing the user to see in real time how the fields evolve.

[ascl:1509.009] OPERA: Objective Prism Enhanced Reduction Algorithms

OPERA (Objective Prism Enhanced Reduction Algorithms) automatically analyzes astronomical images using the objective-prism (OP) technique to register thousands of low resolution spectra in large areas. It detects objects in an image, extracts one-dimensional spectra, and identifies the emission line feature. The main advantages of this method are: 1) to avoid subjectivity inherent to visual inspection used in past studies; and 2) the ability to obtain physical parameters without follow-up spectroscopy.

[ascl:1509.010] PyCS : Python Curve Shifting

PyCS is a software toolbox to estimate time delays between multiple images of strongly lensed quasars from resolved light curves such as those obtained by the COSMOGRAIL monitoring program. The pycs package defines a collection of classes and high-level functions that you can script in a flexible way. PyCS makes it easy to compare different point estimators (including your own) without much code integration. The package heavily depends on numpy, scipy, and matplotlib.

[ascl:1510.001] GGADT: Generalized Geometry Anomalous Diffraction Theory

GGADT uses anomalous diffraction theory (ADT) to compute the differential scattering cross section (or the total cross sections as a function of energy) for a specified grain of arbitrary geometry (natively supports spheres, ellipsoids, and clusters of spherical monomers). It is written in Fortran 95. ADT is valid when the grain is large compared to the wavelength of incident light. GGADT can calculate either the integrated cross sections (absorption, scattering, extinction) as a function of energy, or it can calculate the differential scattering cross section as a function of scattering angle.

[ascl:1510.002] batman: BAsic Transit Model cAlculatioN in Python

batman provides fast calculation of exoplanet transit light curves and supports calculation of light curves for any radially symmetric stellar limb darkening law. It uses an integration algorithm for models that cannot be quickly calculated analytically, and in typical use, the batman Python package can calculate a million model light curves in well under ten minutes for any limb darkening profile.
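
A minimal use of the package follows the pattern below; the parameter values are arbitrary illustrations, and the batman documentation describes the available limb darkening laws and integration options:

    import numpy as np
    import batman

    params = batman.TransitParams()
    params.t0 = 0.0             # time of inferior conjunction
    params.per = 1.0            # orbital period [days]
    params.rp = 0.1             # planet radius [stellar radii]
    params.a = 15.0             # semi-major axis [stellar radii]
    params.inc = 87.0           # inclination [deg]
    params.ecc = 0.0            # eccentricity
    params.w = 90.0             # longitude of periastron [deg]
    params.limb_dark = "quadratic"
    params.u = [0.1, 0.3]       # limb-darkening coefficients

    t = np.linspace(-0.05, 0.05, 1000)   # times at which to evaluate the model
    m = batman.TransitModel(params, t)
    flux = m.light_curve(params)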

[ascl:1510.003] PyLDTk: Python toolkit for calculating stellar limb darkening profiles and model-specific coefficients for arbitrary filters

PyLDTk automates the calculation of custom stellar limb darkening (LD) profiles and model-specific limb darkening coefficients (LDC) using the library of PHOENIX-generated specific intensity spectra by Husser et al. (2013). It facilitates exoplanet transit light curve modeling, especially transmission spectroscopy, where the modeling is carried out for custom narrow passbands. PyLDTk constructs model-specific priors on the limb darkening coefficients prior to the transit light curve modeling. It can also be directly integrated into the log posterior computation of any pre-existing transit modeling code with minimal modifications, constraining the LD model parameter space directly by the LD profile and allowing marginalization over the whole parameter space that can explain the profile, without the need to approximate this constraint by a prior distribution. This is useful when using a high-order limb darkening model, where the coefficients are often correlated and priors estimated from tabulated values usually fail to include these correlations.

[ascl:1510.004] DEBiL: Detached Eclipsing Binary Light curve fitter

DEBiL rapidly fits a large number of light curves to a simple model. It is the central component of a pipeline for systematically identifying and analyzing eclipsing binaries within a large dataset of light curves; the results of DEBiL can be used to flag light curves of interest for follow-up analysis.

[ascl:1510.005] GALFORM: Galactic modeling

GALFORM is a semi-analytic model for calculating the formation and evolution of galaxies in hierarchical clustering cosmologies. Using a Monte Carlo algorithm to follow the merging evolution of dark matter haloes with arbitrary mass resolution, it incorporates realistic descriptions of the density profiles of dark matter haloes and the gas they contain. It follows the chemical evolution of gas and stars and the associated production of dust, and it includes a detailed calculation of the sizes of discs and spheroids.

[ascl:1510.006] ASPIC: STARLINK image processing package

ASPIC handled basic astronomical image processing. Early releases concentrated on image arithmetic, standard filters, expansion/contraction/selection/combination of images, and displaying and manipulating images on the ARGS and other devices. Later releases added new astronomy-specific applications to this sound framework. The ASPIC collection of about 400 image-processing programs was written using the Starlink "interim" environment in the 1980s; the software is now obsolete.

[submitted] Xsmurf - Measuring multifractal properties with the continuous wavelet transform modulus maxima (WTMM) method

Xsmurf is a software package written in C/Tcl/Tk that implements the continuous wavelet transform modulus maxima method, an image processing tool for measuring fractal and multifractal properties in experimental and simulation data.
Multifractal analysis is described on the following page: http://www.scholarpedia.org/article/Wavelet-based_multifractal_analysis

Xsmurf has been used in multiple applications in astrophysics, e.g.:
- analysis of solar magnetograms for characterizing complexity of evolving regions
- fractal/multifractal nature and anisotropic structure of Galactic atomic hydrogen (H I)
- analysis of simulation data (velocity field, ...) of turbulent flow

[ascl:1510.007] ccdproc: CCD data reduction software

Ccdproc is an Astropy-affiliated package for basic data reduction of CCD images. It provides many of the tools necessary for processing CCD images, built on a framework that provides error propagation and bad-pixel tracking throughout the reduction process.
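
A typical reduction chains a few of these steps together; a minimal sketch (the file names are placeholders) is:

    from astropy.nddata import CCDData
    import ccdproc

    # Placeholder file names; units must be given if absent from the FITS headers
    raw = CCDData.read('science.fits', unit='adu')
    bias = CCDData.read('master_bias.fits', unit='adu')
    flat = CCDData.read('master_flat.fits', unit='adu')

    reduced = ccdproc.subtract_bias(raw, bias)     # uncertainties propagated through
    reduced = ccdproc.flat_correct(reduced, flat)
    reduced.write('science_reduced.fits', overwrite=True)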

[ascl:1511.001] SuperFreq: Numerical determination of fundamental frequencies of an orbit

SuperFreq numerically estimates the fundamental frequencies and orbital actions of pre-computed orbital time series. It is an implementation of a version of the Numerical Analysis of Fundamental Frequencies close to that by Monica Valluri, which itself is an implementation of an algorithm first used by Jacques Laskar.

[ascl:1511.002] JSPAM: Interacting galaxies modeller

JSPAM models galaxy collisions using a restricted n-body approach to speed up computation. Instead of using a softened point-mass potential, the software supports a modified version of the three-component potential created by Hernquist (1994, ApJS 86, 389). Although spherically symmetric gravitational potentials and a Gaussian model for the bulge are used to increase computational efficiency, the potential mimics that of a fully consistent n-body model of a galaxy. Dynamical friction has been implemented in the code to improve the accuracy of close approaches between galaxies. Simulations with thousands of particles over the typical interaction times of a galaxy encounter take a few seconds on modern desktop workstations, making the code ideal for rapidly prototyping the dynamics of colliding galaxies. Extensive testing has shown that it produces nearly identical tidal features to those from hierarchical tree codes such as Gadget while using a fraction of the computational resources. This code was used in the Galaxy Zoo: Mergers project and is very well suited for automated fitting of galaxy mergers with pattern-fitting approaches such as genetic algorithms. Java and Fortran versions of the code are available.

[ascl:1511.003] SkyView Virtual Telescope

The SkyView Virtual Telescope provides access to survey datasets ranging from the radio through the gamma-ray regimes. Over 100 survey datasets are currently available. The SkyView library referenced here is used as the basis for the SkyView web site (at http://skyview.gsfc.nasa.gov) but is designed for individual use by researchers as well.

SkyView's approach to accessing surveys is distinct from that of most other toolkits. Rather than providing links to the original data, SkyView attempts to immediately re-render the source data in the user-requested reference frame, projection, scaling, orientation, etc. The library includes a set of geometry transformation and mosaicking tools that may be integrated into other applications independent of SkyView.

[ascl:1511.004] Xgremlin: Interferograms and spectra from Fourier transform spectrometers analysis

Xgremlin is a hardware and operating system independent version of the data analysis program Gremlin used for Fourier transform spectrometry. Xgremlin runs on PCs and workstations that use the X11 window system, including cygwin in Windows. It is used to Fourier transform interferograms, plot spectra, perform phase corrections, perform intensity and wavenumber calibration, and find and fit spectral lines. It can also be used to construct synthetic spectra, subtract continua, compare several different spectra, and eliminate ringing around lines.

[ascl:1511.005] pyhrs: Spectroscopic data reduction package for SALT

The pyhrs package reduces data from the High Resolution Spectrograph (HRS) on the Southern African Large Telescope (SALT). HRS is a dual-beam, fiber-fed echelle spectrograph with four modes of operation: low (R~16000), medium (R~34000), high (R~65000), and high stability (R~65000). pyhrs, written in Python, includes all of the steps necessary to reduce HRS low, medium, and high resolution data; this includes basic CCD reductions, order identification, wavelength calibration, and extraction of the spectra.

[ascl:1511.006] T-Matrix: Codes for Computing Electromagnetic Scattering by Nonspherical and Aggregated Particles

The T-Matrix package includes codes to compute electromagnetic scattering by homogeneous, rotationally symmetric nonspherical particles in fixed and random orientations, randomly oriented two-sphere clusters with touching or separated components, and multi-sphere clusters in fixed and random orientations. All codes are written in Fortran-77. LAPACK-based, extended-precision, Gauss-elimination- and NAG-based, and superposition codes are available, as are double-precision superposition, parallelized double-precision, double-precision Lorenz-Mie codes, and codes for the computation of the coefficients for the generalized Chebyshev shape.

[ascl:1511.007] MHF: MLAPM Halo Finder

MHF is a Dark Matter halo finder that is based on the refinement grids of MLAPM. The grid structure of MLAPM adaptively refines around high-density regions with an automated refinement algorithm, thus naturally "surrounding" the Dark Matter halos, as they are simply manifestations of over-densities within (and exterior to) the underlying host halo. Using this grid structure, MHF restructures the hierarchy of nested isolated MLAPM grids into a "grid tree". The densest cell at the end of a tree branch marks the center of a prospective Dark Matter halo. All gravitationally bound particles about this center are collected to obtain the final halo catalog. MHF automatically finds halos within halos within halos.

[ascl:1511.008] MCAL: M dwarf metallicity and temperature calculator

MCAL calculates high precision metallicities and effective temperatures for M dwarfs; the method behaves properly down to R = 40 000 and S/N = 25, and results were validated against a sample of stars in common with SOPHIE high resolution spectra.

[ascl:1511.009] Pangloss: Reconstructing lensing mass

Pangloss reconstructs all the mass within a light cone through the Universe. Understanding complex mass distributions like this is important for accurate time delay lens cosmography, and also for accurate lens magnification estimation. It aspires to use all available data in an attempt to make the best of all mass maps.

[ascl:1511.010] Galileon-Solver: N-body code

Galileon-Solver adds an extra force to PMCode (ascl:9909.001) using a modified Poisson equation to provide a non-linearly transformed density field, with the operations all performed in real space. The code's implicit spherical top-hat assumption only works over fairly long distance averaging scales, where the coarse-grained picture it relies on is a good approximation of reality; it uses discrete Fourier transforms and cyclic reduction in the usual way.

[ascl:1511.011] SparsePZ: Sparse Representation of Photometric Redshift PDFs

SparsePZ uses sparse basis representation to fully represent individual photometric redshift probability density functions (PDFs). This approach requires approximately half the parameters for the same multi-Gaussian fitting accuracy, and has the additional advantage that an entire PDF can be stored by using a 4-byte integer per basis function. Only 10-20 points per galaxy are needed to reconstruct both the individual PDFs and the ensemble redshift distribution, N(z), to an accuracy of 99.9 per cent when compared to the one built using the original PDFs computed with a resolution of δz = 0.01, reducing the required storage of 200 original values by a factor of 10-20. This basis representation can be directly extended to a cosmological analysis, thereby increasing computational performance without losing resolution or accuracy.
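
The underlying idea is a greedy sparse decomposition of each PDF onto a dictionary of basis functions. A toy matching-pursuit sketch follows; it illustrates the technique only and is not the SparsePz code or its 4-byte integer packing scheme, and the Gaussian dictionary parameters are placeholders:

    import numpy as np

    def gaussian_dictionary(z, centers, widths):
        """Unit-norm Gaussian basis functions on a redshift grid."""
        basis = np.array([np.exp(-0.5 * ((z - c) / w) ** 2)
                          for c in centers for w in widths])
        return basis / np.linalg.norm(basis, axis=1, keepdims=True)

    def sparse_fit(pdf, basis, n_terms=20):
        """Greedy matching pursuit: pick the best-matching basis function,
        subtract its contribution, and repeat n_terms times."""
        residual, indices, coeffs = pdf.copy(), [], []
        for _ in range(n_terms):
            dots = basis @ residual
            k = int(np.argmax(np.abs(dots)))
            indices.append(k)
            coeffs.append(dots[k])
            residual = residual - dots[k] * basis[k]
        return indices, coeffs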

[ascl:1511.012] milkywayproject_triggering: Correlation functions for two catalog datasets

This triggering code calculates the correlation function between two astrophysical data catalogs using the Landy-Szalay estimator generalized for heterogeneous datasets (Landy & Szalay, 1993; Bradshaw et al., 2011), or the auto-correlation function of one dataset. It assumes that one catalog has positional information as well as an object size (effective radius), and the other only positional information.
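
For reference, the generalized Landy-Szalay estimator for two catalogs D1 and D2 with random counterparts R1 and R2 combines normalized pair counts as in the sketch below (the pair counting itself, and this code's handling of object sizes, are omitted):

    def landy_szalay_cross(d1d2, d1r2, d2r1, r1r2):
        """Generalized Landy-Szalay estimator for the cross-correlation of two
        catalogs; inputs are normalized pair counts in a separation bin."""
        return (d1d2 - d1r2 - d2r1 + r1r2) / r1r2

    def landy_szalay_auto(dd, dr, rr):
        """Standard auto-correlation estimator xi = (DD - 2DR + RR) / RR."""
        return (dd - 2.0 * dr + rr) / rr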

[ascl:1511.013] CCDtoRGB: RGB image production from three-band astronomical images

CCDtoRGB produces red‐green‐blue (RGB) composites from three‐band astronomical images, ensuring an object with a specified astronomical color has a unique color in the RGB image rather than burnt‐out white stars. Use of an arcsinh stretch shows faint objects while simultaneously preserving the structure of brighter objects in the field, such as the spiral arms of large galaxies.
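
A common form of this arcsinh scheme (following Lupton et al.) scales all three channels by the same factor derived from a total-intensity image, so that color ratios are preserved; the sketch below is a minimal illustration, with the parameter names Q and alpha being assumptions rather than CCDtoRGB's interface:

    import numpy as np

    def arcsinh_rgb(r, g, b, Q=8.0, alpha=0.02):
        """Combine three aligned images into an RGB cube using a shared
        arcsinh stretch of the mean intensity, preserving color ratios."""
        i = (r + g + b) / 3.0
        i = np.where(i > 0, i, 1e-12)                 # avoid division by zero
        stretch = np.arcsinh(alpha * Q * i) / (Q * i)
        rgb = np.dstack([r, g, b]) * stretch[..., None]
        return np.clip(rgb, 0.0, 1.0)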

[ascl:1511.014] HumVI: Human Viewable Image creation

HumVI creates a composite color image from sets of input FITS files, following the Lupton et al. (2004, ascl:1511.013) composition algorithm. Written in Python, it takes three FITS files as input and returns a color-saturated PNG composite image with an arcsinh stretch. HumVI reads the zero points out of the FITS headers and uses them to put all the images on the same flux scale; photometrically calibrated images produce the best results.
