Curvit produces light curves from UVIT (Ultraviolet Imaging Telescope) data. It uses the events list from the official UVIT L2 pipeline (version 6.3 onwards) as input. The makecurves function of curvit automatically detects sources from the events list and creates light curves. Curvit provides source coordinates only in the instrument coordinate system. If you already have the source coordinates, the curve function of curvit can be used to create light curves. The package has several parameters that can be set by the user; some of these parameters have default values. Curvit is available on PyPI.
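A rough sketch of the two entry points just described, using the package's documented function names makecurves and curve; the keyword arguments and file name below are illustrative assumptions, not verified signatures:

```python
import curvit

# Automatic mode: detect sources in a UVIT L2 events list and
# produce a light curve for each detection.
# (The argument name and file name are illustrative.)
curvit.makecurves(events_list="uvit_l2_events.fits")

# Manual mode: light curve for a known source position, given in
# instrument coordinates (parameter names are illustrative).
curvit.curve(events_list="uvit_l2_events.fits", xp=2035.0, yp=1998.0)
```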
The CURSA package manipulates astronomical catalogs and similar tabular datasets. It provides facilities for browsing or examining catalogs; selecting subsets from a catalog; sorting and copying catalogs; pairing two catalogs; converting catalog coordinates between some celestial coordinate systems; and plotting finding charts and photometric calibration. It can also extract subsets from a catalog in a format suitable for plotting using other Starlink packages such as PONGO. CURSA can access catalogs held in the popular FITS table format, the Tab-Separated Table (TST) format or the Small Text List (STL) format. Catalogs in the STL and TST formats are simple ASCII text files. CURSA also includes some facilities for accessing remote on-line catalogs via the Internet. It is part of the Starlink software collection (ascl:1110.012).
Written in C, the Customizable User Pipeline for IRS Data (CUPID) allows users to run the Spitzer IRS Pipelines to re-create Basic Calibrated Data and extract calibrated spectra from the archived raw files. CUPID provides full access to all the parameters of the BCD, COADD, BKSUB, BKSUBX, and COADDX pipelines, as well as the opportunity for users to provide their own calibration files (e.g., flats or darks). CUPID is available for Mac, Linux, and Solaris operating systems.
The CUPID package allows the identification and analysis of clumps of emission within 1, 2 or 3 dimensional data arrays. Whilst targeted primarily at sub-mm cubes, it can be used on any regularly gridded 1, 2 or 3D data. A variety of clump finding algorithms are implemented within CUPID, including the established ClumpFind (ascl:1107.014) and GAUSSCLUMPS (ascl:1406.018) algorithms. Two new algorithms, FellWalker and Reinhold, are also provided. CUPID allows easy inter-comparison between the results of different algorithms; the catalogues produced by each algorithm contain a standard set of columns giving clump peak position, clump centroid position, the integrated data value within the clump, clump volume, and the dimensions of the clump. In addition, pixel masks are produced identifying which input pixels contribute to each clump. CUPID is distributed as part of the Starlink (ascl:1110.012) software collection.
I introduce a new code for fast calculation of the Lomb-Scargle periodogram that leverages the computing power of graphics processing units (GPUs). After establishing a background to the newly emergent field of GPU computing, I discuss the code design and narrate key parts of its source. Benchmarking calculations indicate no significant differences in accuracy compared to an equivalent CPU-based code. The differences in performance, however, are pronounced: running on a low-end GPU, the code can match 8 CPU cores, and on a high-end GPU it is faster by a factor approaching thirty. Applications of the code include analysis of long photometric time series obtained by ongoing satellite missions and upcoming ground-based monitoring facilities, and Monte-Carlo simulation of periodogram statistical properties.
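For reference, the quantity being accelerated is the classical Lomb-Scargle periodogram. A minimal CPU implementation in plain NumPy (a reference sketch, not the GPU code itself) makes the embarrassingly parallel structure clear: every trial frequency is independent, which is what maps so well onto thousands of GPU threads.

```python
import numpy as np

def lomb_scargle(t, y, freqs):
    """Classical Lomb-Scargle periodogram of unevenly sampled data
    (t, y), normalized by the variance of the mean-subtracted values.
    freqs is an array of trial frequencies."""
    y = y - y.mean()
    power = np.empty_like(freqs)
    for i, f in enumerate(freqs):   # each frequency is independent
        w = 2.0 * np.pi * f
        # The offset tau makes the result invariant to time shifts.
        tau = np.arctan2(np.sum(np.sin(2 * w * t)),
                         np.sum(np.cos(2 * w * t))) / (2 * w)
        c = np.cos(w * (t - tau))
        s = np.sin(w * (t - tau))
        power[i] = 0.5 * ((y @ c) ** 2 / (c @ c) + (y @ s) ** 2 / (s @ s))
    return power / y.var()
```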
cuFFS (CUDA-accelerated Fast Faraday Synthesis) performs Faraday rotation measure synthesis; it is particularly well-suited for performing RM synthesis on large datasets. Compared to a fast single-threaded and vectorized CPU implementation, depending on the structure and format of the data cubes, cuFFS achieves an increase in speed of up to two orders of magnitude. The code assumes that the pixel values are IEEE single precision floating-point numbers (BITPIX=-32), and the input cubes must have 3 axes (2 spatial dimensions and 1 frequency axis) with the frequency axis as NAXIS1. A package is included to reformat data with individual Stokes Q and U channel maps to the required format. The code supports both the HDFITS format and the standard FITS format, and is written in C with GPU acceleration achieved using Nvidia's CUDA parallel computing platform.
Cue interprets nebular emission across a wide range of ionizing conditions of galaxies. The software is a neural network emulator based on Cloudy (ascl:9910.001). It does not require a specific ionizing spectrum as a source, instead approximating the ionizing spectrum with a 4-part piece-wise power-law. Along with the flexible ionizing spectra, Cue allows freedom in [O/H], [N/O], [C/O], gas density, and total ionizing photon budget.
cuDisc simulates the evolution of protoplanetary discs in both the radial and vertical dimensions, assuming axisymmetry. The code performs 2D dust advection-diffusion, dust coagulation/fragmentation, and radiative transfer. A 1D evolution model is also included, with the 2D gas structure calculated via vertical hydrostatic equilibrium. cuDisc requires an NVIDIA GPU.
CUDAHM accelerates Bayesian inference of Hierarchical Models using Markov Chain Monte Carlo by constructing a Metropolis-within-Gibbs MCMC sampler for a three-level hierarchical model, requiring the user to supply only a minimal amount of CUDA code. CUDAHM assumes that a set of measurements are available for a sample of objects, and that these measurements are related to an unobserved set of characteristics for each object. For example, the measurements could be the spectral energy distributions of a sample of galaxies, and the unknown characteristics could be the physical quantities of the galaxies, such as mass, distance, or age. The measured spectral energy distributions depend on the unknown physical quantities, which enables one to derive their values from the measurements. The characteristics are also assumed to be independently and identically sampled from a parent population with unknown parameters (e.g., a Normal distribution with unknown mean and variance). CUDAHM enables one to simultaneously sample the values of the characteristics and the parameters of their parent population from their joint posterior probability distribution.
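The three-level structure described here is easiest to see in the joint posterior being sampled. Below is a deliberately simplified, CPU-only schematic assuming Gaussian measurement errors and a Normal parent population; all distributional choices are illustrative, not CUDAHM's.

```python
import numpy as np
from scipy import stats

def log_joint(chi, theta, y, sigma_y):
    """Schematic three-level hierarchical log-posterior.
    Level 1: measurements y of each object given its characteristic chi.
    Level 2: characteristics chi drawn from a parent population theta.
    Level 3: hyperprior on the population parameters theta."""
    mu, log_sd = theta
    level1 = stats.norm.logpdf(y, loc=chi, scale=sigma_y).sum()
    level2 = stats.norm.logpdf(chi, loc=mu, scale=np.exp(log_sd)).sum()
    level3 = (stats.norm.logpdf(mu, 0.0, 10.0)
              + stats.norm.logpdf(log_sd, 0.0, 2.0))
    return level1 + level2 + level3
```

A Metropolis-within-Gibbs sampler of the kind CUDAHM constructs alternates between updating the per-object chi values, which are conditionally independent and hence GPU-friendly, and the population parameters theta.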
CUBISM, written in IDL, constructs spectral cubes, maps, and arbitrary aperture 1D spectral extractions from sets of mapping mode spectra taken with Spitzer's IRS spectrograph. CUBISM is optimized for non-sparse maps of extended objects, e.g. the nearby galaxy sample of SINGS, but can be used with data from any spectral mapping AOR (primarily validated for maps which are designed as suggested by the mapping HOWTO).
CubiCal implements several accelerated gain solvers which exploit complex optimization for fast radio interferometric gain calibration. The code can be used for both direction-independent and direction-dependent self-calibration. CubiCal is implemented in Python and Cython, and multiprocessing is fully supported.
A successor to CubiCal, QuartiCal (ascl:2305.006), is available.
CUBEP3M is a high performance cosmological N-body code which has many utilities and extensions, including a runtime halo finder, a non-Gaussian initial conditions generator, a tuneable accuracy, and a system of unique particle identification. CUBEP3M is fast, has a memory footprint up to three times lower than other widely used N-body codes, and has been run on up to 20,000 cores, achieving close to ideal weak scaling even at this problem size. It is well suited to, and has already been used for, a broad range of science applications that require either large samples of non-linear realizations or very large dark matter N-body simulations, including cosmological reionization, baryonic acoustic oscillations, weak lensing, and non-Gaussian statistics.
CubeIndexer indexes regions of interest (ROIs) in data cubes reducing the necessary storage space. The software can process data cubes containing megabytes of data in fractions of a second without human supervision, thus allowing it to be incorporated into a production line for displaying objects in a virtual observatory. The software forms part of the Chilean Virtual Observatory (ChiVO) and provides the capability of content-based searches on data cubes to the astronomical community.
Cubefit is an OXY class that performs spectral fitting with spatial regularization in a spectro-imaging context. The 3D model is based on a 1D model and 2D parameter maps; the 2D maps are regularized using an L1L2 regularization by default. The estimator is the sum of a chi^2 term based on the 1D model, a regularization term based on the 2D regularization of the various 2D parameter maps, and an optional decorrelation term based on the cross-correlation of specific pairs of parameter maps.
CUBE, written in Coarray Fortran, is a particle-mesh based parallel cosmological N-body simulation code. The memory usage of CUBE can approach as low as 6 bytes per particle. Particle-pairwise (PP) forces, cosmological neutrinos, and a spherical overdensity (SO) halo finder are included.
CuBANz is a photometric redshift estimator code for high redshift galaxies that uses a back-propagation neural network along with clustering of the training set, making it very efficient. The training set is divided into several self-learning clusters with galaxies having similar photometric properties and spectroscopic redshifts within a given span. The clustering algorithm uses the color information (i.e., u-g, g-r, etc.) rather than the apparent magnitudes at various photometric bands, as the photometric redshift is more sensitive to the flux differences between bands than to the actual flux values. The clustering method enables accurate determination of the redshifts. CuBANz considers uncertainty in the photometric measurements as well as uncertainty in the neural network training. The code is written in C.
The Cuba library offers four independent routines for multidimensional numerical integration: Vegas, Suave, Divonne, and Cuhre. The four algorithms work by very different methods, yet all can integrate vector integrands and have very similar Fortran, C/C++, and Mathematica interfaces. Their invocation is very similar, making it easy to cross-check results by substituting one method for another. For further safeguarding, the output is supplemented by a chi-square probability which quantifies the reliability of the error estimate.
CTR (Coronal Temperature Reconstruction) reconstructs differential emission measures (DEMs) in the solar corona. Written in IDL, the code guarantees positivity of the recovered DEM, enforces an explicit smoothness constraint, returns a featureless (flat) solution in the absence of information, and converges quickly. The algorithm is robust and can be extended to other wavelengths where the DEM treatment is valid.
ctools provides tools for the scientific analysis of Cherenkov Telescope Array (CTA) data. Analysis of data from existing Imaging Air Cherenkov Telescopes (such as H.E.S.S., MAGIC or VERITAS) is also supported, provided that the data and response functions are available in the format defined for CTA. ctools comprises a set of ftools-like binary executables with a command-line interface allowing for interactive step-wise data analysis. A Python module allows control of all executables, and the creation of shell or Python scripts and pipelines is supported. ctools provides cscripts, which are Python scripts complementing the binary executables. Extension of the ctools package by user-defined binary executables or Python scripts is supported. ctools is based on GammaLib (ascl:1110.007).
Charge Transfer Inefficiency (CTI) due to radiation damage above the Earth's atmosphere creates spurious trailing in images from Charge-Coupled Device (CCD) imaging detectors. Radiation damage also creates unrelated warm pixels, which can be used to measure CTI. This code provides pixel-based correction for CTI and has proven effective in Hubble Space Telescope Advanced Camera for Surveys raw images, successfully reducing the CTI trails by a factor of ~30 everywhere in the CCD and at all flux levels. The core is written in Java for speed, and a front-end user interface is provided in IDL. The code operates on raw data by returning individual electrons to pixels from which they were unintentionally dragged during readout. Correction takes about 25 minutes per ACS exposure, but is trivially parallelisable to multiple processors.
Cumulative Time Dilation (CTD) calculates and plots the total time dilation experienced by a point (Earth) located at the center of a spherical mass-energy distribution. There are both analytical and numerical solutions for two different descriptions of how gravity acts across cosmological distances. The calculations are done for universes filled with a single energy type (dark energy; matter, including dark matter; or radiation) as well as the concordance model.
Color transformations calculator determines the magnitude of a galaxy in a needed photometric band, given its color and magnitude in the original band. It supports various optical and near-infrared surveys, including SDSS, DECaLS, DELVE, UKIDSS, VHS, and VIKING, and provides conversions for both total and aperture magnitudes with apertures of 1.5", 2" or 3" diameters. The source code, useful for performing bulk calculations, is available in Python and IDL; the calculator is also offered as a web service.
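Transformations of this kind are typically linear in a single color term; a schematic example follows, where the coefficients and bands are placeholders for illustration, not values from the calculator:

```python
def convert_magnitude(m_orig, color, a, b):
    """Linear color-term transformation between photometric systems:
    m_target = m_orig + a * color + b, with survey- and band-specific
    coefficients a and b."""
    return m_orig + a * color + b

# Hypothetical example: transform an r-band magnitude using the g-r
# color; the coefficients below are placeholders, not calculator values.
m_target = convert_magnitude(m_orig=18.42, color=0.65, a=-0.05, b=0.01)
```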
Compressive sampling is a new paradigm for sampling, based on sparseness of signals or signal representations. It is much less restrictive than Nyquist-Shannon sampling theory and thus explains and systematises the widespread experience that methods such as the Högbom CLEAN can violate the Nyquist-Shannon sampling requirements. In this paper, a CS-based deconvolution method for extended sources is introduced. This method can reconstruct both point sources and extended sources (using the isotropic undecimated wavelet transform as a basis function for the reconstruction step). We compare this CS-based deconvolution method with two CLEAN-based deconvolution methods: the Högbom CLEAN and the multiscale CLEAN. This new method shows the best performance in deconvolving extended sources for both uniform and natural weighting of the sampled visibilities. Both visual and numerical results of the comparison are provided.
CSENV is a code that computes the chemical abundances for a desired set of species as a function of radius in a stationary, non-clumpy, CircumStellar ENVelope. The chemical species can be atoms, molecules, ions, radicals, molecular ions, and/or their specific quantum states. Collisional ionization or excitation can be incorporated through the proper chemical channels. The chemical species interact with one another and are subject to photo-processes (dissociation of molecules, radicals, and molecular ions as well as ionization of all species). Cosmic ray ionization can be included. Chemical reaction rates are specified with possible activation temperatures and additional power-law dependences. Photo-absorption cross-sections vs. wavelength, with appropriate thresholds, can be specified for each species, while for H2+ a photoabsorption cross-section is provided as a function of wavelength and temperature. The photons originate from both the star and the external interstellar medium. The chemical species are shielded from the photons by circumstellar dust, by other species and by themselves (self-shielding). Shielding of continuum-absorbing species by these species (self and mutual shielding), line-absorbing species, and dust varies with radial optical depth. The envelope is spherical by default, but can be made bipolar with an opening solid-angle that varies with radius. In the non-spherical case, no provision is made for photons penetrating the envelope from the sides. The envelope is subject to a radial outflow (or wind), constant velocity by default, but the wind velocity can be made to vary with radius. The temperature of the envelope is specified (and thus not computed self-consistently).
CS-ROMER (Compressed Sensing ROtation MEasure Reconstruction) is a compressed sensing reconstruction framework for Faraday depth spectra. It can simulate Faraday depth sources, subtract the Galactic RM, and reconstruct Faraday depth sources from linearly polarized data using compressed sensing.
CRUSH is an astronomical data reduction/imaging tool for certain imaging cameras, especially at the millimeter, sub-millimeter, and far-infrared wavelengths. It supports the SHARC-2, LABOCA, SABOCA, ASZCA, p-ArTeMiS, PolKa, GISMO, MAKO and SCUBA-2 instruments. The code is written entirely in Java, allowing it to run on virtually any platform. It is normally run from the command-line with several arguments.
CRUNCH3D is a massively parallel, viscoresistive, three-dimensional compressible MHD code. The code employs a Fourier collocation spatial discretization, and uses a second-order Runge-Kutta temporal discretization. CRUNCH3D can be applied to MHD turbulence and magnetic fluxtube reconnection research.
CRR (Convex Ridge Regularizer) learns regularizers that are the sum of convex-ridge functions by parameterizing their gradient with a neural network that has a single hidden layer with increasing and learnable activation functions. The neural network is trained within a few minutes as a multistep Gaussian denoiser, and offers improvements for denoising and image reconstruction over other methods with similar reliability.
CRPropa3, an improved version of CRPropa2 (ascl:1412.013), provides a simulation framework to study the propagation of ultra-high-energy nuclei up to iron on their voyage through an (extra)galactic environment. It takes into account pion production, photodisintegration, and energy losses by pair production of all relevant isotopes in the ambient low-energy photon fields, as well as nuclear decay. CRPropa3 can model the deflection in (inter)galactic magnetic fields, the propagation of secondary electromagnetic cascades, and neutrinos for a multitude of scenarios for different source distributions and magnetic environments. It enables the user to predict the spectra of UHECR (and of their secondaries), their composition and arrival direction distribution. Additionally, the low-energy Galactic propagation can be simulated by solving the transport equation using stochastic differential equations. CRPropa3 features a very flexible simulation setup with python steering and shared-memory parallelization.
CRPropa computes the observable properties of UHECRs and their secondaries in a variety of models for the sources and propagation of these particles. CRPropa takes into account interactions and deflections of primary UHECRs as well as propagation of secondary electromagnetic cascades and neutrinos. CRPropa makes use of the public code SOPHIA (ascl:1412.014), and the TinyXML, CFITSIO (ascl:1010.001), and CLHEP libraries. A major advantage of CRPropa is its modularity, which allows users to implement their own modules adapted to specific UHECR propagation models. An updated version, CRPropa3 (ascl:2208.016), is available.
The landscape of high- and ultra-high-energy astrophysics has changed in the last decade, largely due to the inflow of data collected by large-scale cosmic-ray, gamma-ray, and neutrino observatories. At the dawn of the multimessenger era, the interpretation of these observations within a consistent framework is important to elucidate the open questions in this field. CRPropa 3.2 is a Monte Carlo code for simulating the propagation of high-energy particles in the Universe. This version represents a major leap forward, significantly expanding the simulation framework and opening up the possibility for many more astrophysical applications. This includes, among others: efficient simulation of high-energy particles in diffusion-dominated domains, self-consistent and fast modelling of electromagnetic cascades with an extended set of channels for photon production, and studies of cosmic-ray diffusion tensors based on updated coherent and turbulent magnetic-field models. Furthermore, several technical updates and improvements are introduced with the new version, such as: enhanced interpolation, targeted emission of sources, and a new propagation algorithm (Boris push). The detailed description of all novel features is accompanied by a discussion and a selected number of example applications.
crowdsource removes a rough sky (the median), finds the brighter peaks, fits these sources, computes centroids, and then computes an improved PSF. With this model of the image, the code iteratively subtracts it and recomputes the median to get a better sky estimate, finds fainter peaks, and calculates a better PSF. crowdsource performs at least four such iterations, evaluating the results after each, and continues until certain thresholds are met. Once the iterative passes are complete, it makes one final pass in which no new sources are detected and positions are held fixed, performing photometry for the existing list of stellar positions.
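The loop just described can be caricatured in a few lines of NumPy/SciPy. This is a runnable conceptual toy (fixed Gaussian PSF instead of a refined empirical one, amplitude-only "fits"), not crowdsource's actual interface:

```python
import numpy as np
from scipy import ndimage

def iterative_detect(image, fwhm=3.0, threshold=5.0, n_iter=4):
    """Toy version of the iterative detect/fit/subtract loop described
    above. A fixed circular Gaussian stands in for the refined PSF."""
    sigma = fwhm / 2.355
    yy, xx = np.indices(image.shape)
    model = np.zeros_like(image, dtype=float)
    sources = []
    for _ in range(n_iter):
        sky = np.median(image - model)            # rough sky estimate
        resid = image - model - sky
        noise = resid.std()
        # New peaks: local maxima standing above threshold * noise.
        peaks = (resid == ndimage.maximum_filter(resid, size=5))
        for y0, x0 in zip(*np.nonzero(peaks & (resid > threshold * noise))):
            amp = resid[y0, x0]                   # amplitude-only "fit"
            model += amp * np.exp(-((xx - x0) ** 2 + (yy - y0) ** 2)
                                  / (2 * sigma ** 2))
            sources.append((y0, x0, amp))
    return sources, model + sky
```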
This code is an extension of CMBFAST4.5.1 to compute the ISW-correlation power spectrum and the 2-point angular ISW-correlation function for a given galaxy window function. It includes dark energy models specified by a constant equation of state (w) or a linear parameterization in the scale factor (w0, wa) and a constant sound speed (c2de). The ISW computation is limited to flat geometry. Unlike in the original CMBFAST4.5 version, dark energy perturbations are implemented for a general dark energy fluid specified by w(z) and c2de in synchronous gauge. For time-varying dark energy models it is suggested not to cross the w=-1 line; as Dr. Venkman says, "never cross the streams": bad things can happen.
CROCODILE (CROss-COrrelation retrievals of Directly-Imaged self-Luminous Exoplanets) runs atmospheric retrievals of directly observed gas giant exoplanets by adopting adequate likelihood functions. The code makes use of petitRADTRANS (ascl:2207.014) and PyMultiNest (ascl:1606.005) and provides a statistical framework to interpret the photometry, low-resolution spectroscopy, and medium (and higher) resolution cross-correlation spectroscopy.
CRISPRED reduces data from the CRISP imaging spectropolarimeter at the Swedish 1 m Solar Telescope (SST). It performs fitting routines, corrects optical aberrations from atmospheric turbulence as well as from the optics, and compensates for inter-camera misalignments, field-dependent and time-varying instrumental polarization, and spatial variation in the detector gain and in the zero level offset (bias). It has an object-oriented IDL structure with computationally demanding routines performed in C subprograms called as dynamically loadable modules (DLMs).
Crimson Light is a tool to visualize and slice metadata on the available archival observations of samples of astrophysical objects. This visualization enables the user to view available multi-wavelength datasets for a range of objects, optionally filtering the displayed observations on the basis of (angular) resolution, wavelength/frequency coverage, and other properties.
CRIME (Cosmological Realizations for Intensity Mapping Experiments) generates mock realizations of intensity mapping observations of the neutral hydrogen distribution. It contains three separate tools, GetHI, ForGet, and JoinT. GetHI generates realizations of the temperature fluctuations due to the 21cm emission of neutral hydrogen. Optionally it can also generate a realization of the point-source continuum emission (for a given population) by sampling the same density distribution, though using this feature greatly affects performance. ForGet generates realizations of the different galactic and extra-galactic foregrounds relevant for intensity mapping experiments using some external datasets (e.g. the Haslam 408 MHz map) stored in the "data" folder. JoinT is provided for convenience; it joins the temperature maps generated by GetHI and ForGet and includes several instrument-dependent effects (in an overly simplistic way).
CRETE (Comet RadiativE Transfer and Excitation) is a one-dimensional water excitation and radiation transfer code for sub-millimeter wavelengths based on the RATRAN code (ascl:0008.002). The code considers rotational transitions of water molecules given a Haser spherically symmetric distribution for the cometary coma and produces FITS image cubes that can be analyzed with tools like MIRIAD (ascl:1106.007). In addition to collisional processes to excite water molecules, the effect of infrared radiation from the Sun is approximated by effective pumping rates for the rotational levels in the ground vibrational state.
CReSyPS (Code Rennais de Synthèse de Populations Stellaires) is a stellar population synthesis code that determines the amount of core overshooting for Magellanic Cloud main-sequence stars.
The development of parallel-processing image-analysis codes is generally a challenging task that requires complicated choreography of interprocessor communications. If, however, the image-analysis algorithm is embarrassingly parallel, then the development of a parallel-processing implementation of that algorithm can be a much easier task to accomplish because, by definition, there is little need for communication between the compute processes. I describe the design, implementation, and performance of a parallel-processing image-analysis application, called CRBLASTER, which does cosmic-ray rejection of CCD (charge-coupled device) images using the embarrassingly-parallel L.A.COSMIC algorithm. CRBLASTER is written in C using the high-performance computing industry standard Message Passing Interface (MPI) library. The code has been designed to be used by research scientists who are familiar with C, and to serve as a parallel-processing computational framework that enables the easy development of parallel-processing image-analysis programs based on embarrassingly-parallel algorithms. The CRBLASTER source code is freely available at the official application website at the National Optical Astronomy Observatory. Removing cosmic rays from a single 800x800 pixel Hubble Space Telescope WFPC2 image takes 44 seconds with the IRAF script lacos_im.cl running on a single core of an Apple Mac Pro computer with two 2.8-GHz quad-core Intel Xeon processors. CRBLASTER is 7.4 times faster processing the same image on a single core of the same machine. Processing the same image with CRBLASTER simultaneously on all 8 cores of the same machine takes 0.875 seconds, a speedup factor of 50.3 over the IRAF script. A detailed analysis is presented of the performance of CRBLASTER using between 1 and 57 processors on a low-power Tilera 700-MHz 64-core TILE64 processor.
Craterstats3 analyzes and plots crater count data for planetary surface dating. It is a Python implementation of Craterstats2 (ascl:2206.008) and is designed to replicate the output of the previous version as closely as possible. As before, it produces plots in cumulative, differential, Hartmann, and R-plot styles with possible overlays of crater counts, isochrons, equilibrium functions and epoch boundaries, as well as chronology and impact rate functions. Data can be shown with various binnings or unbinned, and age estimates made by either cumulative fitting, differential fitting, or Poisson timing evaluation. Numerical results can be output as text for further processing elsewhere. A number of published chronology systems are already set up for use, but new ones may be added by the user. The software is designed to be easily integrated into other software, which could allow the addition of a graphical interface or the inclusion of some Craterstats functions into a GIS.
Craterstats2 plots crater counts and determines surface ages. The software plots isochrons in cumulative, differential, R-plot and Hartmann presentations, and makes isochron fits to both cumulative and differential data. Hartmann-style piecewise production functions may also be used. A Python implementation of the software, Craterstats3, is also available.
CRASH (Center for Radiative Shock Hydrodynamics) is a block adaptive mesh code for multi-material radiation hydrodynamics. The implementation solves the radiation diffusion model with the gray or multigroup method and uses a flux limited diffusion approximation to recover the free-streaming limit. The electrons and ions are allowed to have different temperatures and we include a flux limited electron heat conduction. The radiation hydrodynamic equations are solved in the Eulerian frame by means of a conservative finite volume discretization in either one, two, or three-dimensional slab geometry or in two-dimensional cylindrical symmetry. An operator split method is used to solve these equations in three substeps: (1) solve the hydrodynamic equations with shock-capturing schemes, (2) a linear advection of the radiation in frequency-logarithm space, and (3) an implicit solve of the stiff radiation diffusion, heat conduction, and energy exchange. We present a suite of verification test problems to demonstrate the accuracy and performance of the algorithms. The CRASH code is an extension of the Block-Adaptive Tree Solarwind Roe Upwind Scheme (BATS-R-US) code with this new radiation transfer and heat conduction library and equation-of-state and multigroup opacity solvers. Both CRASH and BATS-R-US are part of the publicly available Space Weather Modeling Framework (SWMF).
CRAC (Cosmology R Analysis Code) provides R functions for cosmology. Its main functions are similar to the Python library CosmoloPy (ascl:2009.017); for example, it implements functions to compute spherical geometric quantities for cosmological research.
CR-SISTEM models lunar orbital and rotational dynamics, taking into account the effects of a liquid core. Orbits of the Moon and Earth are fully integrated, and other planets (or additional point-mass satellites) may be included in the integration. Lunar and solar tides on Earth, eccentricity and obliquity tides on the Moon, and lunar core-mantle friction are included. The integrator is one file (crsistem5.for) written in FORTRAN 90, uses seven input files (settings.in, planets.in, moons.in, tidal.in, lunar.in, precess.in and core.in), and has at least eight output files (planet101.out, moon101.out, pole.out, spin_orb.out, spin_ecl.out, cspin_ecl.out, long.out and clong.out); additional moons and planets would add more output. The input files provided with the code set up a 1 Myr simulation of a slow-spinning Moon on an orbit of 40 Earth radii, which will then dynamically relax to the lowest-energy state (in this case it is a synchronous rotation with a core spinning separately from the mantle).
CPROPS, written in IDL, processes FITS data cubes containing molecular line emission and returns the properties of molecular clouds contained within it. Without corrections for the effects of beam convolution and sensitivity to GMC properties, the resulting properties may be severely biased. This is particularly true for extragalactic observations, where resolution and sensitivity effects often bias measured values by 40% or more. We correct for finite spatial and spectral resolutions with a simple deconvolution and we correct for sensitivity biases by extrapolating properties of a GMC to those we would expect to measure with perfect sensitivity. The resulting method recovers the properties of a GMC to within 10% over a large range of resolutions and sensitivities, provided the clouds are marginally resolved with a peak signal-to-noise ratio greater than 10. We note that interferometers systematically underestimate cloud properties, particularly the flux from a cloud. The degree of bias depends on the sensitivity of the observations and the (u,v) coverage of the observations. In the Appendix to the paper we present a conservative, new decomposition algorithm for identifying GMCs in molecular-line observations. This algorithm treats the data in physical rather than observational units, does not produce spurious clouds in the presence of noise, and is sensitive to a range of morphologies. As a result, the output of this decomposition should be directly comparable among disparate data sets.
The CPROPS package contains within it a distribution of the CLUMPFIND code (ascl:1107.014) written by Jonathan Williams and described in Williams, de Geus, and Blitz (1994). If you make use of the CLUMPFIND functionality in the CPROPS package for a publication, please cite Jonathan's original article.
CppTransport solves the 2- and 3-point functions of the perturbations produced during an inflationary epoch in the very early universe. It is implemented for models with canonical kinetic terms, although the underlying method is quite general and could be scaled to handle models with a non-trivial field-space metric or an even more general non-canonical Lagrangian.
CPNest performs Bayesian inference using the nested sampling algorithm. It is designed to be simple for the user to provide a model via a set of parameters, their bounds and a log-likelihood function. An optional log-prior function can be given for non-uniform prior distributions. The nested sampling algorithm is then used to compute the marginal likelihood or evidence. As a by-product the algorithm produces samples from the posterior probability distribution. The implementation is based on an ensemble MCMC sampler which can use multiple cores to parallelize computation.
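Usage follows the pattern described above: subclass the model, declare the parameter names and bounds, and supply a log-likelihood. A minimal sketch; the class and keyword names follow CPNest's documented interface, but should be checked against the installed version:

```python
import numpy as np
import cpnest
import cpnest.model

class GaussianModel(cpnest.model.Model):
    """Toy one-parameter model: unit-variance Gaussian likelihood."""
    names = ['x']                 # parameter names
    bounds = [[-10.0, 10.0]]      # uniform prior bounds

    def log_likelihood(self, params):
        return -0.5 * params['x'] ** 2 - 0.5 * np.log(2.0 * np.pi)

if __name__ == '__main__':
    work = cpnest.CPNest(GaussianModel(), nlive=1000)
    work.run()   # computes the evidence and draws posterior samples
```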
The Common Pipeline Library (CPL) is a set of ISO-C libraries that provide a comprehensive, efficient and robust software toolkit to create automated astronomical data reduction pipelines. Though initially developed as a standardized way to build VLT instrument pipelines, the CPL may be more generally applied to any similar application. The code also provides a variety of general purpose image- and signal-processing functions, making it an excellent framework for the creation of more generic data handling packages. The CPL handles low-level data types (images, tables, matrices, strings, property lists, etc.) and medium-level data access methods (a simple data abstraction layer for FITS files). It also provides table organization and manipulation, keyword/value handling and management, and support for dynamic loading of recipe modules using programs such as EsoRex (ascl:1504.003).
Corral generates astronomical pipelines. Data processing pipelines, an important slice of the astronomical software library, are chains of processes that transform raw data into valuable information via data reduction and analysis. Written in Python, Corral features a Model-View-Controller design pattern on top of an SQL relational database capable of handling custom data models, processing stages, and communication alerts. It also provides automatic quality and structural metrics based on unit testing. The Model-View-Controller pattern provides separation between the user logic and the data models, while delivering multi-processing and distributed computing capabilities.
COWS implements the COsmic Web Skeleton (COWS) cosmic filament finder. Written in Python, the filament finder works on Hessian-based cosmic web identifiers (such as the V-web) and returns a catalogue of filament spines. The code identifies the medial axis, or skeleton, of cosmic web filaments and then separates this skeleton into individual filaments.
covdisc computes the disconnected part of the covariance matrix of 2-point functions in large-scale structure studies, accounting for the survey window effect. The method works for both the power spectrum and the correlation function, and applies to the covariances of various probes, including the multipoles and the wedges of 3D clustering, the angular and projected statistics of clustering and lensing, and their cross-covariances.
CounterPoint works in concert with MoogStokes (ascl:1308.018). It applies the Zeeman effect to the atomic lines in the region of study, splitting them into the correct number of Zeeman components and adjusting their relative intensities according to the predictions of Quantum Mechanics, and finally creates a Moog-readable line list for use with MoogStokes. CounterPoint has the ability to use VALD and HITRAN line databases for both atomic and molecular lines.
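The splitting applied to each line is the anomalous Zeeman pattern. A small sketch of the underlying physics, not CounterPoint's interface, using the standard shift Δλ = 4.67e-13 λ² B (g_u M_u − g_l M_l) in Ångströms for λ in Å and B in Gauss:

```python
def zeeman_components(lam0, B, J_low, J_up, g_low, g_up):
    """Enumerate anomalous Zeeman components of a spectral line.
    lam0: rest wavelength in Angstroms; B: field strength in Gauss;
    J_*, g_*: total angular momenta and Lande factors of the levels.
    Returns (delta_M, shift in Angstroms) pairs; relative intensities
    (which CounterPoint also computes) are omitted here."""
    K = 4.6686e-13 * lam0 ** 2 * B   # Lorentz unit in Angstroms
    comps = []
    M_low = -J_low
    while M_low <= J_low:
        for dM in (-1, 0, 1):        # sigma-, pi, sigma+ components
            M_up = M_low + dM
            if abs(M_up) <= J_up:
                comps.append((dM, K * (g_up * M_up - g_low * M_low)))
        M_low += 1.0
    return comps

# Hypothetical line at 5250 A in a 3 kG field:
print(zeeman_components(5250.0, 3000.0, J_low=1.0, J_up=1.0,
                        g_low=1.5, g_up=2.0))
```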
Cosmoxi2d is written in C and computes the theoretical two-point galaxy correlation function as a function of cosmological and galaxy nuisance parameters. It numerically evaluates the model described in detail in Reid and White 2011 (arxiv:1105.4165) and Reid et al. 2012 (arxiv:1203.6641) for the multipole moments (up to ell = 4) of the observed redshift space correlation function of biased tracers as a function of cosmological parameters (through an input linear matter power spectrum, growth rate f, and Alcock-Paczynski geometric factors alphaperp and alphapar) as well as nuisance parameters describing the tracers (bias and small scale additive velocity dispersion, isotropicdisp1d).
This model works best for highly biased tracers where the 2nd order bias term is small. On scales larger than 100 Mpc, the code relies on 2nd order Lagrangian Perturbation theory as detailed in Matsubara 2008 (PRD 78, 083519), and uses the analytic version of Reid and White 2011 on smaller scales.
CosmoTransitions analyzes early-Universe finite-temperature phase transitions with multiple scalar fields. The code enables analysis of the phase structure of an input theory, determines the amount of supercooling at each phase transition, and finds the bubble-wall profiles of the nucleated bubbles that drive the transitions.
CosmoTherm allows precise computation of CMB spectral distortions caused by energy release in the early Universe. Different energy-release scenarios (e.g., decaying or annihilating particles) are implemented using the Green's function of the cosmological thermalization problem, allowing fast computation of the distortion signal. The full thermalization problem can be solved on a case-by-case basis for a wide range of energy-release scenarios using the full PDE solver of CosmoTherm. A simple Monte-Carlo toolkit is included for parameter estimation and forecasts using the Green's function method.
CosmoSlik quickly puts together, runs, and analyzes an MCMC chain for analysis of cosmological data. It is highly modular and comes with plugins for CAMB (ascl:1102.026), CLASS (ascl:1106.020), the Planck likelihood, the South Pole Telescope likelihood, other cosmological likelihoods, emcee (ascl:1303.002), and more. It offers ease-of-use, flexibility, and modularity.
CosmoSIS is a cosmological parameter estimation code. It structures cosmological parameter estimation to ease re-usability, debugging, verifiability, and code sharing in the form of calculation modules. Written in Python, CosmoSIS consolidates and connects existing code for predicting cosmic observables and maps out experimental likelihoods with a range of different techniques.
CosmosCanvas creates perception-based color maps for different astrophysical properties such as spectral index and velocity fields. Three tutorials demonstrate how to use Python code to exploit and adjust the boundaries in these divergent colour schemes. Designed with human visual perception in mind, each tutorial offers at least one default scheme that is monotonic in value, both as a redundancy for supporting data information and as an aid for colour-blind viewers. This library relies on Gilles Ferrand's colourspace library.
COSMOS (Carnegie Observatories System for MultiObject Spectroscopy) reduces multislit spectra obtained with the IMACS and LDSS3 spectrographs on the Magellan Telescopes. It can be used for the quick-look analysis of data at the telescope as well as for pipeline reduction of large data sets. COSMOS is based on a precise optical model of the spectrographs, which allows (after alignment and calibration) an accurate prediction of the location of spectra features. This eliminates the line search procedure which is fundamental to many spectral reduction programs, and allows a robust data pipeline to be run in an almost fully automatic mode, allowing large amounts of data to be reduced with minimal intervention.
CosmoRec solves the recombination problem including recombinations to highly excited states, corrections to the 2s-1s two-photon channel, HI Lyn-feedback, n>2 two-photon profile corrections, and n≥2 Raman-processes. The code can solve the radiative transfer equation of the Lyman-series photon field to obtain the required modifications to the rate equations of the resolved levels, and handles electron scattering, the effect of HeI intercombination transitions, and absorption of helium photons by hydrogen. It also allows accounting for dark matter annihilation and optionally includes detailed helium radiative transfer effects.
CosmoPower develops Bayesian inference pipelines that leverage machine learning to solve inverse problems in science. While the emphasis is on building algorithms to accelerate Bayesian inference in cosmology, the implemented methods allow for their application across a wide range of scientific fields. CosmoPower provides neural network emulators of matter and Cosmic Microwave Background power spectra, which can replace Boltzmann codes such as CAMB (ascl:1102.026) or CLASS (ascl:1106.020) in cosmological inference pipelines, to source the power spectra needed for two-point statistics analyses. This provides orders-of-magnitude acceleration to the inference pipeline and integrates naturally with efficient techniques for sampling very high-dimensional parameter spaces.
CosmoPMC is a Monte-Carlo sampling method to explore the likelihood of various cosmological probes. The sampling engine, implemented with the package pmclib, uses Population Monte Carlo (PMC), an adaptive importance sampling method which iteratively improves the proposal to approximate the posterior. This code has been introduced, tested, and applied to various cosmology data sets.
CosmoPhotoz determines photometric redshifts from galaxies utilizing their magnitudes. The method uses generalized linear models which reproduce the physical aspects of the output distribution. The code can adopt gamma or inverse Gaussian families, from either a frequentist or a Bayesian perspective. A set of publicly available libraries and a web application are available. This software allows users to apply a set of GLMs to their own photometric catalogs, generating publication-quality plots automatically. The code additionally provides a Shiny application with a simple user interface.
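The GLM idea can be sketched with statsmodels: regress spectroscopic redshift on colors using a Gamma family with a log link. This is a conceptual illustration on fake data, not CosmoPhotoz's own interface:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
# Fake training set: two colors and a redshift loosely tied to them.
colors = rng.normal(size=(500, 2))
z_spec = 0.5 * np.exp(0.2 * colors[:, 0] - 0.1 * colors[:, 1]
                      + rng.normal(scale=0.05, size=500))

# Gamma family with a log link keeps predicted redshifts positive.
X = sm.add_constant(colors)
glm = sm.GLM(z_spec, X,
             family=sm.families.Gamma(link=sm.families.links.Log()))
result = glm.fit()
z_phot = result.predict(X)   # photometric redshift estimates
```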
CosMOPED (Cosmological MOPED) uses the MOPED (Multiple/Massively Optimised Parameter Estimation and Data compression) compression scheme to compress the Planck power spectrum. This convenient and lightweight compressed likelihood code is implemented in Python. To compute the likelihood for the LambdaCDM model using CosMOPED, one needs only six compression vectors, one for each parameter, and six numbers from compressing the Planck data using the six compression vectors. Using these, the likelihood of a theory power spectrum given the Planck data is the product of six one-dimensional Gaussians. Extended cosmological models require computing extra compression vectors.
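The compressed likelihood described above amounts to a few lines of linear algebra. Schematically (array names are illustrative), with the compression vectors normalized, as in MOPED, so that each compressed coefficient has unit variance:

```python
import numpy as np

def compressed_loglike(cl_theory, B, y_data):
    """Schematic MOPED-style compressed log-likelihood.
    cl_theory: theory power spectrum, shape (N,);
    B: compression vectors, shape (6, N), one per LambdaCDM parameter;
    y_data: the six numbers obtained by compressing the data with B.
    With unit-variance normalization, the likelihood is a product of
    six one-dimensional unit Gaussians."""
    y_theory = B @ cl_theory
    return -0.5 * np.sum((y_theory - y_data) ** 2)
```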
CosmoNest is an algorithm for cosmological model selection. Given a model, defined by a set of parameters to be varied and their prior ranges, and data, the algorithm computes the evidence (the marginalized likelihood of the model in light of the data). The Bayes factor, which is proportional to the relative evidence of two models, can then be used for model comparison, i.e. to decide whether a model is an adequate description of data, or whether the data require a more complex model.
For convenience, CosmoNest, programmed in Fortran, is presented here as an optional add-on to CosmoMC (ascl:1106.025), which is widely used by the cosmological community to perform parameter fitting within a model using a Markov-Chain Monte-Carlo (MCMC) engine. For this reason it can be run very easily by anyone who is able to compile and run CosmoMC. CosmoNest implements a different sampling strategy, geared for computing the evidence very accurately and efficiently. It also provides posteriors for parameter fitting as a by-product.
We present a fast Markov Chain Monte-Carlo exploration of cosmological parameter space. We perform a joint analysis of results from recent CMB experiments and provide parameter constraints, including sigma_8, from the CMB independent of other data. We next combine data from the CMB, HST Key Project, 2dF galaxy redshift survey, supernovae Ia and big-bang nucleosynthesis. The Monte Carlo method allows the rapid investigation of a large number of parameters, and we present results from 6 and 9 parameter analyses of flat models, and an 11 parameter analysis of non-flat models. Our results include constraints on the neutrino mass (m_nu < 0.3eV), equation of state of the dark energy, and the tensor amplitude, as well as demonstrating the effect of additional parameters on the base parameter constraints. In a series of appendices we describe the many uses of importance sampling, including computing results from new data and accuracy correction of results generated from an approximate method. We also discuss the different ways of converting parameter samples to parameter constraints, the effect of the prior, assess the goodness of fit and consistency, and describe the use of analytic marginalization over normalization parameters.
This module is a plug-in for CosmoMC and requires that software. Though programmed to analyze SNLS3 SN data, it can also be used for other SN data provided the inputs are put in the right form. In fact, this is probably a good idea, since the default treatment that comes with CosmoMC is flawed. Note that this requires fitting two additional SN nuisance parameters (alpha and beta), but this is significantly faster than attempting to marginalize over them internally.
CosmoloPy is a suite of cosmology routines built on NumPy/SciPy. Its capabilities include various cosmological densities, distance measures, and galaxy luminosity functions (Schechter functions). It also offers pre-defined sets of cosmological parameters (e.g., from WMAP), conversion in and out of the AB magnitude system, and the reionization of the IGM. Functions take cosmological parameters (which can be numpy arrays) as keywords and ignore any extra keywords, making it possible to build a dictionary of cosmological parameters and pass it to any function.
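The keyword-dictionary pattern looks like this in practice; the functions shown are from CosmoloPy's documented distance module, while the parameter values are illustrative:

```python
import cosmolopy.distance as cd

# One dictionary of cosmological parameters...
cosmo = {'omega_M_0': 0.3, 'omega_lambda_0': 0.7,
         'omega_k_0': 0.0, 'h': 0.7}

# ...passed to any function via **; extra keywords are ignored.
d_c = cd.comoving_distance(6.0, **cosmo)            # Mpc, at z = 6
d_a = cd.angular_diameter_distance(6.0, **cosmo)    # Mpc
```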
CosmoLike analyzes cosmological data sets and forecasts future missions. It has been used in the analysis of the Dark Energy Survey and to optimize the Large Synoptic Survey Telescope and the Wide-Field Infrared Survey Telescope, and is useful for innovative theory projects that test new concepts and methods to enhance the constraining power of cosmological analyses.
CosmoLED computes Hawking evaporation from black holes and sets constraints on the fraction of dark matter in black holes. Based on ExoCLASS (ascl:1106.020), the code provides a DarkAges_LED module and C code in class_LED to compute the evolution and energy deposition functions of LED black holes. Though CosmoLED is designed for large extra dimension black holes, it can also be used to study 4D black holes.
CosmoLattice performs lattice simulations of field dynamics in an expanding universe. The code can simulate the dynamics of interacting scalar field theories, Abelian U(1) gauge theories, and non-Abelian SU(2) gauge theories, either in flat spacetime or an expanding FLRW background, including the case of self-consistent expansion sourced by the fields themselves. It can also compute gravitational waves sourced by U(1) Abelian Gauge fields. The CosmoLattice platform can implement any system of dynamical equations suitable for discretization on a lattice, as it introduces its own language describing fields and operations between them, and hence can implement new libraries to solve arbitrary field problems (related or not to cosmology).
CosmoHammer is a Python framework for the estimation of cosmological parameters. The software embeds the Python package emcee by Foreman-Mackey et al. (2012) and gives the user the possibility to plug in modules for the computation of any desired likelihood. The major goal of the software is to reduce the complexity when one wants to extend or replace the existing computation by modules which fit the user's needs as well as to provide the possibility to easily use large scale computing environments. CosmoHammer can efficiently distribute the MCMC sampling over thousands of cores on modern cloud computing infrastructure.
CosmoGraphNet infers cosmological parameters or the galaxy power spectrum. It creates a graph from a galaxy catalog with information on the 3D positions and intrinsic properties of the galaxies. A Graph Neural Network is then applied to predict the cosmological parameters or the galaxy power spectrum.
CosmoGRaPH explores cosmological problems in a fully general relativistic setting. Written in C++, it implements various novel methods for numerically solving the Einstein field equations, including an N-body solver, full AMR capabilities via SAMRAI, and raytracing.
cosmoFns computes distances, times, luminosities, and other quantities useful in observational cosmology, including molecular line observations. Written in R and coded for a flat universe, it contains functions for rest-frame line luminosities, cosmic lookback time given z and cosmological parameters, and differential comoving volume. cosmoFns also computes comoving, luminosity, and angular diameter distances and molecular mass, among other quantities.
CosmoFlow automatically computes cosmological correlators. The Cosmological Flow approach is based on computing cosmological correlators by solving the differential equations in time that govern their evolution through the entirety of the spacetime during inflation, from their origin as quantum fluctuations in the deep past to the end of inflation. This method takes into account all physical effects at tree-level without approximation. Specifically, CosmoFlow computes the two- and three-point correlators of fields and/or conjugate momenta X^a in Fourier space for theories that include an arbitrary number of degrees of freedom with any propagation speeds, couplings, and time dependencies.
CosmoCov computes configuration space covariances for projected galaxy 2-point statistics based on the CosmoLike (ascl:2006.006) framework. The package provides a flat sky covariance module, computed with the 2D-FFTLog (ascl:2006.004) algorithm, and a curved sky covariance module.
cosmocnc evaluates the number count likelihood of galaxy cluster catalogs. Fast Fourier Transform (FFT) convolutions are used to evaluate some of the likelihood integrals. The code supports three types of likelihoods (unbinned, binned, and an extreme value likelihood); it also supports the addition of stacked cluster data (e.g., stacked lensing profiles), which is modeled in a consistent way with the cluster catalog. The package produces mass estimates for each cluster in the sample, derived assuming the hierarchical model that is used to model the mass observables, and generates synthetic cluster catalogs for a given observational set-up. cosmocnc interfaces with the Markov chain Monte Carlo (MCMC) code Cobaya (ascl:1910.019), allowing for easy-to-run MCMC parameter estimation.
CosmoBolognaLib contains numerical libraries for cosmological calculations; written in C++, it is intended to define a common numerical environment for cosmological investigations of the large-scale structure of the Universe. The software aids in handling real and simulated astronomical catalogs by measuring one-point, two-point and three-point statistics in configuration space and performing cosmological analyses. These open source libraries can be included in either C++ or Python codes.
Approximate Bayesian Computation (ABC) enables parameter inference for complex physical systems in cases where the true likelihood function is unknown, unavailable, or computationally too expensive. It relies on the forward simulation of mock data and comparison between observed and synthetic catalogs. cosmoabc is a Python ABC sampler featuring a Population Monte Carlo variation of the original ABC algorithm, which uses an adaptive importance sampling scheme. The code can be coupled to an external simulator to allow incorporation of arbitrary distance and prior functions. When coupled with the NumCosmo library, it has been used to estimate posterior probability distributions over cosmological parameters based on measurements of galaxy cluster number counts without computing the likelihood function.
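The ABC idea itself fits in a few lines. A bare-bones rejection ABC sampler is shown below as a generic illustration; cosmoabc adds the adaptive PMC importance-sampling layer on top of this basic scheme:

```python
import numpy as np

rng = np.random.default_rng(0)
observed = rng.normal(loc=1.5, scale=1.0, size=200)   # stand-in "catalog"

def simulator(mu):
    """Forward-simulate mock data for a proposed parameter value."""
    return rng.normal(loc=mu, scale=1.0, size=200)

def distance(a, b):
    """Distance between summary statistics of two catalogs."""
    return abs(a.mean() - b.mean())

# Rejection ABC: keep prior draws whose simulated catalog lands close
# to the observed one; the likelihood is never evaluated.
accepted = []
while len(accepted) < 500:
    mu = rng.uniform(-5.0, 5.0)          # draw from the prior
    if distance(simulator(mu), observed) < 0.2:
        accepted.append(mu)
# 'accepted' now approximates the posterior on mu.
```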
COSMICS is a package of Fortran programs useful for computing transfer functions and microwave background anisotropy for cosmological models, and for generating Gaussian random initial conditions for nonlinear structure formation simulations of such models. Four programs are provided: linger_con and linger_syn integrate the linearized equations of general relativity, matter, and radiation in conformal Newtonian and synchronous gauge, respectively; deltat integrates the photon transfer functions computed by the linger codes to produce photon anisotropy power spectra; and grafic tabulates normalized matter power spectra and produces constrained or unconstrained samples of the matter density field.
CosmicPy performs simple and interactive cosmology computations for forecasting cosmological parameters constraints; it computes tomographic and 3D Spherical Fourier-Bessel power spectra as well as Fisher matrices for galaxy clustering. Written in Python, it relies on a fast C++ implementation of Fourier-Bessel related computations, and requires NumPy, SciPy, and Matplotlib.
CosmicFish obtains expected bounds on cosmological parameters for a wide range of models and observables for cosmological forecasting. The package includes a Fortran library to produce Fisher matrices, a Python library that performs operations on the produced Fisher matrices, and a full set of plotting utilities. It works with many codes, including CAMB (ascl:1102.026) and MGCAMB (ascl:1106.013), and can interface with any Boltzmann solver. The user can choose within a wide range of possible cosmological observables, including cosmic microwave background, weak lensing tomography, galaxy clustering, and redshift drift. CosmicFish is easy to customize; it provides a flexible package system and users can produce their own analyses and plotting pipelines following the default Python apps.
CosmicEmuLog is a simple Python emulator for cosmological power spectra. In addition to the power spectrum of the conventional overdensity field, it emulates the power spectra of the log-density as well as the Gaussianized density. It models fluctuations in the power spectrum at each k as a linear combination of contributions from fluctuations in each cosmological parameter. The data it uses for emulation consist of ASCII files of the mean power spectrum, together with derivatives of the power spectrum with respect to the five cosmological parameters in the space spanned by the Coyote Universe suite. This data can also be used for Fisher matrix analysis. At present, CosmicEmuLog is restricted to redshift 0.
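The "linear combination of contributions" statement is a first-order Taylor expansion of the power spectrum around the central cosmology. Schematically, with illustrative array names:

```python
import numpy as np

def emulate_power(P_mean, dP_dtheta, theta, theta_fid):
    """First-order emulation of the kind described above:
    P(k; theta) ~ P_mean(k) + sum_i dP/dtheta_i(k) * (theta_i - theta_fid_i).
    P_mean: (Nk,) mean spectrum; dP_dtheta: (5, Nk) parameter derivatives;
    theta, theta_fid: (5,) target and fiducial parameter vectors."""
    return P_mean + (theta - theta_fid) @ dP_dtheta
```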
Modern cosmological surveys are delivering datasets characterized by unprecedented quality and statistical completeness. In order to maximally extract cosmological information from these observations, matching theoretical predictions are needed. In the nonlinear regime of structure formation, cosmological simulations are the primary means of obtaining the required information but the computational cost of sufficiently resolved large-volume simulations makes it prohibitive to run very large ensembles. Nevertheless, precision emulators built on a tractable number of high-quality simulations can be used to build very fast prediction schemes to enable a variety of cosmological inference studies. The "Mira-Titan Universe" simulation suite covers the standard six cosmological parameters and, in addition, includes massive neutrinos and a dynamical dark energy equation of state. It is based on 111 cosmological simulations, each covering a (2.1Gpc)^3 volume and evolving 3200^3 particles, and augments these higher-resolution simulations with an additional set of 1776 lower-resolution simulations and TimeRG perturbation theory results to cover scales straddling the linear to mildly nonlinear regimes. The emulator built on this suite, the CosmicEmu, provides predictions at the two to three percent level of accuracy over a wide range of cosmological parameters. Presented in: https://arxiv.org/abs/2207.12345.
Many of the most exciting questions in astrophysics and cosmology, including the majority of observational probes of dark energy, rely on an understanding of the nonlinear regime of structure formation. In order to fully exploit the information available from this regime and to extract cosmological constraints, accurate theoretical predictions are needed. Currently such predictions can only be obtained from costly, precision numerical simulations. The "Coyote Universe" simulation suite comprises nearly 1,000 N-body simulations at different force and mass resolutions, spanning 38 wCDM cosmologies. This large simulation suite enabled the construction of a prediction scheme, or emulator, for the nonlinear matter power spectrum accurate at the percent level out to k~1 h/Mpc. This is the first cosmic emulator for the dark matter power spectrum.
COSMIC (Compact Object Synthesis and Monte Carlo Investigation Code) generates synthetic populations with an adaptive size based on how the shape of the binary parameter distributions changes as the number of simulated binaries increases. It implements stellar evolution using SSE (ascl:1303.015) and binary interactions using BSE (ascl:1303.014). COSMIC can also be used to simulate a single binary at a time, a list of multiple binaries, a grid of binaries, or a fixed population size, and it can restart binaries at a midpoint in their evolution. The code is included in CMC-COSMIC (ascl:2108.023).
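COSMIC's adaptive-size idea, growing the sample until the shape of a parameter's distribution stops changing, can be sketched generically (this illustrates the stopping criterion only, not COSMIC's implementation or API):

import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)

def draw_binaries(n):
    # Stand-in sampler; returns mock orbital periods for n binaries.
    return rng.lognormal(mean=5.0, sigma=2.0, size=n)

sample = draw_binaries(1000)
while True:
    grown = np.concatenate([sample, draw_binaries(len(sample))])
    # Stop once the distribution shape is stable as the sample doubles.
    if ks_2samp(sample, grown).statistic < 0.01:
        break
    sample = grown

print("converged with %d binaries" % len(sample))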
Cosmic-kite performs a fast estimation of the TT Cosmic Microwave Background (CMB) power spectrum corresponding to a set of cosmological parameters; it can also estimate the maximum-likelihood cosmological parameters from a power spectrum. This software is an auto-encoder that was trained and calibrated using power spectra from random cosmologies computed with the CAMB code (ascl:1102.026).
Cosmic-CoNN detects cosmic rays (CR) in CCD-captured astronomical images. It offers a PyTorch deep-learning framework to train generic, robust CR detection models for ground- and space-based imaging data as well as spectroscopic observations. Cosmic-CoNN also includes a suite of tools, including console commands, a web app, and Python APIs, to make deep-learning models easily accessible.
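Usage via the Python API is along the following lines (sketched from the project's documentation as I recall it; the model name and function names should be checked against the current release):

import numpy as np
from cosmic_conn import init_model

# Load a pretrained model for ground-based imaging (model name assumed).
cr_model = init_model("ground_imaging")

# image: a 2D float32 CCD frame, e.g. read from a FITS file with astropy.
image = np.random.rand(512, 512).astype("float32")  # placeholder data
cr_prob = cr_model.detect_cr(image)  # per-pixel cosmic-ray probability
cr_mask = cr_prob > 0.5              # threshold to a boolean CR mask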
cosmic_variance calculates the cosmic variance during the Epoch of Reionization (EoR) for the UV Luminosity Function (UV LF), Stellar Mass Function (SMF), and Halo Mass Function (HMF). Each of the package's three functions returns the cosmic variance expressed as a percentage. The code is written in Python, and simple examples that show how to use the functions are provided.
Complicated cosmic string loops will fragment until they reach simple, non-intersecting ("stable") configurations. Through extensive numerical study, these attractor loop shapes are characterized, including their length, velocity, kink, and cusp distributions. An initial loop containing $M$ harmonic modes will, on average, split into $3M$ stable loops. These stable loops are approximately described by the degenerate kinky loop, which is planar and rectangular, independently of the number of modes on the initial loop. This is confirmed by an analytic construction of a stable family of perturbed degenerate kinky loops. The average stable loop is also found to have a 40% chance of containing a cusp. This new analytic scheme explicitly solves the string constraint equations.
Cosmology Applications (CosApps) provides tools to simulate gravitational lensing using two different techniques, ray tracing and shear calculation. The tool ray_trace_ellipse calculates deflection angles on a grid for light passing a deflecting mass distribution. Using MPI, ray_trace_ellipse may calculate deflections in parallel across network-connected computers, such as a cluster. The program physcalc calculates the gravitational lensing shear using the relationship between convergence and shear, which is described by a set of coupled partial differential equations.
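The convergence-shear relationship referred to above is standard: both fields are second derivatives of the lensing potential, so shear can be recovered from convergence algebraically in Fourier space. A generic sketch of that inversion (not the physcalc implementation):

import numpy as np

def shear_from_convergence(kappa):
    # Recover shear (gamma1, gamma2) from convergence kappa on a periodic
    # grid using the Fourier-space relations
    #   gamma1_hat = (k1^2 - k2^2) / k^2 * kappa_hat
    #   gamma2_hat = (2 k1 k2) / k^2 * kappa_hat
    # which follow from kappa and gamma both being second derivatives
    # of the lensing potential.
    n1, n2 = kappa.shape
    k1 = np.fft.fftfreq(n1)[:, None]
    k2 = np.fft.fftfreq(n2)[None, :]
    k_sq = k1**2 + k2**2
    k_sq[0, 0] = 1.0  # avoid division by zero; the mean carries no shear

    kappa_hat = np.fft.fft2(kappa)
    gamma1 = np.fft.ifft2((k1**2 - k2**2) / k_sq * kappa_hat).real
    gamma2 = np.fft.ifft2(2.0 * k1 * k2 / k_sq * kappa_hat).real
    return gamma1, gamma2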
CORSIKA (COsmic Ray SImulations for KAscade) is a program for detailed simulation of extensive air showers initiated by high energy cosmic ray particles. Protons, light nuclei up to iron, photons, and many other particles may be treated as primaries. The particles are tracked through the atmosphere until they undergo reactions with the air nuclei or, in the case of unstable secondaries, decay. The hadronic interactions at high energies may be described by several reaction models; hadronic interactions at lower energies are also modeled, and in particle decays all decay branches down to the 1% level are taken into account. Options for the generation of Cherenkov radiation and neutrinos exist. CORSIKA may be used up to and beyond the highest energies of 100 EeV.
Corrfunc is a suite of high-performance clustering routines. The code can compute a variety of spatial correlation functions in Cartesian geometry as well as Landy-Szalay calculations for spatial and angular correlation functions on a spherical geometry, and is useful for, for example, exploring the galaxy-halo connection. The code is written in C and can be used on the command line, through the supplied Python extensions, or via the C API.
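A typical call through the Python extensions looks roughly like the following (sketched from the documented interface; check the Corrfunc docs for exact signatures and units):

import numpy as np
from Corrfunc.theory import xi  # real-space correlation function in a periodic box

boxsize, nthreads = 250.0, 4
rng = np.random.default_rng(1)
x, y, z = rng.uniform(0.0, boxsize, (3, 100000))

# Radial bin edges, in the same length units as the positions.
rbins = np.linspace(0.5, 20.0, 15)

results = xi(boxsize, nthreads, rbins, x, y, z)
# results holds, per bin, the bin edges, pair counts, and measured xi(r).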
CORRFIT is a set of routines that use the cross-correlation method to extract parameters of the line-of-sight velocity distribution from galactic spectra and stellar templates observed on the same system. It works best when the broadening function is well sampled at the spectral resolution used (e.g. 200 km/s dispersion at 2 Angstrom resolution). Results become increasingly sensitive to the spectral match between galaxy and template if the broadening function is not well sampled. CORRFIT does not work well for dispersions less than the velocity sampling interval ('delta' in the code) unless the template is perfect.
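The cross-correlation method itself is straightforward to sketch generically (this illustrates the technique, not CORRFIT's Fortran interface): on a logarithmic wavelength grid a Doppler shift becomes a translation, so the peak of the galaxy-template cross-correlation gives the mean velocity, with one pixel corresponding to the velocity sampling interval:

import numpy as np

C_KMS = 299792.458

# Galaxy and template fluxes are assumed rebinned onto a common log10-lambda
# grid with constant step DLOGLAM, so each pixel is a fixed velocity step.
DLOGLAM = 1e-4
DV = C_KMS * np.log(10.0) * DLOGLAM  # km/s per pixel ('delta' in CORRFIT's terms)

def ccf_velocity(galaxy, template):
    # Velocity of the cross-correlation peak of two mean-subtracted spectra.
    g = galaxy - galaxy.mean()
    t = template - template.mean()
    ccf = np.correlate(g, t, mode="full")
    lag = np.argmax(ccf) - (len(t) - 1)  # pixel offset of the peak
    return lag * DV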
correlcalc calculates the two-point correlation function (2pCF) of galaxies/quasars using redshift surveys, and can be used for any assumed geometry or cosmology model. Using BallTree algorithms to reduce the computational effort for large datasets, it is a parallelised code suitable for running on clusters as well as personal computers. It takes redshift (z), Right Ascension (RA), and Declination (DEC) data of galaxies and random catalogs as inputs in the form of ASCII or FITS files. If a random catalog is not provided, it generates one of the desired size based on the input redshift distribution and a mangle polygon file (in .ply format) describing the survey geometry. It also calculates different realisations of the (3D) anisotropic 2pCF. Optionally, it produces HEALPix maps of the survey for visualization.
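As a generic illustration of BallTree-based pair counting feeding a standard Landy-Szalay estimate (not correlcalc's own API), once (RA, DEC, z) have been converted to comoving Cartesian coordinates:

import numpy as np
from sklearn.neighbors import BallTree

def pair_counts(tree, points, r_edges):
    # Cumulative neighbor counts within each radius, differenced into bins.
    cum = np.array([tree.query_radius(points, r, count_only=True).sum()
                    for r in r_edges])
    return np.diff(cum)

rng = np.random.default_rng(0)
data = rng.uniform(0.0, 100.0, (2000, 3))   # placeholder galaxy positions
rand = rng.uniform(0.0, 100.0, (8000, 3))   # placeholder random catalog

r_edges = np.linspace(1.0, 20.0, 10)
dtree, rtree = BallTree(data), BallTree(rand)

DD = pair_counts(dtree, data, r_edges) / (len(data) * len(data))
RR = pair_counts(rtree, rand, r_edges) / (len(rand) * len(rand))
DR = pair_counts(rtree, data, r_edges) / (len(data) * len(rand))

xi_ls = (DD - 2.0 * DR + RR) / RR  # Landy-Szalay estimator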
coronagraph provides a Python noise model for directly imaging exoplanets with a coronagraph-equipped telescope. Based on the original IDL code for this coronagraph model, coronagraph_noise (ascl:2405.018), the Python version has been expanded in a few key ways. Most notably, the Telescope, Planet, and Star objects used for reflected light coronagraph noise modeling can now be used for transmission and emission spectroscopy noise modeling, making this model a general-purpose exoplanet noise model for many different types of observations.
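Given the Telescope, Planet, and Star objects named above, setting up a calculation presumably looks something like this (the attribute names and units here are assumptions, not confirmed API; consult the package documentation for the actual noise-model call):

import coronagraph as cg  # assumes the coronagraph package is installed

# The three objects named in the description, with default settings.
telescope = cg.Telescope()
planet = cg.Planet()
star = cg.Star()

# Hypothetical attribute tweaks (names and units assumed, not verified).
telescope.diameter = 15.0  # mirror diameter in meters
planet.distance = 10.0     # distance to the system in parsecs

# The noise calculation itself would combine these objects with an input
# spectrum; see the coronagraph documentation for the exact function.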
coronagraph_noise simulates coronagraph noise. Written in IDL, the code includes a generalized coronagraph routine and simulators for the WFIRST Shaped Pupil Coronagraph in both spectroscopy and imaging modes. Functions available include stellar and planetary flux functions, planet photon and zodiacal light count rates, planet-star flux ratio, and clock-induced-charge count rate, among others. coronagraph_noise also includes routines to smooth a plot by convolving with a Gaussian profile, to convolve a spectrum with a given instrument resolution, and to degrade a spectrum specified at high spectral resolution to a lower resolution. A Python implementation of coronagraph_noise, coronagraph (ascl:2405.019), is also available.
corner.py uses matplotlib to visualize multidimensional samples using a scatterplot matrix. In these visualizations, each one- and two-dimensional projection of the sample is plotted to reveal covariances. corner.py was originally conceived to display the results of Markov Chain Monte Carlo simulations and the defaults are chosen with this application in mind but it can be used for displaying many qualitatively different samples. An earlier version of corner.py was known as triangle.py.
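Basic usage is a single call on an (nsamples, ndim) array of posterior samples:

import numpy as np
import corner

# Mock MCMC chain: 10,000 samples of three correlated parameters.
rng = np.random.default_rng(0)
samples = rng.multivariate_normal(
    mean=[0.0, 1.0, -1.0],
    cov=[[1.0, 0.5, 0.0], [0.5, 2.0, 0.3], [0.0, 0.3, 0.5]],
    size=10000,
)

# One- and two-dimensional projections, with axis labels and truth lines.
fig = corner.corner(samples, labels=["a", "b", "c"], truths=[0.0, 1.0, -1.0])
fig.savefig("corner.png")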