ASCL.net

Astrophysics Source Code Library

Making codes discoverable since 1999

Browsing Codes

Results 1-100 of 3556 (3462 ASCL, 94 submitted)

[submitted] hipipe: VLT/HiRISE reduction pipeline

The High-Resolution Imaging and Spectroscopy of Exoplanets (HiRISE) instrument at the Very Large Telescope (VLT) combines the exoplanet imager SPHERE with the high-resolution spectrograph CRIRES using single-mode fibers. HiRISE has been designed to enable the characterization of known, directly imaged planetary companions in the H band at a spectral resolution on the order of R = λ/∆λ = 140 000. The hipipe package is a custom Python pipeline that reduces HiRISE data and produces high-level science products that can be used for astrophysical interpretation.

[submitted] pony3d: an efficient island-finding tool for radio spectral line imaging

pony3d is a tool for statistically identifying islands of contiguous emission inside a three-dimensional volume. Its primary functionality is the rapid and reliable creation of masks for the deconvolution of radio interferometric spectral line emission. It has been designed to run on the output of the wsclean imager (https://ascl.net/1408.023), whose one FITS image per frequency plane enables a high degree of parallelism, but it can work on any image set provided this criterion is met. Single-channel island rejection is offered, along with 3D mask dilation and boxcar averaging. pony3d is also a prototype source-finding and extraction tool and is under active development.

[submitted] ELISA: Efficient Library for Spectral Analysis in High-Energy Astrophysics

ELISA is a Python library designed for efficient spectral modeling and robust statistical inference. With a user-friendly interface, ELISA streamlines the spectral analysis workflow.

The modeling framework of ELISA is flexible, allowing users to construct complex models by combining models of ELISA and XSPEC, as well as custom models. Parameters across different model components can also be linked. The models can be fitted to the spectral datasets using either Bayesian or maximum likelihood approaches. For Bayesian fitting, ELISA incorporates advanced Markov Chain Monte Carlo (MCMC) algorithms, including the No-U-Turn Sampler (NUTS), nested sampling, and affine-invariant ensemble sampling, to tackle the posterior sampling problem. For maximum likelihood estimation (MLE), ELISA includes two robust algorithms: the Levenberg-Marquardt algorithm and the Migrad algorithm from Minuit. The computation backend is based on Google's JAX, a high-performance numerical computing library, which can reduce the runtime for fitting procedures like MCMC, thereby enhancing the efficiency of analysis.

After fitting, goodness-of-fit assessment can be done with a single function call, which automatically conducts posterior predictive checks and leave-one-out cross-validation for Bayesian models, or parametric bootstrap for MLE. These methods offer greater accuracy and reliability than traditional fit-statistic/dof measures, and thus better capability for model discovery. For comparing multiple candidate models, ELISA provides robust Bayesian tools such as the Widely Applicable Information Criterion (WAIC) and the Leave-One-Out Information Criterion (LOOIC), which are more reliable than AIC or BIC. Thanks to the object-oriented design, collecting the analysis results should be simple. ELISA also provides visualization tools to generate publication-ready figures.

ELISA is an open-source project and community contributions are welcome and greatly appreciated.

[ascl:2407.016] Heimdall: GPU accelerated transient detection pipeline for radio astronomy

Heimdall uses direct, tree, and sub-band dedispersion algorithms on massively parallel computing architectures (GPUs) to speed up real-time detection of radio pulsars and other transient events.

[submitted] photGalIMF

The photGalIMF code calculates the evolution of stellar mass and luminosity for a galaxy model, based on the PARSEC stellar evolution model. It requires input lists specifying the age, mass, metallicity, and initial mass function (IMF) of single stellar populations. These input parameters can be provided by the companion galaxy chemical simulation code GalIMF, which generates realistic sets of inputs.

[submitted] Flash-X: A Performance Portable, Multiphysics Simulation Software Instrument

Flash-X simulates physical phenomena in several scientific domains, primarily those involving compressible or incompressible reactive flows, using Eulerian adaptive mesh and particle techniques. It derives some of its solvers from and is a descendant of FLASH (ascl:1010.082). Flash-X has a new framework that relies on abstractions and asynchronous communications for performance portability across a range of heterogeneous hardware platforms, including exascale machines. It also includes new physics capabilities, such as the Spark general relativistic magnetohydrodynamics (GRMHD) solver, and supports interoperation with the AMReX mesh framework, the HYPRE linear solver package, and the Thornado neutrino radiation hydrodynamics package, among others.

[submitted] Supervised star, galaxy and QSO classification with sharpened dimensionality reduction

Aims. We explore the use of broadband colors to classify stars, galaxies and QSOs. Specifically, we apply sharpened dimensionality reduction (SDR)-aided classification to this problem, with the aim of enhancing cluster separation in the projections of high-dimensional data clusters to allow for better classification performance and more informative projections.
Methods. The main objective of this work is to apply SDR to large sets of broadband colors derived from the CPz catalog first introduced by Fotopoulou & Paltani (2018) to obtain projections with clusters of star, galaxy and QSO data that exhibit a high degree of separation. The SDR method achieves this by combining density-based clustering with conventional dimensionality-reduction techniques. To make SDR scalable and give it out-of-sample capability, we use a deep neural network trained to reproduce the SDR projections. Classification is subsequently performed by applying a k-nearest neighbors (k-NN) classifier to the sharpened projections.
Results. Based on a qualitative and quantitative analysis of the embeddings produced by SDR, we find that SDR consistently produces accurate projections with a high degree of cluster separation. A number of projection performance metrics are used to evaluate this separation, including the trustworthiness, continuity, Shepard goodness, and distribution consistency metrics. Using the k-NN classifier and consolidating the results of various data sets we obtain precisions of 99.7%, 98.9%, and 98.5% for classifying stars, galaxies, and QSOs, respectively. Furthermore, we achieve completenesses of 97.8%, 99.3%, and 86.8%, respectively. In addition to classification we explore the structure of the embeddings produced by SDR by cross-matching with data from Gaia DR3, Galaxy Zoo 1 and a catalog of specific star formation rates, stellar masses and dust luminosities. We discover that the embeddings reveal astrophysical information, which allows one to understand the structure of the high-dimensional broadband color data in greater detail.
Conclusions. We find that SDR-aided star, galaxy, and QSO classification performs comparably to another unsupervised learning method using hierarchical density-based spatial clustering of applications with noise (HDBSCAN) but offers advantages in terms of scalability and interpretability. Furthermore, it outperforms traditional color selection methods in terms of QSO classification performance. Overall, we demonstrate the potential of SDR-aided classification to provide an accurate and physically insightful classification of astronomical objects based on their broadband colors.
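
A minimal sketch of the final classification step described in the abstract: a k-NN classifier applied to the low-dimensional sharpened projections, with precision and completeness (recall) reported per class. The arrays, labels, and choice of k are illustrative placeholders, not the paper's actual data or settings.

    # k-NN classification on 2D "sharpened" projections (placeholder data, illustrative k).
    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.metrics import precision_score, recall_score

    rng = np.random.default_rng(0)
    proj_train = rng.normal(size=(1000, 2))        # placeholder SDR projections
    labels_train = rng.integers(0, 3, size=1000)   # 0 = star, 1 = galaxy, 2 = QSO
    proj_test = rng.normal(size=(200, 2))
    labels_test = rng.integers(0, 3, size=200)

    knn = KNeighborsClassifier(n_neighbors=20)
    knn.fit(proj_train, labels_train)
    pred = knn.predict(proj_test)

    # Per-class precision and completeness (recall), as quoted in the abstract.
    print(precision_score(labels_test, pred, average=None))
    print(recall_score(labels_test, pred, average=None))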

[ascl:2407.015] AstroCLIP: Multimodal contrastive pretraining for astronomical data

AstroCLIP performs contrastive pre-training between two different kinds of astronomical data modalities (multi-band imaging and optical spectra) to yield a meaningful embedding space which captures physical information about galaxies and is shared between both modalities. The embeddings can be used as the basis for competitive zero- and few-shot learning on a variety of downstream tasks, including similarity search, redshift estimation, galaxy property prediction, and morphology classification.

[ascl:2407.014] PFFT: Parallel fast Fourier transforms

PFFT computes massively parallel, fast Fourier transformations on distributed memory architectures. PFFT can be understood as a generalization of FFTW-MPI (ascl:1201.015) to multidimensional data decomposition; in fact, using PFFT is very similar to FFTW. The library is written in C and MPI; a Fortran interface is also available.

[ascl:2407.013] cola_halo: Parallel cosmological N-body simulator

cola_halo generates hundreds of realizations on the fly. This parallel cosmological N-body simulation code generates random Gaussian initial conditions using 2LPTIC (ascl:1201.005), time evolves N-body particles with colacode (ascl:1602.021), and finds dark-matter halos with the Friends-of-Friends code (ascl:2407.012).

[ascl:2407.012] Fof: Friends-of-friends code to find groups

Fof uses the friends-of-friends method to find groups. A particle belongs to a friends-of-friends group if it is within some linking length of any other particle in the group. After all such groups are found, those with fewer than a specified minimum number of group members are rejected. The program takes input files in the TIPSY (ascl:1111.015) binary format and produces a single ASCII output file called fof.grp. This output file is in the TIPSY array format and contains the group number to which each particle belongs. A group number of zero means that the particle does not belong to a group. The fof.grp file can be read in by TIPSY and used to color each particle by group number to visualize the groups. Simulations with periodic boundary conditions can also be handled by fof by specifying the period in each dimension on the command line.
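
A minimal, non-parallel sketch of the friends-of-friends criterion described above (not the Fof code itself): particles within a linking length of any group member are linked, groups below a minimum size get group number 0, mirroring the fof.grp convention, and periodic boxes are handled via the tree's boxsize option.

    # Friends-of-friends sketch using a k-d tree and connected components.
    import numpy as np
    from scipy.spatial import cKDTree
    from scipy.sparse import coo_matrix
    from scipy.sparse.csgraph import connected_components

    def fof_groups(pos, b, nmin=8, boxsize=None):
        tree = cKDTree(pos, boxsize=boxsize)            # boxsize enables periodic boundaries
        pairs = tree.query_pairs(r=b, output_type='ndarray')
        n = len(pos)
        adj = coo_matrix((np.ones(len(pairs)), (pairs[:, 0], pairs[:, 1])), shape=(n, n))
        ncomp, lab = connected_components(adj, directed=False)
        grp = np.zeros(n, dtype=int)                    # 0 means "not in a group"
        next_id = 1
        for c in range(ncomp):
            members = np.where(lab == c)[0]
            if len(members) >= nmin:                    # reject groups below the minimum size
                grp[members] = next_id
                next_id += 1
        return grp

    pos = np.random.rand(1000, 3)
    print(np.bincount(fof_groups(pos, b=0.05)))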

[ascl:2407.011] bigfile: A reproducible massively parallel IO library for hierarchical data

bigfile stores data from cosmology simulations from HPC systems and beyond. It provides a hierarchical structure of data columns via File, Dataset and Column. A Column stores a two dimensional table. Numerical typed columns are supported; attributes can be attached to a Column and both numerical attributes and string attributes are supported. Type casting is performed on-the-fly if read/write operations request a different data type than the file has stored.

[ascl:2407.010] UFalcon: Ultra Fast Lightcone

UFalcon rapidly post-processes N-body code output into signal maps for many different cosmological probes. The package is able to produce maps of weak-lensing convergence, linear-bias galaxy over-density, cosmic microwave background (CMB) lensing convergence and the integrated Sachs-Wolfe temperature perturbation given a set of N-body lightcones. It offers high flexibility for lightcone construction, such as user-specific survey-redshift ranges, redshift distributions and single-source redshifts. UFalcon also computes the galaxy intrinsic alignment signal, which can be treated as an additive component to the cosmological signal.

[ascl:2407.009] ATM: Asteroid Thermal Modeling

ATM (Asteroid Thermal Modeling) models asteroid flux measurements to estimate an asteroid's size, surface temperature distribution, and emissivity, and creates model spectral energy distributions for the different thermal models. After downloading lookup tables for relevant models, it can also fit observations of asteroids.

[ascl:2407.008] RealSim: Statistical observational realism for synthetic images from galaxy simulations

RealSim generates survey-realistic synthetic images of galaxies from hydrodynamical simulations of galaxy formation and evolution. The main functionality of this code inserts "idealized" simulated galaxies into Sloan Digital Sky Survey (SDSS) images in such a way that the statistics of sky brightness, resolution, and crowding are matched between simulated galaxies and observed galaxies in the SDSS. The suite accepts idealized synthetic images in calibrated AB surface brightnesses and rebins them to the desired redshift and CCD angular scale; RealSim can add Poisson noise, if desired, by adopting generic values of photometric calibrations in survey fields. Images produced by the suite can be inserted into real image fields to incorporate real skies, PSF degradation, and contamination by neighboring sources in the field of view. The RealSim methodology can be applied to any existing galaxy imaging survey.

[ascl:2407.007] GRDzhadzha: Evolve matter on curved spacetimes

GRDzhadzha evolves matter on curved spacetimes with an analytic time and space dependence. Written in C++14, it uses hybrid MPI/OpenMP parallelism to achieve good performance. The code is based on the publicly available 3+1D numerical relativity code GRChombo (ascl:2306.039) and inherits all of the capabilities of the main GRChombo code, which uses the Chombo library for adaptive mesh refinement.

[ascl:2407.006] provabgs: SED modeling tools for PROVABGS

provabgs infers full posterior distributions of galaxy properties for galaxies in the DESI Bright Galaxy Survey using state-of-the-art Bayesian spectral energy distribution (SED) modeling of DESI spectroscopy and photometry. provabgs includes a state-of-the-art stellar population synthesis (SPS) model based on a non-parametric prescription for star formation history, a metallicity history that varies over the age of the galaxy, and a flexible dust prescription. It has a neural network emulator for the SPS model that enables accelerated inference. Full posteriors of the 12 SPS parameters can be derived in ~10 minutes. The emulator is currently designed for galaxies at 0 < z < 0.6. provabgs also includes a Bayesian inference pipeline that is based on zeus (ascl:2008.010).

[ascl:2407.005] BaCoN: BAyesian COsmological Network

BaCoN (BAyesian COsmological Network) trains and tests Bayesian Convolutional Neural Networks in order to classify dark matter power spectra as being representative of different cosmologies, as well as to compute the classification confidence. It supports the following theories: LCDM, wCDM, f(R), DGP, and a randomly generated class. Additional cosmologies can be easily added.

[ascl:2407.004] Forklens: Deep learning weak lensing shear

Forklens measures the weak gravitational lensing signal using a deep-learning method. It measures galaxy shapes (shear) and corrects for the smearing by the point spread function (PSF, an effect of the atmosphere and/or the optical instrument). It contains a custom CNN architecture with two input branches, fed with the observed galaxy image and the PSF image, that predicts several features of the galaxy, including shape, magnitude, and size. Simulation in the code is built directly upon GalSim (ascl:1402.009).

[ascl:2407.003] pycosie: Python analysis code used on Technicolor Dawn

pycosie is analysis code for Technicolor Dawn (TD), a Gadget-3-derived cosmological radiative SPH simulation suite. Its analyses complement those done with TD and other analysis software in its suite. pycosie creates power spectra from generated Lyman-alpha forest spectra, links absorbers to potential host galaxies, grids gas information for each galaxy, and reads specific output files from software such as Rockstar (ascl:1210.008) and SKID (ascl:1102.020).

[ascl:2407.002] pyFAT: Python Fully Automated TiRiFiC

Python Fully Automated TiRiFiC (pyFAT) wraps around the tilted ring fitting code (TiRiFiC, ascl:1208.008) to fully automate the process of fitting simple tilted ring models to line emission cubes. pyFAT is the successor to the IDL/GDL FAT (ascl:1507.011) code and offers improved handling and fitting as well as several new features. pyFAT fits simple rotationally symmetric discs with asymmetric warps and surface brightness distributions, providing a base model that can be used in TiRiFiC to explore large scale motions. pyFAT delivers much more control over the fitting procedure, which is made possible by the new modular setup and the use of omegaconf for the input and default settings.

[ascl:2407.001] MAKEE: MAuna Kea Echelle Extraction

MAKEE (MAuna Kea Echelle Extraction) reduces data from the HIRES and ESI instruments at Keck Observatory. It is optimized for the spectral extraction of single, unresolved point sources and is designed to run non-interactively using a set of default parameters. Taking the raw HIRES FITS files as input, the code determines the position (or trace) of each echelle order, defines the object and background extraction boundaries, optimally extracts a spectrum for each order, and computes wavelength calibrations. MAKEE produces FITS format "spectral images" (each row is a separate echelle order spectrum) and the data values are in arbitrary (relative) flux units. MAKEE will reduce data from all HIRES formats, including the single CCD format, the single CCD with Red and UV cross dispersers, and the current 3 CCD system. It can handle a variety of pixel binnings, including 1x1, 1x2, 1x4 (column x row).

[ascl:2406.029] WinNet: Flexible, multi-purpose, single-zone nuclear reaction network

WinNet, a single-zone nuclear reaction network, calculates many different nucleosynthesis processes, including the r-process, the νp-process, and explosive nucleosynthesis, among others. It reads in a user-defined file with runtime parameters, then chooses the evolution mode, which depends on temperature. The temperature, density, and neutrino quantities are updated, after which the reaction network equations are solved numerically. If convergence is not achieved, the step size is halved and the iteration is repeated. Once convergence is reached, the output is generated and the time is evolved; the final output, such as the final abundances and mass fractions, is written.

[submitted] Exovetter

Exovetter is an open-source, pip-installable Python package that calculates metrics on high-cadence time series photometry to distinguish between exoplanet transit signals and false positives. The package standardizes the implementation of metrics developed for the TESS, Kepler, and K2 missions, such as Odd-Even, Multiple Event Statistic, and Centroid Offset (see “Planetary Candidates Observed by Kepler. VIII.”, Thompson et al. 2018). Metrics can be run individually or together as part of a pipeline. Exovetter also includes several visualizations to further evaluate the transits and metrics.

[ascl:2406.030] AutoPhOT: Rapid publication-quality photometry of transients

AutoPhOT (AUTOmated Photometry Of Transients) produces publication-quality photometry of transients quickly. Written in Python 3, this automated pipeline's capabilities include aperture and PSF-fitting photometry, template subtraction, and calculation of limiting magnitudes through artificial source injection. AutoPhOT is also capable of calibrating photometry against either survey catalogs (e.g., SDSS, PanSTARRS) or using a custom set of local photometric standards.

[ascl:2406.028] Redback: Bayesian inference package for fitting electromagnetic transients

Redback provides end-to-end interpretation and parameter estimation of electromagnetic transients. Using data downloaded by the code or provided by the user, the code processes the data into a homogeneous transient object. Redback implements several different types of electromagnetic transient models, ranging from simple analytical models to numerical surrogates, fits models implemented in the package or provided by the user, and plots lightcurves. The code can also be used as a tool to simulate realistic populations without having to fit anything, as models are implemented as functions that can be called directly. Redback uses Bilby (ascl:1901.011) for sampling and can easily switch samplers and likelihoods.

[ascl:2406.027] phi-GPU: Parallel Hermite Integration on GPU

The phi-GPU (Parallel Hermite Integration on GPU) high-order N-body parallel dynamic code uses the fourth-order Hermite integration scheme with hierarchical individual block time-steps and incorporates external gravity. The software runs directly on GPUs, using NVIDIA hardware and CUDA code. It can be used to run numerical simulations of galaxy and star cluster evolution.
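
A sketch of the textbook fourth-order Hermite predictor-corrector scheme named above, for a small system with a shared time step and a simple softening; block time-steps, external gravity, and the GPU implementation of phi-GPU are not reproduced here.

    # Generic fourth-order Hermite step (predict, re-evaluate forces, correct).
    import numpy as np

    def acc_jerk(x, v, m, eps2=1e-8):
        n = len(m)
        a = np.zeros_like(x); j = np.zeros_like(x)
        for i in range(n):
            dx = x - x[i]; dv = v - v[i]
            r2 = (dx**2).sum(1) + eps2
            r2[i] = 1.0                              # dummy value to avoid divide-by-zero
            inv_r3 = r2**-1.5; inv_r3[i] = 0.0       # zero out the self-interaction
            rv = (dx*dv).sum(1)
            a[i] = (m[:, None]*dx*inv_r3[:, None]).sum(0)
            j[i] = (m[:, None]*(dv*inv_r3[:, None]
                    - 3*rv[:, None]*dx*inv_r3[:, None]/r2[:, None])).sum(0)
        return a, j

    def hermite_step(x, v, m, dt):
        a0, j0 = acc_jerk(x, v, m)
        xp = x + v*dt + a0*dt**2/2 + j0*dt**3/6       # predictor
        vp = v + a0*dt + j0*dt**2/2
        a1, j1 = acc_jerk(xp, vp, m)                  # force re-evaluation
        a2 = (-6*(a0 - a1) - dt*(4*j0 + 2*j1))/dt**2  # 2nd and 3rd force derivatives
        a3 = (12*(a0 - a1) + 6*dt*(j0 + j1))/dt**3
        vc = vp + a2*dt**3/6 + a3*dt**4/24            # corrector
        xc = xp + a2*dt**4/24 + a3*dt**5/120
        return xc, vc

    # Toy usage: advance a two-body circular orbit by one step.
    m = np.array([1.0, 1e-3])
    x = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
    v = np.array([[0.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
    x, v = hermite_step(x, v, m, dt=0.01)
    print(x[1])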

[ascl:2406.026] Faceted-HyperSARA: Parallel faceted imaging in radio interferometry

Faceted-HyperSARA images radio-interferometric wideband intensity data. Written in MATLAB, the library offers a collection of utility functions and scripts from data extraction from an RI measurement set (MS Table) to the reconstruction of a wideband intensity image over the field of view and frequency range of interest. The code achieves high precision imaging from large data volumes and supports data dimensionality reduction via visibility gridding and estimation of the effective noise level when reliable noise estimates are not available. Faceted-HyperSARA also corrects the w-term via w-projection and incorporates available compact Fourier models of the direction dependent effects (DDEs) in the measurement operator.

[ascl:2406.025] PowerSpecCovFFT: FFTLog-based computation of non-Gaussian analytic covariance of galaxy power spectrum multipoles

PowerSpecCovFFT computes the non-Gaussian (regular trispectrum and its shot noise) part of the analytic covariance matrix of the redshift-space galaxy power spectrum multipoles using an FFTLog-based method. The galaxy trispectrum is based on a tree-level standard perturbation theory but with a slightly different galaxy bias expansion. The code computes the non-Gaussian covariance of the power spectrum monopole, quadrupole, hexadecapole, and their cross-covariance up to kmax ~ 0.4 h/Mpc.

[ascl:2406.024] GRINN: Gravity Informed Neural Network for studying hydrodynamical systems

GRINN (Gravity Informed Neural Network) solves the coupled set of time-dependent partial differential equations describing the evolution of self-gravitating flows in one, two, and three spatial dimensions. It is based on physics informed neural networks (PINNs), which are mesh-free and offer a fundamentally different approach to solving such partial differential equations. GRINN has solved for the evolution of self-gravitating, small-amplitude perturbations and long-wavelength perturbations and, when modeling 3D astrophysical flows, provides accuracy on par with finite difference (FD) codes with an improvement in computational speed.

[ascl:2406.023] AARD: Automatic detection of solar active regions

This Python code automatically detects solar active regions (ARs). Based on morphological operations and region growing, it uses synoptic magnetograms from SOHO/MDI and SDO/HMI and calculates the parameters that characterize each AR, including the latitude and longitude of the flux-weighted centroid of each polarity and of the whole AR, the area and flux of each polarity, and the initial and final dipole moments.

[ascl:2406.022] phazap: Low-latency identification of strongly lensed signals

Phazap post-processes gravitational-wave (GW) parameter estimation data to obtain the phases and polarization state of the signal at a given detector and frequency. It is used for low-latency identification of strongly lensed gravitational waves via their phase consistency by measuring their distance in the detector phase space. Phazap builds on top of the IGWN conda environment, which includes the standard GW packages LALSuite (ascl:2012.021) and bilby (ascl:1901.011), and can be applied beyond lensing to test possible deviations in the phase evolution from modified theories of gravity and constrain GW birefringence.

[ascl:2406.021] photochem: Chemical model of planetary atmospheres

Photochem models the photochemistry and climate of a planet's atmosphere. It takes inputs such as the stellar UV flux and atmospheric temperature structure to find the steady-state chemical composition of an atmosphere, or to evolve atmospheres through time. Photochem also contains 1-D climate models and a chemical equilibrium solver.

[ascl:2406.020] LeHaMoC: Leptonic-Hadronic Modeling Code for high-energy astrophysical sources

LeHaMoC simulates high-energy astrophysical sources. It simulates the behavior of relativistic pairs, protons interacting with magnetic fields, and photons in a spherical region. The package contains numerous physical processes, including synchrotron emission and self-absorption, inverse Compton scattering, photon-photon pair production, and adiabatic losses. It also includes proton-photon pion production, proton-photon (Bethe-Heitler) pair production, and proton-proton collisions. LeHaMoC can model expanding spherical sources with a variable magnetic field strength. In addition, three types of external radiation fields can be defined: grey body or black body, power-law, and tabulated.

[ascl:2406.019] MBE: Magnification bias estimation

Magnification bias estimation estimates magnification bias for a galaxy sample with a complex photometric selection for the example of SDSS BOSS. The code works for CMASS and the LOWZ, z1 and z3 samples. A template for applying the approach to other surveys is included; requirements include a galaxy catalog that provides magnitudes (used for photometric selection) and the exact conditions used for the photometric selection.

[ascl:2406.018] SuperLite: Spectral synthesis code for interacting transients

SuperLite produces synthetic spectra for astrophysical transient phenomena affected by circumstellar interaction. It uses Monte Carlo methods and multigroup structured opacity calculations for semi-implicit, semirelativistic radiation transport in high-velocity shocked outflows, and can reproduce spectra of typical Type Ia, Type IIP, and Type IIn supernovae. SuperLite also generates high-quality spectra that can be compared with observations of transient events, including superluminous supernovae, pulsational pair-instability supernovae, and other peculiar transients.

[ascl:2406.017] ytree: yt-based merger-tree code

ytree reads and works with merger tree data from multiple formats. An extension of yt (ascl:1011.022), which can analyze snapshots from cosmological simulations, ytree can be thought of as the yt of merger trees. ytree's online documentation lists supported formats; support for additional formats can be added, as in principle, any type of tree-like data where an object has one or more ancestors and a single descendant can be supported.

[ascl:2406.016] BiaPy: Bioimage analysis pipeline builder

BiaPy provides deep-learning workflows for a large variety of image analysis tasks, including 2D and 3D semantic segmentation, instance segmentation, object detection, image denoising, single image super-resolution, self-supervised learning and image classification. Though developed specifically for bioimages, it can be used for watershed-based instance segmentation for friends-of-friends proto-haloes.

[ascl:2406.015] FLORAH: Galaxy merger tree generator with machine learning

FLORAH generates the assembly history of halos using a recurrent neural network and normalizing flow model. The machine-learning framework can be used to combine multiple generated networks that are trained on a suite of simulations with different redshift ranges and mass resolutions. Depending on the training, the code recovers key properties, including the time evolution of mass and concentration, and the galaxy stellar mass versus halo mass relation and its residuals. FLORAH also reproduces the dependence of clustering on properties other than mass, and is a step towards a machine learning-based framework for planting full merger trees.

[ascl:2406.014] EVA: Excess Variability-based Age

EVA (Excess Variability-based Age) computes the VarX values and VarX90 ages for a given list of stars. The package retrieves information from Gaia, performs basic var90 calculations, then calculates the age of the group in a given band or overall (by combining all three bands). EVA then analyzes and plots the results.

[ascl:2406.013] AAD: ALeRCE Anomaly Detector

The ALeRCE anomaly detector cross-validates six anomaly detection algorithms for three classes (transient, periodic, and stochastic) of anomalous sources within the Zwicky Transient Facility (ZTF) data stream using the ALeRCE light curve features. A machine and deep learning-based framework is used for anomaly detection. For each class, a distinct anomaly detection model is constructed using only information about the known objects (i.e., inliers) for training. An anomaly score is then computed from the class probabilities indicating whether the light curve is of a transient, stochastic, or periodic nature.

[ascl:2406.012] QMC: Quadratic Monte Carlo

Quadratic Monte Carlo generates ensembles of models and confines fitness landscapes without relying on linear stretch moves; it works very efficiently for the ring potential and the Rosenbrock density. The method is general and can be implemented in any existing MC software, requiring only a few lines of code.

[ascl:2406.011] CTC: Color transformations calculator

Color transformations calculator determines the magnitude of a galaxy in a needed photometric band, given its color and magnitude in the original band. It supports various optical and near-infrared surveys, including SDSS, DECaLS, DELVE, UKIDSS, VHS, and VIKING, and provides conversions for both total and aperture magnitudes with apertures of 1.5", 2" or 3" diameters. The source code, useful for performing bulk calculations, is available in Python and IDL; the calculator is also offered as a web service.

[ascl:2406.010] PRyMordial: Precise computations of BBN within and beyond the Standard Model

PRyMordial offers fast and precise evaluation of both the Big Bang Nucleosynthesis (BBN) light-element abundances and the effective number of relativistic degrees of freedom. It can be used within and beyond the Standard Model. The package calculates Neff and helium-4, deuterium, helium-3 and lithium-7 abundances. PRyMordial corrects for QED plasma effects, neutron lifetime, and incomplete neutrino decoupling, and includes an optional module that re-elaborates all the ODE systems of the code in Julia.

[ascl:2406.009] CBiRd: Bias tracers In Redshift space

CBiRd (Code for Bias tracers In Redshift space) provides correlators in the Effective Field Theory of Large-Scale Structure (EFTofLSS) in a ready-to-use pipeline for cosmological analysis of galaxy-redshift survey data. It provides a core calculation package (C++BiRd), a Python implementation of a Taylor expansion of the power spectrum around a reference cosmology for efficient evaluation (TBiRd), and libraries to correct for observational systematics. CBiRd also provides MCMC samplers (MCBiRd) for power spectrum and bispectrum analyses of galaxy-redshift survey data based on emcee (ascl:1303.002), and can provide an earlybird pass to explore the cosmos with LSS surveys.

[ascl:2406.008] sphereint: Integrate data on a grid within a sphere

sphereint calculates the numerical volume within a sphere. It provides a weight for each grid position based on whether it is inside (weight = 1), outside (weight = 0), or partially inside (weight between 0 and 1) a sphere of a given radius. A cubic cell is placed around each grid position, and the volume of the cell inside the sphere (assuming a flat sphere surface across the cell) is calculated and normalized by the cell volume to obtain the weight.
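
A sketch of the weighting idea: sphereint itself uses an analytic flat-surface approximation for the in-sphere fraction of each cubic cell, while the toy version below approximates that fraction by subsampling each cell. It illustrates the same weight definition (1 inside, 0 outside, fractional at the boundary) without reproducing the code's exact method.

    # Fractional in-sphere weight per grid cell, approximated by subsampling.
    import numpy as np

    def cell_weight(center, cell_size, radius, nsub=8):
        o = (np.arange(nsub) + 0.5)/nsub - 0.5        # sub-cell offsets in [-0.5, 0.5)
        gx, gy, gz = np.meshgrid(o, o, o, indexing='ij')
        pts = center + cell_size*np.stack([gx, gy, gz], axis=-1).reshape(-1, 3)
        return np.mean((pts**2).sum(1) <= radius**2)  # fraction of sub-points inside the sphere

    # Example: weights on a unit grid for a sphere of radius 2.5 centered at the origin.
    grid = np.array([[x, y, z] for x in range(-3, 4)
                     for y in range(-3, 4) for z in range(-3, 4)], float)
    w = np.array([cell_weight(p, 1.0, 2.5) for p in grid])
    print(w.sum(), 4/3*np.pi*2.5**3)                  # weighted volume vs analytic sphere volume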

[ascl:2406.007] CARDiAC: Anisotropic Redshift Distributions in Angular Clustering

CARDiAC (Code for Anisotropic Redshift Distributions in Angular Clustering) computes the impact of anisotropic redshift distributions on a wide class of angular clustering observables. It supports auto- and cross-correlations of galaxy samples and cosmic shear maps, including galaxy-galaxy lensing. The anisotropy can be present in the mean redshift and/or width of Gaussian distributions, as well as in the fraction of galaxies in each component of multi-modal distributions. Templates of these variations can be provided by the user or simulated internally within the code.

[ascl:2406.006] anzu: Measurements and emulation of Lagrangian bias models for clustering and lensing cross-correlations

The anzu package offers two independent codes for hybrid Lagrangian bias models in large-scale structure. The first code measures the hybrid "basis functions"; the second takes measurements of these basis functions and constructs an emulator to obtain predictions from them at any cosmology (within the bounds of the training set). anzu is self-contained; given a set of N-body simulations used to build emulators, it measures the basis functions. Alternatively, given measurements of the basis functions, anzu should in principle be useful for constructing a custom emulator.

[ascl:2406.005] Lenser: Measure weak gravitational flexion

Lenser estimates weak gravitational lensing signals, particularly flexion, from real survey data or realistically simulated images. Lenser employs a hybrid of image moment analysis and an Analytic Image Modeling (AIM) analysis. In addition to extracting flexion measurements by fitting a (modified Sérsic) model to a single image of a galaxy, Lenser can do multi-band, multi-epoch fitting. In multi-band mode, Lenser fits a single model to multiple postage stamps, each representing an exposure of a single galaxy in a particular band.

[ascl:2406.004] candl: Differentiable likelihood framework for analyzing CMB power spectrum measurements

candl (CMB Analysis With A Differentiable Likelihood) analyzes CMB power spectrum measurements using a differentiable likelihood framework. It is compatible with JAX (ascl:2111.002), though JAX is optional, allowing for fast and easy computation of gradients and Hessians of the likelihoods. candl provides interface tools for working with other cosmology software packages, including Cobaya (ascl:1910.019) and MontePython (ascl:1805.027). The package also provides auxiliary tools for common analysis tasks, such as generating mock data, and supports the analysis of primary CMB and lensing power spectrum data.

[ascl:2406.003] SMART: Spectral energy distribution (SED) fitter

SMART (Spectral energy distributions Markov chain Analysis with Radiative Transfer models) implements a Bayesian Markov chain Monte Carlo (MCMC) method to fit the ultraviolet to millimeter spectral energy distributions (SEDs) of galaxies exclusively with radiative transfer models. The models constitute four types of pre-computed libraries, which describe the starburst, active galactic nucleus (AGN) torus, host galaxy and polar dust components.

[ascl:2406.002] SRF: Scaling Relations Finder

Scaling Relations Finder finds the scaling relations between magnetic field properties and observables for a model of galactic magnetic fields. It creates galactic dynamo models from observable quantities: the galaxy rotation curve, the surface densities of gas, stars, and star formation rate, and the gas temperature. These models can be used to estimate parameters of the random and mean components of the magnetic field, as well as the gas scale height, root-mean-square velocity, and the correlation length and time of the interstellar turbulence, in terms of the observables.

[ascl:2406.001] GAStimator: Python MCMC gibbs-sampler with adaptive stepping

GAStimator implements a Python MCMC Gibbs sampler with adaptive stepping. The code is simple, robust, stable, and well suited to high-dimensional problems with many degrees of freedom and very sharp likelihood features. It has been used extensively for kinematic modeling of molecular gas in galaxies, but is fully general and may be used for any problem MCMC methods can tackle.
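
A generic adaptive Metropolis-within-Gibbs sketch of the technique named above (not GAStimator's API): each parameter is updated in turn with a Gaussian proposal whose width adapts toward a target acceptance rate.

    # Adaptive Metropolis-within-Gibbs sampler for an arbitrary log-likelihood.
    import numpy as np

    def adaptive_gibbs_mcmc(log_like, x0, nsteps=5000, target_accept=0.44, seed=1):
        rng = np.random.default_rng(seed)
        x = np.array(x0, float)
        step = np.full(x.size, 0.1)
        chain, lp = [], log_like(x)
        for it in range(nsteps):
            for i in range(x.size):                   # Gibbs sweep: one parameter at a time
                prop = x.copy()
                prop[i] += step[i]*rng.normal()
                lp_prop = log_like(prop)
                accept = np.log(rng.uniform()) < lp_prop - lp
                if accept:
                    x, lp = prop, lp_prop
                # adapt the per-parameter step size toward the target acceptance rate
                step[i] *= np.exp((accept - target_accept)/np.sqrt(it + 1))
            chain.append(x.copy())
        return np.array(chain)

    # Toy usage: sample a 3D Gaussian likelihood.
    samples = adaptive_gibbs_mcmc(lambda p: -0.5*np.sum(p**2), x0=[3.0, -2.0, 1.0])
    print(samples.mean(axis=0), samples.std(axis=0))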

[ascl:2405.025] CosmoPower: Machine learning-accelerated Bayesian inference

CosmoPower develops Bayesian inference pipelines that leverage machine learning to solve inverse problems in science. While the emphasis is on building algorithms to accelerate Bayesian inference in cosmology, the implemented methods allow for their application across a wide range of scientific fields. CosmoPower provides neural network emulators of matter and Cosmic Microwave Background power spectra, which can replace Boltzmann codes such as CAMB (ascl:1102.026) or CLASS (ascl:1106.020) in cosmological inference pipelines, to source the power spectra needed for two-point statistics analyses. This provides orders-of-magnitude acceleration to the inference pipeline and integrates naturally with efficient techniques for sampling very high-dimensional parameter spaces.

[ascl:2405.024] ndcube: Multi-dimensional contiguous and non-contiguous coordinate-aware arrays

ndcube manipulates, inspects, and visualizes multi-dimensional contiguous and non-contiguous coordinate-aware data arrays. A sunpy (ascl:1401.010) affiliated package, it combines data, uncertainties, units, metadata, masking, and coordinate transformations into classes with unified slicing and generic coordinate transformations and plotting and animation capabilities. ndcube handles data of any number of dimensions and axis types (e.g., spatial, temporal, and spectral) whose relationship between the array elements and the real world can be described by World Coordinate System (WCS) translations.

[ascl:2405.023] raccoon: Radial velocities and Activity indicators from Cross-COrrelatiON with masks

raccoon implements the cross-correlation function (CCF) method. It builds weighted binary masks from a stellar spectrum template, computes the CCF of stellar spectra with a mask, and derives radial velocities (RVs) and activity indicators from the CCF. raccoon is mainly implemented in Python 3; it also uses some Fortran subroutines that are called from Python.

[ascl:2405.022] blackthorn: Spectra from right-handed neutrino decays

blackthorn generates spectra of dark matter annihilations into right-handed (RH) neutrinos or into particles that result from their decay. These spectra include photons, positrons, and neutrinos. The code provides support for varied RH-neutrino masses ranging from MeV to TeV by incorporating hazma, PPPC4DMID, and HDMSpectra models to compute dark matter annihilation cross sections and mediator decay widths. blackthorn also computes decay branching fractions and partial decay widths.

[ascl:2405.021] PALpy: Python positional astronomy library interface

PALpy provides a Python interface to PAL, the Positional Astronomy Library (ascl:1606.002), which is written in C. All arguments modified by the C API are returned and none are modified in place. The one routine that differs is palObs, which returns a simple dict that can be searched using standard Python. The keys of the dict are the short telescope names, and each value is another dict with keys name, long, lat and height.

[ascl:2405.020] tapify: Multitaper spectrum for time-series analysis

tapify implements a suite of multitaper spectral estimation techniques for analyzing time series data. It supports analysis of both evenly and unevenly sampled time series data. The multitaper statistic tackles the problems of bias and consistency, which makes it an improvement over the classical periodogram for evenly sampled data and the Lomb-Scargle periodogram for uneven sampling. In basic statistical terms, this estimator provides a confident look at the properties of a time series in the frequency or Fourier domain.
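
A sketch of the classical multitaper (DPSS) spectrum estimate for the evenly sampled case: eigenspectra from orthogonal Slepian tapers are averaged. The uneven-sampling generalization handled by tapify is not reproduced here, and the time-bandwidth and taper-count values are illustrative.

    # Classical multitaper power spectrum via DPSS (Slepian) tapers.
    import numpy as np
    from scipy.signal.windows import dpss

    def multitaper_psd(x, dt, NW=4.0, K=7):
        n = len(x)
        tapers = dpss(n, NW, Kmax=K)                  # K orthogonal Slepian tapers
        freqs = np.fft.rfftfreq(n, dt)
        eigenspecs = [np.abs(np.fft.rfft(t*x))**2*dt/np.sum(t**2) for t in tapers]
        return freqs, np.mean(eigenspecs, axis=0)     # average the K eigenspectra

    # Toy usage: recover a 0.1 Hz sinusoid in white noise.
    t = np.arange(2048)*1.0
    x = np.sin(2*np.pi*0.1*t) + np.random.normal(size=t.size)
    f, psd = multitaper_psd(x, dt=1.0)
    print(f[np.argmax(psd)])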

[ascl:2405.019] coronagraph: Python noise model for directly imaging exoplanets

coronagraph provides a Python noise model for directly imaging exoplanets with a coronagraph-equipped telescope. Based on the original IDL code for this coronagraph model, coronagraph_noise (ascl:2405.018), the Python version has been expanded in a few key ways. Most notably, the Telescope, Planet, and Star objects used for reflected light coronagraph noise modeling can now be used for transmission and emission spectroscopy noise modeling, making this model a general purpose exoplanet noise model for many different types of observations.

[ascl:2405.018] coronagraph_noise: Coronagraph noise modeling routines

coronagraph_noise simulates coronagraph noise. Written in IDL, the code includes a generalized coronagraph routine and simulators for the WFIRST Shaped Pupil Coronagraph in both spectroscopy and imaging modes. Functions available include stellar and planetary flux functions, planet photon and zodiacal light count rates, planet-star flux ratio, and clock-induced-charge count rate, among others. coronagraph_noise also includes routines to smooth a spectrum by convolving it with a Gaussian profile matching a given instrument resolution, and to degrade a spectrum specified at high spectral resolution to a lower resolution. A Python implementation of coronagraph_noise, coronagraph (ascl:2405.019), is also available.

[ascl:2405.017] AFINO: Automated Flare Inference of Oscillations

AFINO (Automated Flare Inference of Oscillations) finds oscillations in time series data using a Fourier-based model comparison approach. The code analyzes the data and generates a results file in either JSON or Pickle format, which contains numerous properties of the data and analysis, and a summary plot.

[ascl:2405.016] ABBHI: Autoregressive binary black hole inference

autoregressive-bbh-inference, written in Python, models the distributions of binary black hole masses, spins, and redshifts to identify physical features appearing in these distributions without the need for strongly-parametrized population models. This allows not only agnostic study of the “known unknowns” of the black hole population but also reveals the “unknown unknowns,” the unexpected and impactful features that may otherwise be missed by the standard building-block method.

[ascl:2405.015] sunbather: Escaping exoplanet atmospheres and transit spectra simulator

sunbather simulates the upper atmospheres of exoplanets and their observational signatures. The code constructs 1D Parker wind profiles using p-winds (ascl:2111.011), simulates them with Cloudy (ascl:9910.001), and postprocesses the output with a custom radiative transfer module to predict the transmission spectra of exoplanets.

[ascl:2405.014] EF-TIGRE: Effective Field Theory of Interacting dark energy with Gravitational REdshift

EF-TIGRE (Effective Field Theory of Interacting dark energy with Gravitational REdshift) constrains interacting Dark Energy/Dark Matter models in the Effective Field Theory framework through Large Scale Structures observables. In particular, the observables include the effect of gravitational redshift, a distortion of time from galaxy clustering. This generates a dipole in the correlation function which is detectable with two distinct populations of galaxies, thus making it possible to break degeneracies among parameters of the EFT description.

[ascl:2405.013] LTdwarfIndices: Variable brown dwarf identifier

LTdwarfIndices studies spectral indices to determine whether one or more brown dwarfs are photometric variable candidates. For a single brown dwarf, it analyzes a given set of indices and outputs the number of plots in which the object falls in the variable region, whether it is a variable or non-variable candidate, and, optionally, an index-index or histogram plot. Using another code module, LTdwarfIndices can also analyze a set of sample indices for many brown dwarfs.

[ascl:2405.012] fitramp: Likelihood-based jump detection

fitramp fits a ramp to a series of nondestructive reads and detects and rejects jumps. The software performs likelihood-based jump detection for detectors read out up-the-ramp; it uses the entire set of reads to compute likelihoods. The code compares the χ2 value of a fit with and without a jump for every possible jump location. fitramp can fit ramps with and without fitting the reset value (the pedestal), and fit and mask jumps within or between groups of reads. It can also compute the bias of ramp fitting.
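
A minimal illustration of the jump-detection idea described above: a straight-line ramp is fit to the reads with and without a step at each candidate location, and the chi-squared values are compared. The real fitramp code works with the full read covariance and likelihoods; independent, equal read noise is assumed here purely for simplicity.

    # Chi-squared scan over possible jump locations in a ramp of reads.
    import numpy as np

    def jump_chi2_scan(reads, read_noise=1.0):
        n = len(reads)
        t = np.arange(n, dtype=float)
        base = np.column_stack([np.ones(n), t])       # pedestal + slope
        def chi2(design):
            coef, *_ = np.linalg.lstsq(design, reads, rcond=None)
            return np.sum((reads - design @ coef)**2)/read_noise**2
        chi2_nojump = chi2(base)
        scan = []
        for k in range(1, n):                         # jump between read k-1 and k
            step = (t >= k).astype(float)
            scan.append(chi2_nojump - chi2(np.column_stack([base, step])))
        return chi2_nojump, np.array(scan)            # large delta-chi2 => likely jump

    reads = 2.0*np.arange(20) + np.random.normal(0, 1.0, 20)
    reads[12:] += 15.0                                # inject a cosmic-ray jump
    chi2_0, dchi2 = jump_chi2_scan(reads)
    print(1 + np.argmax(dchi2))                       # most likely jump location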

[ascl:2405.011] DirectSHT: Direct spherical harmonic transform

DirectSHT performs direct spherical harmonic transforms for point sets on the sphere. Given a set of points, defined by arrays of theta and phi (in radians) and weights, it provides the spherical harmonic transform coefficients alm. JAX (ascl:2111.002) can be used to speed up the computation; the code will automatically fall back to numpy if JAX is not present. The code is much faster when run on GPUs. When they are available and JAX is installed, the code automatically distributes computation and memory across them.
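
A brute-force sketch of a direct spherical harmonic transform for a weighted point set: each coefficient is the weighted sum of conjugated spherical harmonics at the point positions. Normalization and phase conventions, as well as DirectSHT's actual interface and fast algorithm, are not reproduced; scipy's sph_harm is used for illustration.

    # Direct (brute-force) SHT of a weighted point set on the sphere.
    import numpy as np
    from scipy.special import sph_harm

    def direct_alm(theta, phi, w, lmax):
        """theta: colatitude [rad], phi: azimuth [rad], w: point weights."""
        alm = {}
        for l in range(lmax + 1):
            for m in range(-l, l + 1):
                ylm = sph_harm(m, l, phi, theta)      # scipy argument order: (m, l, azimuth, colatitude)
                alm[(l, m)] = np.sum(w*np.conj(ylm))
        return alm

    # Toy usage: points drawn uniformly on the sphere, unit weights.
    npts = 500
    theta = np.arccos(np.random.uniform(-1, 1, npts))
    phi = np.random.uniform(0, 2*np.pi, npts)
    alm = direct_alm(theta, phi, np.ones(npts), lmax=4)
    print(alm[(0, 0)], npts/np.sqrt(4*np.pi))         # monopole ~ N * Y_00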

[submitted] Estimating photo-z of quasars based on a cross-modal contrastive learning method

MMLPhoto-z is a cross-modal contrastive learning approach for estimating photo-z of quasars. This method employs adversarial training and contrastive loss functions to promote the mutual conversion between multi-band photometric data features (magnitude, color) and photometric image features, while extracting modality-invariant features.

[ascl:2405.010] riddler: Type Ia supernovae spectral time series fitter

riddler automates the fitting of type Ia supernova spectral time series. The code comprises a series of neural networks trained to emulate radiative transfer simulations from TARDIS (ascl:1402.018). Emulated spectra are fit to observations using nested sampling implemented in UltraNest (ascl:1611.001) to estimate the posterior distributions of model parameters and evidences.

[ascl:2405.009] morphen: Astronomical image analysis and processing functions

morphen performs image analysis, multi-Sersic image fitting decomposition, and radio interferometric self-calibration, measuring basic image morphology and photometry. The code provides a state-of-the-art Python-based image fitting implementation based on the Sersic function. Geared, though not exclusively, toward radio astronomy, morphen's tools are written in pure Python but are also integrated with CASA (ascl:1107.013) in order to work with common casatasks as well as WSClean (ascl:1408.023).

[ascl:2405.008] i-SPin: Multicomponent Schrodinger-Poisson systems with self-interactions

i-SPin simulates 3-component Schrodinger systems with and without gravity and with and without self-interactions while obeying SO(3) symmetry. The code allows the user to input desired parameters, along with initial conditions for the Schrodinger fields. Its three function modules then perform the main (drift-kick-drift) steps of the algorithm, track the fractional changes in total mass and spin in the system, and then plot results. The default plots are mass and spin density projections along with total mass and spin fractional changes.

[ascl:2405.007] GauPro: R package for Gaussian process modeling

GauPro fits a Gaussian process regression model to a dataset. A Gaussian process (GP) is a commonly used model in computer simulation. It assumes that the distribution of any set of points is multivariate normal. A major benefit of GP models is that they provide uncertainty estimates along with their predictions.
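
GauPro itself is an R package; the numpy sketch below simply illustrates what a Gaussian process regression model provides, namely a posterior mean prediction together with an uncertainty estimate, here with a squared-exponential kernel and fixed, illustrative hyperparameters.

    # Gaussian process regression: posterior mean and 1-sigma uncertainty.
    import numpy as np

    def sq_exp_kernel(a, b, length=0.3, var=1.0):
        d2 = (a[:, None] - b[None, :])**2
        return var*np.exp(-0.5*d2/length**2)

    def gp_predict(x_train, y_train, x_test, noise=1e-4):
        K = sq_exp_kernel(x_train, x_train) + noise*np.eye(len(x_train))
        Ks = sq_exp_kernel(x_train, x_test)
        Kss = sq_exp_kernel(x_test, x_test)
        alpha = np.linalg.solve(K, y_train)
        mean = Ks.T @ alpha                                   # posterior mean
        cov = Kss - Ks.T @ np.linalg.solve(K, Ks)             # posterior covariance
        return mean, np.sqrt(np.clip(np.diag(cov), 0, None))  # mean and 1-sigma error

    x = np.random.uniform(0, 1, 15)
    y = np.sin(2*np.pi*x)
    xt = np.linspace(0, 1, 5)
    mu, sigma = gp_predict(x, y, xt)
    print(np.c_[xt, mu, sigma])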

[ascl:2405.006] ICPertFLRW: Cactus Code thorn for initial conditions

ICPertFLRW, a Cactus code (ascl:1102.013) thorn, provides as initial conditions an FLRW metric perturbed with the comoving curvature perturbation Rc in the synchronous comoving gauge. Rc is defined as a sum of sinusoidals (20 in each x, y, and z direction) whose amplitude, wavelength, and phase shift are all parameters in param.ccl. While the metric and extrinsic curvature only have first order scalar perturbations, the energy density is computed exactly in full from the Hamiltonian constraint, hence vector and tensor perturbations are initially present at higher order. These are then passed to the CT_Dust thorn to be evolved.

[ascl:2405.005] pySPEDAS: Python-based Space Physics Environment Data Analysis Software

pySPEDAS (Python-based Space Physics Environment Data Analysis Software) supports multi-mission, multi-instrument retrieval, analysis, and visualization of heliophysics time series data. A Python implementation of SPEDAS (ascl:2405.001), it supports most of the capabilities of SPEDAS; it can load heliophysics data sets from more than 30 space-based and ground-based missions, perform coordinate transforms, interpolation, and unit conversions, and provide interactive access to numerous data sets. pySPEDAS also creates multi-mission, multi-instrument figures, includes field and wave analysis tools, and performs magnetic field modeling, among other functions.

[ascl:2405.004] pyADfit: Nested sampling approach to quasi-stellar object (QSO) accretion disc fitting

pyADfit models accretion discs around astrophysical objects. The code provides functions to calculate physical quantities related to accretion disks and perform parameter estimation using observational data. The accretion disc model is the alpha-disc model while the parameter estimation can be performed with Nessai (ascl:2405.002), Raynest (ascl:2405.003), or CPnest (ascl:2205.021).

[ascl:2405.003] raynest: Parallel nested sampling based on ray

raynest, written in Python, computes Bayesian evidences and probability distributions using parallel chains.

[ascl:2405.002] nessai: Nested sampling with artificial intelligence

nessai performs nested sampling for Bayesian Inference and incorporates normalizing flows. It is designed for applications where the Bayesian likelihood is computationally expensive. nessai uses PyTorch and also supports the use of bilby (ascl:1901.011).

[ascl:2405.001] SPEDAS: Space Physics Environment Data Analysis System

The SPEDAS (Space Physics Environment Data Analysis Software) framework supports multi-mission data ingestion, analysis and visualization for the Space Physics community. It standardizes the retrieval of data from distributed repositories, the scientific processing with a powerful set of legacy routines, the quick visualization with full output control and the graph creation for use in papers and presentations. SPEDAS includes a GUI for ease of use by novice users, works on multiple platforms, and though based on IDL, can be used with or without an IDL license. The framework supports plugin modules for multiple projects such as THEMIS, MMS, and WIND, and provides interfaces for software modules developed by the individual teams of those missions. A Python implementation of the framework, PySPEDAS (ascl:2405.005), is also available.

[submitted] Swiftest

Swiftest is a software package designed to model the long-term dynamics of systems of bodies in orbit around a dominant central body, such as a planetary system around a star or a satellite system around a planet. The main body of the program is written in Modern Fortran, taking advantage of the object-oriented capabilities included with Fortran 2003 and the parallel capabilities included with Fortran 2008 and Fortran 2018. Swiftest also includes a Python package that allows the user to quickly generate input, run simulations, and process output from the simulations. Swiftest uses a NetCDF output file format, which makes data analysis with the Swiftest Python package a streamlined and flexible process for the user. Building off a strong legacy, including its predecessors Swifter and Swift, Swiftest takes the next step in modeling the dynamics of planetary systems by improving the performance and ease of use of the software, and by introducing a new collisional fragmentation model. Currently, Swiftest includes the four main symplectic integrators included in its predecessors: WHM, RMVS, HELIO, and SyMBA. In addition, Swiftest contains the Fraggle model for generating the products of collisional fragmentation.

[submitted] PypeIt-NIRSPEC: A PypeIt Module for Reducing Keck/NIRSPEC High Resolution Spectra

We present a module built into the PypeIt Python package to reduce high resolution Y, J, H, K, and L band spectra from the W. M. Keck Observatory NIRSPEC spectrograph. This data reduction pipeline is capable of spectral extraction, wavelength calibration, and telluric correction of data taken before and after the 2018 detector upgrade, all in a single package. The procedure for reducing data is thoroughly documented in an expansive tutorial.

[submitted] BFast

A fast GPU-based bispectrum estimator implemented using JAX.

[ascl:2404.030] RhoPop: Small-planet populations identifier

RhoPop identifies compositionally distinct populations of small planets (R ≲ 2 R⊕). It employs mixture models in a hierarchical framework and the dynesty (ascl:1809.013) nested sampler for parameter and evidence estimates. RhoPop includes a density-mass grid of water-rich compositions covering water mass fractions (WMF) of 0-1.0 and a grid of volatile-free rocky compositions over a core mass fraction (CMF) range of 0.006-0.95. Both grids were calculated using the ExoPlex mass-radius-composition calculator (ascl:2404.029).

[ascl:2404.029] ExoPlex: Thermodynamically self-consistent mass-radius-composition calculator

ExoPlex is a thermodynamically self-consistent mass-radius-composition calculator. Users input a bulk molar composition and a mass or radius, and ExoPlex will calculate the resulting radius or mass. Additionally, it will produce the planet's core mass fraction, interior mineralogy and the pressure, adiabatic temperature, gravity and density profiles as a function of depth.

[ascl:2404.028] binary_precursor: Light curve model of supernova precursors powered by compact object companions

binary_precursor models light curves of supernova (SN) precursors powered by a pre-SN outburst accompanying accretion onto a compact object companion. Though it is only one of the possible models, it is useful for interpretations of (bright) SN precursors highly exceeding the Eddington limit of massive stars, which are observed in a fraction of SNe with dense circumstellar matter (CSM) around the progenitor. It offers a number of editable parameters, including compact object mass, progenitor mass, progenitor radii, and opacity. Initial CSM velocity can be normalized by the progenitor escape velocity (xi parameter), and the CSM mass, ionization temperature, and binary separation can also be specified.

[ascl:2404.027] s2fft: Differentiable and accelerated spherical transforms

S2FFT computes Fourier transforms on the sphere and rotation group using JAX (ascl:2111.002) or PyTorch. It leverages autodiff to provide differentiable transforms, which are also deployable on hardware accelerators (e.g., GPUs and TPUs). More specifically, S2FFT provides support for spin spherical harmonic and Wigner transforms (for both real and complex signals), with support for adjoint transformations where needed, and comes with different optimisations (precompute or not) that one may select depending on available resources and desired angular resolution L.

[ascl:2404.026] LEO-vetter: Automated vetting for TESS planet candidates

LEO-vetter automatically vets transit signals found in light curve data. Inspired by the Kepler Robovetter (ascl:2012.006), LEO-vetter computes vetting metrics to be compared to a series of pass-fail thresholds. If a signal passes all tests, it is considered a planet candidate (PC). If a signal fails at least one test, it may be either an astrophysical false positive (FP; e.g., eclipsing binary, nearby eclipsing signal) or false alarm (FA; e.g., systematic, stellar variability). Pass-fail thresholds can be changed to suit individual research purposes, and LEO-vetter produces vetting reports for manual inspection of signals. Flux-level vetting can be applied to any light curve dataset (such as Kepler, K2, and TESS), including light curves with mixes of cadences, while pixel-level vetting has been implemented for TESS.
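
A sketch of the pass-fail dispositioning logic described above. The metric names and threshold values are illustrative placeholders, not LEO-vetter's actual metrics or defaults.

    # Pass-fail vetting: a signal is a planet candidate only if it passes every test.
    def disposition(metrics, thresholds):
        """metrics/thresholds: dicts keyed by metric name; a metric fails if it exceeds its threshold."""
        failed = [name for name, value in metrics.items() if value > thresholds[name]]
        if not failed:
            return "PC", failed                   # passed every test: planet candidate
        return "FP/FA", failed                    # failed at least one test

    thresholds = {"odd_even_sigma": 3.0, "centroid_offset_sigma": 3.0}
    metrics = {"odd_even_sigma": 1.2, "centroid_offset_sigma": 4.5}
    print(disposition(metrics, thresholds))       # -> ('FP/FA', ['centroid_offset_sigma'])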

[ascl:2404.025] stringgen: Scattering based cosmic string emulation

stringgen creates emulations of cosmic string maps with statistics similar to those of a single (or small ensemble) of reference simulations. It uses wavelet phase harmonics to calculate a compressed representation of these reference simulations, which may then be used to synthesize new realizations with accurate statistical properties, e.g., 2 and 3 point correlations, skewness, kurtosis, and Minkowski functionals.

[ascl:2404.024] pAGN: AGN disk model equations solver

Written in Python, pAGN solves AGN disk model equations. The code is highly customizable and, with the correct inputs, provides a fully evolved AGN disk model through parametric 1D curves for key disk parameters such as temperature and density. pAGN can be used to study migration torques in AGN disks, simulations of compact object formation inside gas disks, and comparisons with new, more complex models of AGN disks.

[ascl:2404.023] mhealpy: Object-oriented healpy wrapper with support for multi-resolution maps

mhealpy extends the functionalities of the HEALPix (ascl:1107.018) wrapper healpy (ascl:2008.022) to handle single- and multi-resolution maps (a.k.a. multi-order coverage, or MOC, maps). In addition to creating and analyzing MOC maps, it supports arithmetic operations, adaptive grids, resampling of existing multi-resolution maps, and plotting, among other functions, and reads and writes to FITS, which enables sharing spatial information for multiwavelength and multimessenger analyses.

[ascl:2404.022] jetsimpy: Hydrodynamic model of gamma-ray burst jet and afterglow

jetsimpy creates hydrodynamic simulations of relativistic blastwaves with tabulated angular energy and Lorentz factor profiles and efficiently models gamma-ray burst afterglows. It supports ISM, wind, and mixed external density profiles, and computes synthetic afterglow light curves, apparent superluminal motion, sky maps, and Gaussian-equivalent image sizes. Additionally, users can add their own emissivity model by defining a lambda function in a C++ source file, allowing the package to be used for more complicated models such as synchrotron self-absorption.

[ascl:2404.021] cudisc: CUDA-accelerated 2D code for protoplanetary disc evolution simulations

cuDisc simulates the evolution of protoplanetary discs in both the radial and vertical dimensions, assuming axisymmetry. The code performs 2D dust advection-diffusion, dust coagulation/fragmentation, and radiative transfer. A 1D evolution model is also included, with the 2D gas structure calculated via vertical hydrostatic equilibrium. cuDisc requires an NVIDIA GPU.

[ascl:2404.020] NbodyIMRI: N-body solver for intermediate-mass ratio inspirals of black holes and dark matter spikes

NbodyIMRI uses N-body simulations to study Dark Matter-dressed intermediate-mass ratio inspirals (IMRI) and extreme mass ratio inspiral (EMRI) systems. The code calculates all BH-BH forces and BH-DM forces directly while neglecting DM-DM pairwise interactions. This allows the code to scale up to very large numbers of DM particles in order to study stochastic processes like dynamical friction.

[ascl:2404.019] PySSED: Python Stellar Spectral Energy Distributions

PySSED (Python Stellar Spectral Energy Distributions) downloads and extracts data from multi-wavelength catalogs of astronomical objects and regions of interest and automatically processes the photometry into one or more stellar SEDs. It then fits those SEDs with stellar parameters. PySSED can be run directly from the command line or as a module within a Python environment. The package offers a wide variety of plots, including Hertzsprung–Russell diagrams of analyzed objects, angular separation between sources in specific catalogs, and two-dimensional offsets between cross-matches.

[ascl:2404.018] GPUniverse: Quantum fields in finite dimensional Hilbert spaces modeler

GPUniverse models quantum fields in finite dimensional Hilbert spaces with Generalised Pauli Operators (GPOs) and overlapping degrees of freedom. In addition, the package can simulate sets of qubits that are only quasi independent (i.e., the Pauli algebras of different qubits have small, but non-zero anti-commutator), which is useful for validating analytical results for holographic versions of the Weyl field.

[ascl:2404.017] pyilc: Needlet ILC in Python

pyilc implements the needlet internal linear combination (NILC) algorithm for CMB component separation in pure Python; it also implements harmonic-space ILC. The code can also perform Cross-ILC, where the covariance matrices are computed only from independent splits of the maps. In addition, pyilc includes an inpainting code, diffusive_inpaint, that diffusively inpaints a masked region with the mean of the unmasked neighboring pixels.

[ascl:2404.016] MLTPC: Machine Learning Telescope Pointing Correction

The Machine Learning Telescope Pointing Correction code trains and tests machine learning models for correcting telescope pointing. Using historical APEX data from 2022, including pointing corrections, and other data such as weather conditions, position and rotation of the secondary mirror, pointing offsets observed during pointing scans, and the position of the sun, among other data, the code treats the data in two different ways to test which factors are the most likely to account for pointing errors.

[ascl:2404.015] EBWeyl: Compute the electric and magnetic parts of the Weyl tensor

EBWeyl computes the electric and magnetic parts of the Weyl tensor, Eαβ and Bαβ, using a 3+1 slicing formulation. The module provides a Finite Differencing class with 4th-order (default) and 6th-order backward, centered, and forward schemes. Periodic boundary conditions are used by default; otherwise, a combination of the three schemes is available. It also includes a Weyl class that computes, for a given metric, the variables of the 3+1 formalism, the spatial Christoffel symbols, the spatial Ricci tensor, the electric and magnetic parts of the Weyl tensor projected along the normal to the hypersurface and the fluid flow, the Weyl scalars, and invariant scalars. EBWeyl can also compute the determinant and inverse of a 3x3 or 4x4 matrix at every position of a data box.
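
An illustration of the default differencing mentioned above: a fourth-order centered stencil with periodic boundaries, applied along one axis of a data box. EBWeyl's actual class interface and its backward/forward schemes are not reproduced here.

    # Fourth-order centered first derivative with periodic boundary conditions.
    import numpy as np

    def d1_centered_4th(f, dx, axis=0):
        fp1, fm1 = np.roll(f, -1, axis), np.roll(f, 1, axis)
        fp2, fm2 = np.roll(f, -2, axis), np.roll(f, 2, axis)
        return (-fp2 + 8*fp1 - 8*fm1 + fm2)/(12*dx)   # standard 5-point stencil

    # Check against the analytic derivative of sin(x) on a periodic grid.
    n = 64
    x = np.linspace(0, 2*np.pi, n, endpoint=False)
    err = np.max(np.abs(d1_centered_4th(np.sin(x), x[1] - x[0]) - np.cos(x)))
    print(err)   # small, scaling as dx**4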

[ascl:2404.014] astroNN: Deep learning for astronomers with Tensorflow

astroNN creates neural networks for deep learning using Keras for model and training prototyping while taking advantage of Tensorflow's flexibility. It contains tools for use with APOGEE, Gaia, and LAMOST data, though it is primarily designed to apply neural nets to APOGEE spectra analysis and to predict luminosity from spectra using Gaia parallax data, with reasonable uncertainties from a Bayesian neural net. astroNN can handle 2D and 2D colored images, and the package contains custom loss functions and layers compatible with Tensorflow or Keras with the Tensorflow backend to deal with incomplete labels. The code contains a demo implementing a Bayesian neural net with dropout variational inference for reasonable uncertainty estimation, as well as other neural nets.

[ascl:2404.013] Meanoffset: Photometric image alignment with row and column means

Meanoffset performs astronomical image alignment. The code uses the means of the rows and columns of an original image for alignment and finds the optimal offset corresponding to the maximum similarity by comparing different offsets between images. The similarity is evaluated by the standard deviation of the quotient divided by the means. The code is fast and robust.
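
One plausible reading of the scheme described above, sketched for the row direction: the row means of the two images are compared at each trial offset, and the offset minimizing the fractional scatter (standard deviation over mean) of their ratio is chosen. This is an illustration of the idea only, not the code's exact similarity definition.

    # Find the row offset that best aligns two images using their row means.
    import numpy as np

    def best_row_offset(img_a, img_b, max_shift=20):
        mean_a, mean_b = img_a.mean(axis=1), img_b.mean(axis=1)
        best, best_stat = 0, np.inf
        for d in range(-max_shift, max_shift + 1):
            if d >= 0:
                q = mean_a[d:] / mean_b[:len(mean_b) - d]
            else:
                q = mean_a[:d] / mean_b[-d:]
            stat = np.std(q)/np.mean(q)          # smaller fractional scatter => more similar
            if stat < best_stat:
                best, best_stat = d, stat
        return best

    # Toy usage: the second image is the first shifted by 7 rows (plus noise).
    a = np.random.poisson(100.0, (200, 200)).astype(float)
    a[60:80, 90:110] += 500.0                    # a bright "source"
    b = np.roll(a, -7, axis=0) + np.random.normal(0, 5, a.shape)
    print(best_row_offset(a, b))                 # recovers the 7-row shift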
