galfast (Jurić et al. 2008; see also Jurić et al. 2010, BAAS, 42, 401.25) is a GPU-accelerated generator of flux-limited realizations of Galactic models. It is capable of generating catalogs for arbitrary, user-supplied Milky Way models, including empirically derived ones. The built-in model set is based on fits to SDSS stellar observations over 8000 deg² of the sky, described in the “Milky Way Tomography with SDSS” series of papers (Jurić et al. 2008; Ivezić et al. 2008; Bond et al. 2010). It includes a three-dimensional dust distribution map based on Amôres & Lépine (2005) and Schlegel et al. (1998).
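The built-in model follows the usual decomposition into thin disk, thick disk, and halo components. As a rough illustration of what such a density law looks like, here is a minimal sketch of a Jurić et al. (2008)-style model; the parameter values below are order-of-magnitude placeholders, not the actual best-fit numbers from that paper:

```python
import math

# Illustrative parameters only; the actual best-fit values are
# tabulated in Juric et al. (2008).
R_SUN = 8000.0              # Sun's Galactocentric radius [pc]
L1, H1 = 2600.0, 300.0      # thin-disk scale length / height [pc]
L2, H2 = 3600.0, 900.0      # thick-disk scale length / height [pc]
F_THICK = 0.12              # thick-disk normalization at the Sun
F_HALO = 0.005              # halo normalization at the Sun
Q_HALO, N_HALO = 0.64, 2.8  # halo flattening and power-law index

def density(R, Z, rho0=1.0):
    """Stellar number density at cylindrical (R, Z), in units of rho0
    (the local thin-disk density): two exponential disks plus an
    oblate power-law halo."""
    thin = math.exp((R_SUN - R) / L1 - abs(Z) / H1)
    thick = F_THICK * math.exp((R_SUN - R) / L2 - abs(Z) / H2)
    r_eff = math.hypot(R, Z / Q_HALO)   # flattened effective radius
    halo = F_HALO * (R_SUN / r_eff) ** N_HALO
    return rho0 * (thin + thick + halo)
```

At the solar position this evaluates to 1 + F_THICK + F_HALO times the local thin-disk density, and the disk terms fall off exponentially with height above the plane.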
Because it can use empirically derived models, galfast typically produces closer matches to the actual observed star counts and color-magnitude diagrams. In particular, galfast-generated catalogs provide the stellar component of the “Universe Model” catalogs used by the LSST Project.
A key distinguishing characteristic of galfast is its speed: it offloads the compute-intensive model-sampling computations to the GPU, with kernels written in NVIDIA C/C++ for CUDA. On a mid-range device (a single GPU of a Tesla S1070), it is typically 25x to 200x faster than comparable CPU implementations, depending on the details of the simulation being performed. This makes it possible to generate realistic catalogs to full LSST depth in hours rather than days or weeks, enabling high-precision studies of proposed science cases.
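Model sampling parallelizes well because each candidate star can be drawn and tested independently, which is exactly the pattern that maps onto one GPU thread per candidate. The sketch below illustrates this with vectorized rejection sampling in NumPy; it is not galfast's actual algorithm, just the embarrassingly parallel structure that makes the GPU offload pay off:

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_stars(n_candidates, d_max, rho, rho_max):
    """Draw star distances along one line of sight by rejection
    sampling. Every candidate is independent, so this loop-free
    formulation corresponds directly to one GPU thread per candidate."""
    # Candidate distances uniform in volume along the beam
    # (d^2 weighting within a cone of fixed solid angle).
    d = d_max * rng.random(n_candidates) ** (1.0 / 3.0)
    # Accept each candidate with probability rho(d) / rho_max.
    accept = rng.random(n_candidates) < rho(d) / rho_max
    return d[accept]

# Toy exponential falloff standing in for the full Galactic model.
rho = lambda d: np.exp(-d / 300.0)
stars = sample_stars(100_000, 2000.0, rho, 1.0)
```

The accepted distances follow the target density; on a GPU the same body would run once per thread, with the thread index selecting the candidate.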
Ultimately, by combining the acceleration techniques developed for galfast, observational error distributions derived from observations or simulations (e.g., ImSim in the case of LSST), and the storage/query engines from LSD, we should be able to fully forward-model large survey datasets. Beyond characterizing the global properties of the observed datasets, this will allow us to mine for and discover rare and unexpected features.
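The forward-modeling step amounts to pushing a noiseless model catalog through the survey's error distribution and selection function before comparing with data. A toy sketch of that step, with a made-up error-versus-magnitude law (not LSST's actual photometric error model) and a simple flux cut:

```python
import numpy as np

rng = np.random.default_rng(0)

def forward_model(mag_true, m5=24.5):
    """Apply a toy observational model to noiseless catalog magnitudes:
    Gaussian photometric scatter that grows toward the survey's limiting
    magnitude m5, followed by the detection cut. The sigma(m) law here
    is illustrative only."""
    x = 10.0 ** (0.4 * (mag_true - m5))          # flux ratio vs. the limit
    sigma = np.sqrt(0.005 ** 2 + 0.04 * x)       # toy floor + noise term
    mag_obs = mag_true + rng.normal(0.0, sigma)  # perturb by the errors
    return mag_obs[mag_obs < m5]                 # keep detected sources

# Toy input catalog: 50,000 stars with a broad magnitude distribution.
mag_true = rng.normal(22.0, 2.0, size=50_000)
mag_obs = forward_model(mag_true)
```

Comparing the resulting `mag_obs` distribution (and, with more bands, colors) against the real survey is what lets global mismatches, and rare outliers, stand out.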