Tuesdays 10:30 - 11:30 | Fridays 11:30 - 12:30
Showing votes from 2018-02-09 12:30 to 2018-02-13 11:30 | Next meeting is Tuesday Aug 5th, 10:30 am.
We present a gauge-invariant formalism of cosmological weak lensing, accounting for all the relativistic effects due to the scalar, vector, and tensor perturbations at linear order. While light propagation is fully described by the geodesic equation, relating the photon wavevector to physical quantities requires specifying the frames in which they are defined. By constructing local tetrad bases at the observer and source positions, we clarify the relation of the weak lensing observables such as the convergence, the shear, and the rotation to the physical size and shape defined in the source rest frame and to the observed angle and redshift measured in the observer rest frame. Compared to the standard lensing formalism, additional relativistic effects contribute to all the lensing observables. We explicitly verify the gauge invariance of the lensing observables and compare our results to previous work. In particular, we demonstrate that even in the presence of vector and tensor perturbations, the physical rotation of the lensing observables vanishes at linear order, while the tetrad basis rotates along the light propagation relative to the FRW coordinate. Though the latter is often used as a probe of primordial gravitational waves, the rotation of the tetrad basis is not a physical observable. We further clarify its relation to the E-B decomposition in weak lensing. Our formalism provides a transparent and comprehensive perspective on cosmological weak lensing.
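For reference, the lensing observables named in the abstract (convergence, shear, rotation) are conventionally defined through the decomposition of the image-distortion matrix; in one common sign convention,

```latex
\mathcal{D}_{ab} \;=\; \frac{\partial \beta_a}{\partial \theta_b}
\;=\;
\begin{pmatrix}
1-\kappa-\gamma_1 & -\gamma_2+\omega \\
-\gamma_2-\omega  & 1-\kappa+\gamma_1
\end{pmatrix},
```

where $\kappa$ is the convergence, $(\gamma_1,\gamma_2)$ the shear components, and $\omega$ the (antisymmetric) rotation. This is the standard textbook decomposition, not the paper's full gauge-invariant construction, which additionally ties $\beta_a$ and $\theta_b$ to the source and observer rest-frame tetrads; sign conventions vary between references.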
The Hubble constant ($H_0$) estimated from the local Cepheid-supernova (SN) distance ladder is in 3-$\sigma$ tension with the value extrapolated from cosmic microwave background (CMB) data assuming the standard cosmological model. Whether this tension represents new physics or systematic effects is the subject of intense debate. Here, we investigate how new, independent $H_0$ estimates can arbitrate this tension, assessing whether the measurements are consistent with being derived from the same model using the posterior predictive distribution (PPD). We show that, with existing data, the inverse distance ladder formed from BOSS baryon acoustic oscillation measurements and the Pantheon SN sample yields an $H_0$ posterior nearly identical to the Planck CMB measurement. The observed local distance ladder value is a very unlikely draw from the resulting PPD. Turning to the future, we find that a sample of $\sim50$ binary neutron star "standard sirens" (detectable within the next decade) will be able to adjudicate between the local and CMB estimates.
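A posterior-predictive check of this kind can be sketched in a few lines. The sketch below is illustrative only: the inverse-distance-ladder posterior is approximated as a Gaussian, and all numerical values (posterior mean/width, local measurement and its error) are assumptions for the example, not the paper's results.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed values (illustrative, km/s/Mpc):
post_mean, post_sd = 67.4, 0.5      # Gaussian stand-in for the ISW-free H0 posterior
local_H0, local_sd = 73.5, 1.4      # stand-in for the local distance-ladder value

# Posterior predictive distribution: draw a parameter value from the
# posterior, then a replicated local-type measurement given that value.
theta = rng.normal(post_mean, post_sd, size=1_000_000)
ppd = rng.normal(theta, local_sd)

# Tail probability of drawing a replicated measurement at least as
# high as the observed local value.
p_tail = np.mean(ppd >= local_H0)
print(f"PPD tail probability: {p_tail:.2e}")
```

A small tail probability indicates that the local value is an unlikely draw from the PPD, i.e., that the two measurements are hard to reconcile under one model.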
We develop a methodology to optimise the measurement of Baryon Acoustic Oscillations (BAO) from a given galaxy sample. In our previous work, we demonstrated that one can measure BAO from tracers in under-dense regions (voids). In this study, we combine the over-dense and under-dense tracers (galaxies & voids) to obtain better constraints on the BAO scale. To this end, we develop a generalised de-wiggled BAO model with an additional parameter to describe both the BAO peak and the underlying exclusion pattern of void 2PCFs. We show that after applying BAO reconstruction to galaxies, the BAO peak scale of both galaxies and voids is unbiased under the modified model. Furthermore, we exploit a new description of the combined 2PCF for a multi-tracer analysis with galaxies and voids. In simulations, the joint sample improves the constraint on the post-reconstruction BAO peak position by more than 10% compared to the result from galaxies alone, which is equivalent to enlarging the survey volume by 20%. Applying this method to the BOSS DR12 data, we obtain an 11% improvement for the low-z sample (0.2 < z < 0.5) but a worse constraint for the high-z sample (0.5 < z < 0.75), consistent with statistical fluctuations for the current survey volume. We further find that a larger sample gives a more robust improvement owing to smaller statistical fluctuations.
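The standard de-wiggled template that the abstract generalises can be sketched as follows. Here the extra free amplitude `c` is a hypothetical stand-in for the additional parameter mentioned in the abstract (whose exact form is defined in the paper), and the toy power spectra are invented for illustration.

```python
import numpy as np

def dewiggled_template(k, P_lin, P_nw, Sigma_nl, c=1.0):
    """Generalised de-wiggled power-spectrum template.

    The standard template damps the BAO wiggles (P_lin - P_nw) with a
    Gaussian of width Sigma_nl; the extra amplitude `c` (illustrative)
    rescales the wiggle component so one template can describe both the
    galaxy BAO peak and a void-like exclusion pattern.
    """
    return P_nw + c * (P_lin - P_nw) * np.exp(-0.5 * (k * Sigma_nl) ** 2)

# Toy inputs: a smooth power law plus sinusoidal "wiggles".
k = np.linspace(0.01, 0.3, 500)                   # h/Mpc
P_nw = 1e4 * (k / 0.1) ** -1.5                    # no-wiggle (broadband) part
P_lin = P_nw * (1 + 0.05 * np.sin(k / 0.0105))    # linear P(k) with wiggles

P0 = dewiggled_template(k, P_lin, P_nw, Sigma_nl=0.0)    # undamped -> P_lin
Pinf = dewiggled_template(k, P_lin, P_nw, Sigma_nl=1e3)  # fully damped -> P_nw
```

The two limits make the template's behaviour explicit: zero damping recovers the linear spectrum, and infinite damping leaves only the smooth broadband.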
The dark matter (DM) minihalo around a massive black hole (MBH) can be redistributed into a cusp, called the DM minispike. A stellar-mass compact object inspiraling into such an MBH (of $10^3 \sim 10^5$ solar masses) harbored in a DM minispike forms an intermediate- or extreme-mass-ratio inspiral (IMRI or EMRI). The gravitational waves (GWs) produced by such systems will be important sources for space-based interferometers such as LISA, Taiji and Tianqin. We find that, due to the gravitational pull and dynamical friction of the DM minispike, the merger time of IMRIs and EMRIs is dramatically reduced. Our analysis shows that this effect can greatly increase the event rates of IMRIs for space-based GW detectors compared with previous estimates. We point out that, based solely on IMRI detections by LISA, Taiji and Tianqin, one can constrain the density profile and physical models of dark matter. Furthermore, because small objects merge faster with intermediate-mass black holes, these black holes grow more quickly, so the mass distribution of MBHs will differ from the commonly held expectation.
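The merger-time reduction can be illustrated with a toy circular-orbit calculation that adds Chandrasekhar dynamical friction against a power-law minispike to the Peters (1964) gravitational-wave decay rate. All spike parameters below (density normalisation, slope, Coulomb logarithm) and the masses are illustrative assumptions, not the paper's values.

```python
import numpy as np

G, c = 6.674e-11, 2.998e8            # SI units
Msun = 1.989e30
M, m = 1e4 * Msun, 10 * Msun         # MBH and compact-object masses (assumed)
lnLambda = 10.0                      # Coulomb logarithm (assumed)
rho0, r0, alpha = 1e-3, 5e9, 7 / 3   # spike rho(r) = rho0*(r0/r)^alpha (assumed, SI)

def adot(a, with_df):
    # Peters circular-orbit GW decay rate da/dt
    da = -(64 / 5) * G**3 * m * M * (M + m) / (c**5 * a**3)
    if with_df:
        v = np.sqrt(G * M / a)                                # orbital speed
        rho = rho0 * (r0 / a) ** alpha                        # local DM density
        P_df = 4 * np.pi * G**2 * m**2 * rho * lnLambda / v   # friction power
        da -= P_df * 2 * a**2 / (G * M * m)                   # via dE/da = GMm/(2a^2)
    return da

def merger_time(a0, af, with_df):
    a, t = a0, 0.0
    while a > af:
        da = adot(a, with_df)
        dt = 1e-3 * a / abs(da)      # shrink a by ~0.1% per step
        a += da * dt
        t += dt
    return t

t_gw = merger_time(5e9, 5e8, with_df=False)   # GW emission only
t_dm = merger_time(5e9, 5e8, with_df=True)    # GW + DM dynamical friction
print(f"GW only: {t_gw:.3e} s;  with DM spike: {t_dm:.3e} s")
```

Since dynamical friction only removes additional orbital energy, the inspiral with the minispike always completes sooner; how much sooner depends entirely on the assumed spike density profile, which is what makes the merger time a probe of DM models.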
Compressing large data sets to a manageable number of summaries that are informative about the underlying parameters vastly simplifies both frequentist and Bayesian inference. When only simulations are available, these summaries are typically chosen heuristically, so they may inadvertently miss important information. We introduce a simulation-based reinforcement learning technique that trains artificial neural networks to find non-linear functionals of data that maximize Fisher information: information maximizing neural networks (IMNNs). In test cases where the posterior can be derived exactly, likelihood-free inference based on automatically derived IMNN summaries produces nearly exact posteriors, showing that these summaries are good approximations to sufficient statistics. In a series of numerical examples of increasing complexity and astrophysical relevance we show that IMNNs are robustly capable of automatically finding optimal, non-linear summaries of the data even in cases where linear compression fails: inferring the variance of a Gaussian signal in the presence of noise; inferring cosmological parameters from mock simulations of the Lyman-$\alpha$ forest in quasar spectra; and inferring frequency-domain parameters from LISA-like detections of gravitational waveforms. In this final case, the IMNN summary outperforms linear data compression by avoiding the introduction of spurious likelihood maxima. We anticipate that the automatic physical inference method described in this paper will be essential to obtain both accurate and precise cosmological parameter estimates from complex and large astronomical data sets, including those from LSST, Euclid, and WFIRST.
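The Fisher-information criterion at the heart of the IMNN idea can be demonstrated without a neural network: for data $d \sim \mathcal{N}(\theta\mathbf{1}, I_n)$ and a linear compression $s = w\cdot d$, the Fisher information $F = \mu_{,\theta}^{\mathsf T} C^{-1} \mu_{,\theta}$ (estimated from seed-matched simulations, as in the IMNN training scheme) is maximised by the sample mean ($w \propto \mathbf{1}$). The sketch below is a toy illustration of that objective, not the paper's network or code.

```python
import numpy as np

rng = np.random.default_rng(1)
n, n_sims, delta = 100, 2000, 0.1    # data size, #sims, derivative step
theta_fid = 0.0

# Shared noise realisations: the same seeds are used at the fiducial and
# perturbed parameter values, so derivative estimates are low-variance.
noise = rng.normal(size=(n_sims, n))

def fisher(w):
    w = w / np.linalg.norm(w)
    s_fid = (noise + theta_fid) @ w                  # summaries at the fiducial
    # Seed-matched central difference for the mean derivative of the summary
    ds_dtheta = ((noise + theta_fid + delta) @ w
                 - (noise + theta_fid - delta) @ w).mean() / (2 * delta)
    C = s_fid.var(ddof=1)                            # summary covariance (1x1)
    return ds_dtheta**2 / C                          # F = mu_,t^T C^-1 mu_,t

F_mean = fisher(np.ones(n))           # optimal linear summary: the sample mean
F_rand = fisher(rng.normal(size=n))   # an arbitrary linear summary
print(F_mean, F_rand)
```

An IMNN replaces the fixed weight vector with a trained network and maximises $|F|$ of the network outputs, which is what lets it find non-linear summaries in cases (like variance inference) where every linear compression fails.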
Differential equations with distributional sources---in particular, involving delta distributions and/or derivatives thereof---have become increasingly ubiquitous in numerous areas of physics and applied mathematics. It is often of considerable interest to obtain numerical solutions for such equations, but the singular ("point-like") modeling of the sources in these problems typically introduces nontrivial obstacles for devising a satisfactory numerical implementation. A common method to circumvent these is through some form of delta function approximation procedure on the computational grid, yet this strategy often carries significant limitations. In this paper, we present an alternative technique for tackling such equations: the "Particle-without-Particle" method. Previously introduced in the context of the self-force problem in gravitational physics, the idea is to discretize the computational domain into two (or more) disjoint pseudospectral (Chebyshev-Lobatto) grids in such a way that the "particle" (the singular source location) is always at the interface between them; in this way, one only needs to solve homogeneous equations in each domain, with the source effectively replaced by jump (boundary) conditions thereon. We prove here that this method is applicable to any linear PDE (of arbitrary order) the source of which is a linear combination of one-dimensional delta distributions and derivatives thereof supported at an arbitrary number of particles. We furthermore apply this method to obtain numerical solutions for various types of distributionally-sourced PDEs: we consider first-order hyperbolic equations with applications to neuroscience models (describing neural populations), parabolic equations with applications to financial models (describing price formation), second-order hyperbolic equations with applications to wave acoustics, and finally elliptic (Poisson) equations.
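The core idea — homogeneous equations on subdomains joined by jump conditions at the particle — is easy to see in the simplest distributionally-sourced problem, the Green's function equation $u''(x)=\delta(x-x_p)$ on $[0,1]$ with $u(0)=u(1)=0$. The sketch below is an illustrative one-dimensional elliptic example assembled by the author of this summary, not the paper's code: two Chebyshev-Lobatto grids meet at $x_p$, each solves $u''=0$, and the delta source is imposed as $[u]=0$, $[u']=1$ at the interface.

```python
import numpy as np

def cheb(N):
    """Chebyshev-Lobatto differentiation matrix and nodes on [-1, 1]
    (Trefethen, 'Spectral Methods in MATLAB'); nodes run from +1 to -1."""
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    dX = x[:, None] - x[None, :]
    D = np.outer(c, 1 / c) / (dX + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))
    return D, x

N, xp = 16, 0.3                      # grid order, particle position
D, x = cheb(N)

# Map [-1,1] onto [0,xp] (domain 1) and [xp,1] (domain 2); nodes descend,
# so index 0 of domain 1 sits at xp and index N of domain 2 sits at xp.
x1, D1 = xp * (x + 1) / 2, (2 / xp) * D
x2, D2 = xp + (1 - xp) * (x + 1) / 2, (2 / (1 - xp)) * D
D1sq, D2sq = D1 @ D1, D2 @ D2

n = N + 1
A = np.zeros((2 * n, 2 * n))
b = np.zeros(2 * n)
A[1:N, :n] = D1sq[1:N]               # u'' = 0 at interior nodes of domain 1
A[n + 1:n + N, n:] = D2sq[1:N]       # u'' = 0 at interior nodes of domain 2
A[0, N] = 1.0                        # u(0) = 0  (last node of domain 1)
A[N, n] = 1.0                        # u(1) = 0  (first node of domain 2)
A[n, 0], A[n, n + N] = 1.0, -1.0     # continuity: u1(xp) = u2(xp)
A[n + N, n:] = D2[N]                 # jump condition [u'] = 1 at xp:
A[n + N, :n] -= D1[0]                #   u2'(xp) - u1'(xp) = 1
b[n + N] = 1.0
u = np.linalg.solve(A, b)

# Exact Green's function for comparison.
xs = np.hstack([x1, x2])
exact = np.where(xs <= xp, (xp - 1) * xs, xp * (xs - 1))
err = np.max(np.abs(u - exact))
print(f"max error: {err:.2e}")
```

Because no node ever needs to represent the delta itself, the method retains spectral accuracy; here the piecewise-linear exact solution is recovered to roundoff.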