objective Bayes; really old data

Today was day two with the Galactic Center Group at UCLA. Again, a huge argument about priors broke out. As my loyal reader knows, I am a subjective Bayesian, not an objective Bayesian. Or more correctly “I don't always adopt Bayes, but when I do, I adopt subjective Bayes!” But the argument was about the best way to set objective-Bayes priors. My position is that you can't set them in the space of your parameters, because your parameterization itself is subjective. So you have to set them in the space of your data. That's exactly what the Galactic Center Group at UCLA is doing, and they can show that it gives them much better results (in terms of bias and coverage) than setting the priors in dumber “flat” ways (which is standard in the relevant literature).
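
To make the point concrete, here is a toy sketch (my own illustration, with a made-up mapping from parameter to predicted datum, not anything from the Group's code): a prior that is flat in a parameter is generally not flat in the space of the predicted data, and vice versa, so “flat” is itself a subjective, parameterization-dependent choice.

```python
# Toy illustration: a prior that is flat in a parameter theta is not flat in
# the space of a predicted observable d = f(theta); "flat" depends on where
# you impose it. The mapping f is hypothetical.
import numpy as np

rng = np.random.default_rng(42)

def predict(theta):
    # hypothetical mapping from parameter to predicted datum
    return theta ** 3

# flat prior in parameter space, pushed forward to the data space
theta_flat = rng.uniform(0.0, 2.0, size=100000)
d_from_theta_flat = predict(theta_flat)

# flat prior in data space, pulled back to the parameter
# (equivalent to p(theta) proportional to |df/dtheta| = 3 theta^2)
d_flat = rng.uniform(0.0, 8.0, size=100000)
theta_from_d_flat = d_flat ** (1.0 / 3.0)

# the two choices imply very different distributions in the "other" space
print("mean predicted datum, flat-in-theta prior:", d_from_theta_flat.mean())
print("mean theta, flat-in-data prior:           ", theta_from_d_flat.mean())
```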

One incredible thing about the work of this group is that they are still using, and still re-reducing, imaging data taken in the 1990s! That makes them an amazing example of data curation, preservation, reproducibility, workflow, and so on. For this reason, there were information scientists at the meeting this week. It raises an interesting consideration when thinking about how a telescope facility is going to be used: Will your data still be interesting 22 years from now? In the case of the Galactic Center, the answer turns out to be a resounding yes.


Galactic Center review

I spent the day at UCLA, reviewing the data-analysis work of the Galactic Center Group there, for reporting to the Keck Foundation. It was a great day on a great project. They have collected large amounts of data (for more than 20 years!), both imaging and spectroscopy, to tie down the orbits of the stars near the Galactic Center black hole, and also to tie down the Newtonian reference frame. The approach is to process imaging and spectroscopy into astrometric and kinematic measurements, and then fit those measurements with a physical model. Among the highlights of the day were arguments about priors on orbital parameters, and descriptions of post-Newtonian terms that matter if you want to test General Relativity. Or test for the presence of dark matter concentrated at the center of the Galaxy.


the assumptions underlying EPRV

The conversation on Friday with Cisewski and Bedell got me thinking all weekend. It appears that the problem of precise RV difference measurement becomes ill-posed once we permit the stellar spectrum to vary with time. I felt like I nearly had a breakthrough on this today. Let me start by backing up.

It is impossible to obtain extremely accurate absolute radial velocities (RVs) of stars, because to get an absolute RV, you need a spectral model that puts the centroids of the absorption lines in precisely the correct locations. Right now, physical models of convecting photospheres have imperfections that lead to small systematic differences in line shapes, depths, and locations between the models of stars and the observations of stars. Opinions vary, but most astronomers would agree that this limits absolute RV accuracy at the 0.3-ish km/s level (not the m/s level; the km/s level).

How is it, then, that we measure at the m/s level with extreme-precision RV (EPRV) projects? The answer is that as long as the stellar spectrum doesn't change with time, we can measure relative velocity changes to essentially arbitrary precision! That has been an incredibly productive realization, leading as it did to the discovery, confirmation, or characterization of many hundreds of planets around other stars!

The issue is: Stellar spectra do change with time! There is activity, and also turbulent convection, and also rotation. This throws a wrench into long-term EPRV plans. It might even partially explain why current EPRV projects never beat m/s accuracy, even when the data (on the face of it) seem good enough to do better. Now the question is: Do the time variations of stellar spectra put an absolute floor on relative-RV measurement? That is, do they limit the ultimate precision?

I think the answer is no. But the Right Thing To Do (tm) might be hard. It will involve making some new assumptions. No longer will we assume that the stellar spectrum is constant with time. But we will have to assume that spectral variations are somehow uncorrelated (in the long run) with exoplanet phase. We might also have to assume that the exoplanet-induced RV variations are dynamically predictable. Time to work out exactly what we need to assume and how.
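
Here is a toy numerical sketch of why I think the answer is no (all numbers invented; this is not the real analysis): if the variability-induced RV excursions are uncorrelated with orbital phase, they average down, and a fit at the known period still recovers the planet amplitude.

```python
# Toy sketch with made-up numbers: stellar-variability "noise" that is
# uncorrelated with orbital phase averages down, so a sinusoid fit at the
# known period still recovers the planet amplitude.
import numpy as np

rng = np.random.default_rng(0)
n_epochs = 400
t = np.sort(rng.uniform(0.0, 1000.0, n_epochs))    # days
period, K = 37.3, 3.0                               # days, m/s (hypothetical planet)
phase = 2.0 * np.pi * t / period

planet = K * np.sin(phase)
# variability on a different, unrelated timescale plus white noise
variability = 2.5 * np.sin(2.0 * np.pi * t / 211.0) + rng.normal(0.0, 1.0, n_epochs)
rv = planet + variability                           # observed relative RVs

# linear least squares for the sine and cosine amplitudes at the planet period
A = np.vstack([np.sin(phase), np.cos(phase), np.ones(n_epochs)]).T
coeffs, *_ = np.linalg.lstsq(A, rv, rcond=None)
K_hat = np.hypot(coeffs[0], coeffs[1])
print(f"true K = {K:.2f} m/s, recovered K = {K_hat:.2f} m/s")
```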


all about radial velocities

The day started with a conversation among Stuermer (Chicago), Montet (Chicago), Bedell (Flatiron), and me about the problem of deriving radial velocities from two-d spectroscopic images rather than going through one-d extractions. We tried to find scope for a minimal paper on the subject.

The day ended with a great talk by Jessi Cisewski (Yale) about topological data analysis. She finally convinced me that there is some there there. I asked about using automation to find the best statistics, and she agreed that it must be possible. Afterwards, Ben Wandelt (Paris) told me he has a nearly-finished project on this very subject. Before Cisewski's talk, she spoke to Bedell and me about our EPRV plans. That conversation got me concerned about the non-identifiability of radial velocity if you let the stellar spectrum vary with time. Hmm.


what's the circular acceleration?

Ana Bonaca (Harvard) and I started the day with a discussion that was in part about how to present the enormous, combinatoric range of results we have created with our information-theory project. One tiny point there: How do you define the equivalent of the circular velocity in a non-axi-symmetric potential? There is no clear answer. One option is to average the acceleration around a circular ring. Another is to use v^2/R locally, with the full magnitude of the acceleration. Another is to do the same locally, but using only the radial component of the acceleration.
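
Here is a toy sketch of the three candidate definitions, using a flattened logarithmic potential of my own choosing (not the potentials in our project):

```python
# Three candidate "circular velocity" definitions in a non-axisymmetric toy
# potential: Phi = 0.5 v0^2 ln(x^2 + (y/q)^2 + rc^2). All numbers invented.
import numpy as np

v0, rc, q = 220.0, 1.0, 0.8   # km/s, kpc, axis ratio (hypothetical)

def acceleration(x, y):
    """Minus the gradient of the toy potential."""
    s = x**2 + (y / q)**2 + rc**2
    ax = -v0**2 * x / s
    ay = -v0**2 * y / (q**2 * s)
    return ax, ay

R = 8.0                                   # kpc, radius of the test ring
phi = np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False)
x, y = R * np.cos(phi), R * np.sin(phi)
ax, ay = acceleration(x, y)
a_rad = -(ax * np.cos(phi) + ay * np.sin(phi))      # inward radial component
a_mag = np.hypot(ax, ay)

v_ring   = np.sqrt(R * a_rad.mean())      # ring-averaged radial acceleration
v_local  = np.sqrt(R * a_mag)             # local v^2/R = |a|
v_radial = np.sqrt(R * a_rad)             # local radial component only

print("ring-averaged definition: %.1f km/s" % v_ring)
print("local |a| definition:     %.1f to %.1f km/s" % (v_local.min(), v_local.max()))
print("local radial definition:  %.1f to %.1f km/s" % (v_radial.min(), v_radial.max()))
```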

While I was proctoring an exam, Megan Bedell (Flatiron) wrote me to say that our one-d, data-driven spectroscopic RV extraction code is now performing almost as well as the HARPS pipeline, on real data. That's exciting. We had a short conversation about extending our analysis to more stars to make the point better. We believe that our special sauce is our treatment of the tellurics, but we are not yet certain of this.


Gaia-based training data, GANs, and optical interferometry

In today's Gaia DR2 working meeting, I worked with Christina Eilers (MPIA) to build the APOGEE+TGAS training set we could use to train her post-Cannon model of stellar spectra. The important idea behind the new model is that we are no longer trying to specify the latent parameters that control the spectral generation; we are using uninterpreted latents. For this reason, we don't need complete labels (or any labels!) for the training set. That means we can train on, and predict, any labels or label subset we like. We are going to use absolute magnitude, and thereby put distances onto all APOGEE giants. And thereby map the Milky Way!
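
The label bookkeeping behind this is simple; something like the following sketch (hypothetical numbers, and ignoring the subtleties of inverting noisy parallaxes, which matter in practice):

```python
# Sketch of the label bookkeeping: absolute magnitudes for the training set
# come from TGAS parallaxes, and a predicted absolute magnitude for a test
# star turns back into a distance. Numbers are invented; the naive 1/parallax
# inversion here ignores the noise-induced bias that a careful job must treat.
import numpy as np

def absolute_magnitude(apparent_mag, parallax_mas):
    """M = m - 5 log10(d / 10 pc), with d in pc from a parallax in mas."""
    distance_pc = 1000.0 / parallax_mas
    return apparent_mag - 5.0 * np.log10(distance_pc / 10.0)

def distance_pc(apparent_mag, absolute_mag):
    """Invert the distance modulus for a test star with a predicted M."""
    return 10.0 ** (0.2 * (apparent_mag - absolute_mag) + 1.0)

# training star: m = 11.2, parallax = 2.5 mas  ->  M label for training
print(absolute_magnitude(11.2, 2.5))
# test star: predicted M = 0.3 from the spectral model, observed m = 13.0
print(distance_pc(13.0, 0.3))
```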

In stars group meeting, Richard Galvez (NYU) started a lively discussion by showing how generative adversarial networks work and giving some impressive examples on astronomical imaging data. This led into some good discussion about uses and abuses of complex machine-learning methods in astrophysics.

Also in stars meeting, Oliver Pfuhl (MPA) described to us how the VLT four-telescope interferometric imager GRAVITY works. It is a tremendously difficult technical problem to perform interferometric imaging in the optical: You have to keep everything aligned in real time to a tiny fraction of a micron, and you have little carts with mirrors zipping down tunnels at substantial speeds! The instrument is incredibly impressive: It is performing milli-arcsecond astrometry of the Galactic Center, and it can see the star S2 move on a weekly basis!


purely geometric spectroscopic parallaxes

Today was a low research day; it got cut short. But Eilers made progress on the semi-supervised GPLVM model we have been working on. One thing we have been batting around is scope for this paper. Scope is challenging, because the GPLVM is not going to be high performance for big problems. Today we conceived a scope that is a purely geometric spectroscopic parallax method. That is, a spectroscopic parallax method (inferring distances from spectra) that makes no use of stellar physical models whatsoever, not even in training!


Spitzer death; nearest neighbors

Today was spent at the Spitzer Science Center for the 39th meeting of the Oversight Committee, on which I have served since 2008. This meeting was just like every other: I learned a huge amount! This time about how the mission comes to a final end, with the exercise of various un-exercised mechanisms, and then the expenditure of all propellants and batteries. We discussed also the plans for the final proposal call, and the fitness of the observatory to observe way beyond its final day. On that latter note: We learned that NASA will transfer operations of Spitzer to a third party, for about a million USD per month. That's an interesting opportunity for someone. Or some consortium.

In unrelated news, Christina Eilers (MPIA) executed a very simple (but unprecedented) idea today: She asked what would happen with a data-driven model of stellar spectra (APOGEE data) if the model is simply nearest neighbor: That is, if each test-set object is given the labels of its nearest (in a chi-squared sense) training-set object. The answer is impressive: the nearest-neighbor method is only slightly worse than the quadratic data-driven model known as The Cannon. This all relates to the point that most machine-learning methods are—in some sense—nearest-neighbor methods!
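
For the record, here is a minimal sketch of the method (with fake arrays standing in for APOGEE spectra and labels):

```python
# Minimal sketch of the nearest-neighbor "model": each test spectrum gets the
# labels of the training spectrum that is closest in an inverse-variance-
# weighted chi-squared sense. Arrays are fake stand-ins for APOGEE data.
import numpy as np

def nearest_neighbor_labels(test_flux, test_ivar, train_flux, train_labels):
    """test_flux, test_ivar: (n_test, n_pix); train_flux: (n_train, n_pix);
    train_labels: (n_train, n_labels). Returns (n_test, n_labels)."""
    assigned = np.empty((test_flux.shape[0], train_labels.shape[1]))
    for i, (f, w) in enumerate(zip(test_flux, test_ivar)):
        chi2 = np.sum(w * (train_flux - f) ** 2, axis=1)  # one chi2 per training star
        assigned[i] = train_labels[np.argmin(chi2)]
    return assigned

# fake data just to show the call signature
rng = np.random.default_rng(1)
train_flux = rng.normal(1.0, 0.01, (500, 100))
train_labels = rng.normal(0.0, 1.0, (500, 3))     # e.g. scaled Teff, logg, [Fe/H]
test_flux = train_flux[:10] + rng.normal(0.0, 0.005, (10, 100))
test_ivar = np.full_like(test_flux, 1.0 / 0.005 ** 2)
print(nearest_neighbor_labels(test_flux, test_ivar, train_flux, train_labels).shape)
```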


seeing giants shrink in real time? the dark matter

At parallel-working session in my office at NYU, I worked with Lauren Blackburn (TransPerfect) to specify a project on clustering and classification of red-giant asteroseismic spectra. The idea (from Tim Bedding's group at Sydney) is to distinguish the stars that are going up the red-giant branch from the ones coming down. Blackburn asked if we could just see the spectra change with time for the stars coming down. I said “hell no” and then we wondered: Maybe? That's not the plan, but we certainly should check that!

In the NYU Astro Seminar, Vera Glusevic (IAS) gave a great talk on inferring the physical properties of the dark matter (that is, not just the mass and cross-section, but real interaction parameters in natural models). She showed that combinations of different direct-detection targets, being differently sensitive to spin-dependent interactions, could be very discriminating. But she did have to assume large cross sections, so her results are technically optimistic. She then blew us away with strong limits on dark-matter models using the CMB (and the dragging of nuclei by dark-matter particles in the early universe). Great stuff, and it rules out some locally popular models!

Late in the day, Bedell and I did a writing workshop on our EPRV paper. We got a tiny bit done, which shouldn't really be called “tiny”: it was a significant achievement. Writing is hard.


so much Gaussian processes

The day was all GPs. Markus Bonse (Darmstadt) showed various of us very promising GPLVM results for spectra, where he is constraining part of the (usually unobserved) latent space to look like the label space (like stellar parameters). This fits into the set of things we are doing to enrich the causal structure of existing machine-learning methods, to make them more generalizable and interpretable. In the afternoon, Dan Foreman-Mackey (Flatiron) found substantial issues with GP code written by me and Christina Eilers (MPIA), forcing Eilers and me to re-derive and re-write some analytic derivatives. That hurt! Especially since the derivatives involve some hand-coded sparse linear algebra. But right at the end of the day (with about 90 seconds to spare), we got the new derivatives working in the fixed code. Feelings were triumphant.
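
The generic version of the check that saves you here is to compare the analytic derivative against finite differences. Here is a sketch with a dense squared-exponential kernel (our actual code uses a sparse kernel and different parameters, so this is only the idea):

```python
# Sanity check of an analytic gradient of a GP marginal likelihood against
# finite differences, for a dense squared-exponential kernel (toy data).
import numpy as np

def kernel(x, lnA, lnl, sigma2):
    A, l = np.exp(lnA), np.exp(lnl)
    r2 = (x[:, None] - x[None, :]) ** 2
    return A * np.exp(-0.5 * r2 / l**2) + sigma2 * np.eye(len(x))

def lnlike(x, y, lnA, lnl, sigma2):
    K = kernel(x, lnA, lnl, sigma2)
    alpha = np.linalg.solve(K, y)
    sign, logdet = np.linalg.slogdet(K)
    return -0.5 * y @ alpha - 0.5 * logdet - 0.5 * len(x) * np.log(2.0 * np.pi)

def dlnlike_dlnA(x, y, lnA, lnl, sigma2):
    # dL/dtheta = 0.5 y^T K^-1 (dK/dtheta) K^-1 y - 0.5 tr(K^-1 dK/dtheta)
    A, l = np.exp(lnA), np.exp(lnl)
    r2 = (x[:, None] - x[None, :]) ** 2
    dK = A * np.exp(-0.5 * r2 / l**2)          # dK/d(lnA); noise term drops out
    K = kernel(x, lnA, lnl, sigma2)
    Kinv_y = np.linalg.solve(K, y)
    return 0.5 * Kinv_y @ dK @ Kinv_y - 0.5 * np.trace(np.linalg.solve(K, dK))

rng = np.random.default_rng(3)
x = np.sort(rng.uniform(0.0, 10.0, 40))
y = np.sin(x) + rng.normal(0.0, 0.1, 40)
lnA, lnl, sigma2, eps = 0.1, 0.2, 0.01, 1e-6
numeric = (lnlike(x, y, lnA + eps, lnl, sigma2) -
           lnlike(x, y, lnA - eps, lnl, sigma2)) / (2.0 * eps)
print(dlnlike_dlnA(x, y, lnA, lnl, sigma2), numeric)   # should agree closely
```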


what's our special sauce? and Schwarzschild modeling

My day started with Dan Foreman-Mackey (Flatiron) smacking me down about my position that it is causal structure that makes our data analyses and inferences good. The context is: Why don't we just turn on the machine learning (like convnets and GANs and so on)? My position is: We need to make models that have correct causal structure (like noise sources and commonality of nuisances and so on). But his position is that, fundamentally, it is because we control model complexity well (which is hard to do with extreme machine-learning methods) and we have a likelihood function: We can compute a probability in the space of the data. This gets back to old philosophical arguments that have circulated around my group for years. Frankly, I am confused.

In our Gaia DR2 prep meeting, I had a long conversation with Wyn Evans (Cambridge) about detecting and characterizing halo substructure with a Schwarzschild model. I laid out a possible plan (pictured below). It involves some huge numbers, so I need some clever data structures to trim the tree before we compute 10^20 data–model comparisons!
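
One crude version of the trimming I have in mind (a toy, not a worked-out plan) is to index the orbit samples with a KD-tree and only do the expensive comparisons for star–orbit-point pairs that are close in the data space:

```python
# Toy sketch of KD-tree pruning: only compare each observed star to the model
# orbit points that land within some tolerance, instead of all pairs. All
# arrays here are fake stand-ins.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(7)
orbit_points = rng.normal(0.0, 1.0, (200000, 3))   # sampled (toy) orbit positions
orbit_index = rng.integers(0, 1000, 200000)        # which orbit each point belongs to
stars = rng.normal(0.0, 1.0, (5000, 3))            # observed (toy) star positions

tree = cKDTree(orbit_points)
matches = tree.query_ball_point(stars, r=0.05)     # candidate orbit points per star

# only the surviving pairs would go into the full likelihood comparison
n_pairs = sum(len(m) for m in matches)
print("full comparison count:", stars.shape[0] * orbit_points.shape[0])
print("after KD-tree pruning:", n_pairs)
```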

Late in the day, I worked with Christina Eilers (MPIA) to speed up her numpy code. We got a factor of 40! (Primarily by capitalizing on sparseness of some operators to make the math faster.)
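
The flavor of the speed-up is the usual one (toy example, not Eilers's actual operators): when an operator is mostly zeros, a sparse representation skips the zero entries entirely.

```python
# Toy comparison of a dense vs sparse matrix-vector product; timings are
# rough, but the sparse product only touches the ~5 nonzeros per row.
import time
import numpy as np
from scipy import sparse

n = 3000
rng = np.random.default_rng(2)
dense = np.zeros((n, n))
idx = rng.integers(0, n, (5 * n, 2))
dense[idx[:, 0], idx[:, 1]] = 1.0                 # roughly 5 nonzeros per row
sparse_op = sparse.csr_matrix(dense)
v = rng.normal(size=n)

t0 = time.perf_counter(); _ = dense @ v;     t1 = time.perf_counter()
_ = sparse_op @ v;                           t2 = time.perf_counter()
print(f"dense: {t1 - t0:.5f} s, sparse: {t2 - t1:.5f} s")
```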


empirical yields; galaxy alignments; linear algebra foo.

Early in the day, Kathryn Johnston (Columbia) convened the local Local Group group (yes, I wrote that right) at Columbia. We had presentations from various directions (and I could only be there for half of the day). Subjective highlights for me included the following: Andrew Emerick (Columbia) showed that there is a strong prediction that in dwarf galaxies, AGB-star yields will be distributed differently from supernova yields. That should be observable, and might be an important input to my life-long goal of deriving nucleosynthetic yields from the data (rather than from theory). Wyn Evans (Cambridge) showed that you can measure some statistics of the alignments of dwarf satellite galaxies with respect to their primary-galaxy hosts, using the comparison of the Milky Way and M31. M31 is more constraining, because we aren't sitting near the center of it! These alignments appear to have the right sign (but maybe the wrong amplitude?) to match theoretical predictions.

Late in the day, Christina Eilers (MPIA) showed up and we discussed our code issues with Dan Foreman-Mackey (Flatiron). He noted that we are doing the same linear-algebra operations (or very similar ones) over and over again. We should not use solve; rather, we should use cho_factor once and then cho_solve, which permits fast solves given the pre-existing factorization. He also pointed out that in the places where we have missing data, the factorization can be updated in a fast way rather than fully re-computed. Those are good ideas! As I often like to say, many of my super-powers boil down to just knowing who to ask about linear algebra.
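
In code, the suggestion looks something like this generic sketch (not our actual code):

```python
# Factorize the positive-definite matrix once with cho_factor, then reuse the
# factorization for every right-hand side with cho_solve, instead of calling
# solve repeatedly. Toy matrices only.
import numpy as np
from scipy.linalg import cho_factor, cho_solve

rng = np.random.default_rng(0)
A = rng.normal(size=(500, 500))
K = A @ A.T + 500.0 * np.eye(500)        # symmetric positive definite, like a GP kernel
B = rng.normal(size=(500, 20))           # many right-hand sides

# slow pattern: one full solve per right-hand side
slow = np.column_stack([np.linalg.solve(K, b) for b in B.T])

# fast pattern: factorize once, back-substitute many times
factor = cho_factor(K)
fast = np.column_stack([cho_solve(factor, b) for b in B.T])

print(np.allclose(slow, fast))
```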


self-calibration for EPRV; visualizations of the halo

The morning started with Bedell, Foreman-Mackey, and me devising a self-calibration approach to combining the radial velocities we are getting for the individual orders at the different epochs for a particular star in the HARPS archive. We need inverse variances for weighting in the fit, so we got those too. The velocity-combination model is just like the uber-calibration of the SDSS imaging we did so many years ago. We discussed optimization vs marginalization of nuisances, and decided that the data are going to be good enough that it probably doesn't matter which we do. I have to think about whether we have a think-o there.
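
The fit itself is just a weighted least squares; here is a minimal sketch with invented numbers (per-epoch velocity plus per-order offset, inverse-variance weighted):

```python
# Sketch of the self-calibration fit: model the per-order, per-epoch
# velocities as (epoch velocity) + (order offset) and solve the inverse-
# variance-weighted least squares in one shot. All numbers are invented.
import numpy as np

def combine_orders(rv, ivar):
    """rv, ivar: (n_epochs, n_orders). Returns per-epoch velocities and
    per-order offsets. The overall zero point is degenerate (v + c, offset - c);
    lstsq returns the minimum-norm solution, and only velocity differences matter."""
    n_epochs, n_orders = rv.shape
    n_rows = n_epochs * n_orders
    A = np.zeros((n_rows, n_epochs + n_orders))
    y = rv.ravel()
    w = np.sqrt(ivar.ravel())
    rows = np.arange(n_rows)
    A[rows, rows // n_orders] = 1.0                 # epoch-velocity columns
    A[rows, n_epochs + rows % n_orders] = 1.0       # order-offset columns
    params, *_ = np.linalg.lstsq(w[:, None] * A, w * y, rcond=None)
    return params[:n_epochs], params[n_epochs:]

rng = np.random.default_rng(5)
true_v = rng.normal(0.0, 10.0, 30)                  # m/s, 30 epochs
true_off = rng.normal(0.0, 50.0, 70)                # m/s, 70 orders
ivar = np.full((30, 70), 1.0)
rv = true_v[:, None] + true_off[None, :] + rng.normal(0.0, 1.0, (30, 70))
v_hat, off_hat = combine_orders(rv, ivar)
# scatter of the recovered (mean-subtracted) epoch velocities about the truth
print(np.std((v_hat - v_hat.mean()) - (true_v - true_v.mean())))
```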

After that, I worked with Anderson and Belokurov on finding kinematic (phase-space) halo substructure in fake data, in SDSS, and in Gaia DR2. We have been looking at proper motions, because for halo stars, these are better measured than parallaxes! Anderson made some great visualizations of the proper-motion distribution in sky (celestial-coordinate) pixels. Today she made some visualizations of celestial-coordinate distribution in proper-motion pixels. I am betting this latter approach will be more productive. However, Belokurov and I switched roles today, with me arguing for “visualize first, think later” and him arguing for making sensible metrics or models for measuring overdensity significances.

Andy Casey (Monash) is in town! I had a speedy conversation with him about calibration, classification, asteroseismology, and The Cannon.


finding and characterizing halo streams in Gaia

Our weekly Gaia DR2 prep meeting once again got us into long arguments about substructure in the Milky Way halo, how to find it and how to characterize it. Wyn Evans (Cambridge) showed that the halo substructures he has found, when viewed in terms of actions, show larger spreads in action in some trial potentials and smaller spreads in others. Will this lead to constraints on dynamics? Robyn Sanderson (Caltech) thinks so, and so did everyone in the room. Kathryn Johnston (Columbia) and I worked through some ideas for empirical or quasi-empirical stream finding in the data space, some of them inspired by the Schwarzschild-style modeling suggested by Sanderson in my office last Friday. And Lauren Anderson showed plots of Gaia expectations for substructure from simulations, visualized in the data space. We discussed many other things!


gradients in cosmological power, and EPRV

In the morning, Kate Storey-Fisher (NYU) dropped by to discuss our projects on finding anomalies in the large-scale structure. We discussed the use of mocks to build code that will serve as a pre-registration of hypotheses before we test them. We also looked at a few different kinds of anomalies for which we could easily search. One thing we came up with is a generalization of the real-space two-point function estimators currently used in large-scale structure into estimators not just of the correlation function, but also its gradient with respect to spatial position. That is, we could detect arbitrary generalizations of the hemispheric asymmetry seen in Planck but in a large-scale structure survey, and with any scale-dependence (or different gradients at different scales). Our estimator is related to the concept of marked correlation functions, I think.
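
As a crude stand-in for the real estimator (which would weight pairs by their positions, marked-correlation style), here is a toy finite-difference version of the idea: measure xi(r) in the low-x and high-x halves of a box and difference them to approximate a gradient of clustering strength with position.

```python
# Toy finite-difference stand-in for a correlation-function-gradient
# estimator: xi(r) in two halves of a box, differenced along x. The data are
# fake clumps whose richness grows with x, so the "gradient" should be positive.
import numpy as np
from scipy.spatial import cKDTree

def xi_natural(points, xlim, box, r_edges, n_random=4, seed=0):
    """Natural estimator DD/RR - 1 inside a slab xlim[0] <= x < xlim[1]."""
    rng = np.random.default_rng(seed)
    rand = rng.uniform([xlim[0], 0.0, 0.0], [xlim[1], box, box],
                       (n_random * len(points), 3))
    def norm_counts(pts):
        tree = cKDTree(pts)
        cumulative = tree.count_neighbors(tree, r_edges)   # pairs with d <= r
        return np.diff(cumulative) / (len(pts) * (len(pts) - 1.0))
    return norm_counts(points) / norm_counts(rand) - 1.0

box, r_edges = 100.0, np.linspace(2.0, 20.0, 7)
rng = np.random.default_rng(4)
centers = rng.uniform(0.0, box, (400, 3))
members = [c + rng.normal(0.0, 2.0, (2 + int(8 * c[0] / box), 3)) for c in centers]
data = np.vstack(members)

lo = data[data[:, 0] < box / 2]
hi = data[data[:, 0] >= box / 2]
dxi_dx = (xi_natural(hi, (box / 2, box), box, r_edges) -
          xi_natural(lo, (0.0, box / 2), box, r_edges)) / (box / 2)
print(dxi_dx)   # positive-ish values: clustering strength grows with x
```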

Late in the day, Bedell (Flatiron), Montet (Chicago), and Foreman-Mackey (Flatiron) showed great progress on measuring RVs for stars in high-resolution spectroscopy. Their innovation is to simultaneously fit all velocities, a stellar spectrum, and a telluric spectrum, all data-driven. The method scales well (linearly with data size) and seems to suggest that we might beat the m/s barrier in measuring RVs. This hasn't been demonstrated, but the day ended with great hopes. We have been working on this model for weeks or months (depending on how you count) but today all the pieces came together. And it easily generalizes to include various kinds of variability.
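
For concreteness, here is a sketch of the forward model only, in my own notation (the actual method fits the two templates and all the per-epoch velocities simultaneously, which this toy does not do):

```python
# Sketch of the forward model: in log flux, each epoch is a Doppler-shifted
# stellar template plus a telluric template in the observatory frame. The
# templates and wavelengths here are invented.
import numpy as np

C = 299792458.0   # m/s

def predict_epoch(lnwave_obs, lnwave_grid, star_lnflux, telluric_lnflux, rv):
    """Shift the stellar template by ln(1 + rv/c) and add the tellurics."""
    shift = np.log1p(rv / C)
    star = np.interp(lnwave_obs - shift, lnwave_grid, star_lnflux)
    telluric = np.interp(lnwave_obs, lnwave_grid, telluric_lnflux)
    return star + telluric

# toy templates on a common ln-wavelength grid (one stellar line, one telluric line)
lnwave = np.linspace(np.log(5000.0), np.log(5010.0), 2000)
star_template = -0.5 * np.exp(-0.5 * ((lnwave - np.log(5005.0)) / 1e-5) ** 2)
telluric_template = -0.2 * np.exp(-0.5 * ((lnwave - np.log(5007.0)) / 1e-5) ** 2)

model = predict_epoch(lnwave, lnwave, star_template, telluric_template, rv=1000.0)
# a chi-squared of (data - model), summed over epochs and pixels with inverse
# variances, is what gets optimized, with the two templates and the per-epoch
# rv values all free parameters
print(model.shape)
```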