a prior on the CMD isn't a prior on distance, exactly

Today my research time was spent writing in the paper by Lauren Anderson (Flatiron) about the TGAS color–magnitude diagram. I think of it as being a probabilistic inference in which we put a prior on stellar distances and then infer the distance. But that isn't correct! It is an inference in which we put a prior on the color–magnitude diagram, and then, given noisy color and (apparent) magnitude information, this turns into an (effective, implicit) prior on distance. This Duh! moment led to some changes to the method section!
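To make the "implicit prior on distance" point concrete, here is a toy numpy sketch (not our actual model; the two-component "CMD" and all numbers are invented): a prior density on absolute magnitude, pushed through the distance modulus (with its Jacobian), acts as a prior on distance, which then multiplies a noisy-parallax likelihood.

```python
import numpy as np

# Toy CMD prior on absolute magnitude M: a mixture of a red-clump
# spike and a broad main sequence.  All numbers are made up.
def ln_cmd_prior(M):
    clump = 0.3 * np.exp(-0.5 * ((M - 0.5) / 0.1) ** 2) / 0.1
    ms = 0.7 * np.exp(-0.5 * ((M - 4.5) / 1.0) ** 2) / 1.0
    return np.log(clump + ms)

# Data for one hypothetical star: apparent magnitude and a noisy
# TGAS-like parallax.
m_obs = 12.0                   # mag
plx_obs, plx_err = 0.9, 0.3    # mas

# On a distance grid, the CMD prior becomes an implicit distance
# prior through M = m_obs - mu(d), times the Jacobian |dM/dd|.
d = np.linspace(100.0, 5000.0, 4000)          # pc
mu = 5 * np.log10(d) - 5                      # distance modulus
ln_prior_d = ln_cmd_prior(m_obs - mu) + np.log(5 / (d * np.log(10)))
ln_like = -0.5 * ((plx_obs - 1000.0 / d) / plx_err) ** 2
lp = ln_prior_d + ln_like
post = np.exp(lp - lp.max())
post /= np.trapz(post, d)                     # posterior over distance
```

The effective distance prior is sharply non-trivial wherever the CMD has structure (a red clump, say), even though no explicit distance prior was ever written down.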


what's in an astronomical catalog?

The stars group meeting today wandered into dangerous territory, because it got me on my soap box! The points of discussion were: Are there biases in the Gaia TGAS parallaxes? and How could we use proper motions responsibly to constrain stellar parallaxes? Keith Hawkins (Columbia) is working a bit on the former, and I am thinking of writing something short with Boris Leistedt (NYU) on the latter.

The reason it got me on my soap-box is a huge set of issues about whether catalogs should deliver likelihood or posterior information. My view—and (I think) the view of the Gaia DPAC—is that the TGAS measurements and uncertainties are parameters of a parameterized model of the likelihood function. They are not parameters of a posterior, nor the output of any Bayesian inference. If they were outputs of a Bayesian inference, they could not be used in hierarchical models or other kinds of subsequent inferences without a factoring out of the Gaia-team prior.

This view (and this issue) has implications for what we are doing with our (Leistedt, Hawkins, Anderson) models of the color–magnitude diagram. If we output posterior information, we have to also output prior information for our stuff to be used by normals, down-stream. Even with such output, the results are hard to use correctly. We have various papers on this, but they are hard to read!

One comment is that, if the Gaia TGAS contains likelihood information, then the right way to consider its possible biases or systematic errors is to build a better model of the likelihood function, given their outputs. That is, the systematics should be cast as adjustments to the likelihood function, not posterior outputs, if at all possible.

Another comment is that negative parallaxes make sense for a likelihood function, but not (really) for a posterior pdf. Usually a sensible prior will rule out negative parallaxes! But a sensible likelihood function will permit them. The fact that the Gaia catalogs will have negative parallaxes is related to the fact that it is better to give likelihood information. This all has huge implications for people (like me, like Portillo at Harvard, like Lang at Toronto) who are thinking about making probabilistic catalogs. It's a big, subtle, and complex deal.
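A toy numpy illustration of that last point (hypothetical catalog numbers): read (plx, sigma) as parameters of a Gaussian likelihood, and a negative measured parallax is perfectly sensible; apply any reasonable downstream prior and the negative region vanishes.

```python
import numpy as np

# Catalog numbers (plx, sigma) read as parameters of a *likelihood*:
# L(true_plx) = N(plx_obs; true_plx, sigma).  A negative plx_obs is
# then fine: it is just a noisy datum scattered below zero.
plx_obs, sigma = -0.2, 0.4            # mas; hypothetical faint star

true_plx = np.linspace(-2.0, 4.0, 6001)
like = np.exp(-0.5 * ((plx_obs - true_plx) / sigma) ** 2)

# A downstream user supplies their *own* prior; here, constant space
# density out to ~5 kpc, giving p(plx) proportional to plx^-4 and
# vanishing for non-positive parallax.
prior = np.zeros_like(true_plx)
ok = true_plx > 0.2
prior[ok] = true_plx[ok] ** -4
post = like * prior
post /= np.trapz(post, true_plx)
```

The likelihood peaks at the (negative!) measured value; the posterior has no support there. If the catalog reported only the posterior, that prior would be baked in irreversibly.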


snow day

[Today was a NYC snow day, with schools and NYU closed, and Flatiron on a short day.] I made use of my incarceration at home writing in the nascent paper about the TGAS color–magnitude diagram with Lauren Anderson (Flatiron). And doing lots of other non-research things.


planning a paper sprint, completing a square

Lauren Anderson (Flatiron) and I are going to sprint this week on her paper on the noise-deconvolved color–magnitude diagram from the overlap of Gaia TGAS, 2MASS, and the Pan-STARRS 3-d dust map. We started the day by making a long to-do list for the week, one that could end in submission of the paper. My first job is to write down the data model for the data release we will do with the paper.

At lunch time I got distracted by my project to find a better metric than chi-squared to determine whether two noisily-observed objects (think: stellar spectra or detailed stellar abundance vectors) are identical or indistinguishable, statistically. The math involved completing a huge square (in linear-algebra space) twice. Yes, twice. And then the result is—in a common limit—exactly chi-squared! So my intuition is justified, and I know where it will under-perform.
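For the record, the chi-squared I have in mind is the difference vector weighted by the summed covariance of the two observations; a minimal numpy sketch with fake data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50

# Two noisy observations of (possibly) the same underlying vector,
# each with its own covariance (diagonal here for simplicity).
truth = np.sin(np.linspace(0, 3, n))
C1 = np.diag(np.full(n, 0.05 ** 2))
C2 = np.diag(np.full(n, 0.08 ** 2))
y1 = rng.multivariate_normal(truth, C1)
y2 = rng.multivariate_normal(truth, C2)

# Under the same-object hypothesis, d = y1 - y2 ~ N(0, C1 + C2), so
# this statistic is chi-squared distributed with n degrees of freedom.
d = y1 - y2
chi2 = d @ np.linalg.solve(C1 + C2, d)
```

When chi2 is consistent with n, the two objects are statistically indistinguishable at the stated uncertainties; the completed-square result says when this simple statistic is (and isn't) the optimal test.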


the Milky Way halo

At the NYU Astro Seminar, Ana Bonaca (Harvard) gave a great talk, about trying to understand the dynamics and origin of the Milky Way halo. She has a plausible argument that the higher-metallicity halo stars are the halo stars that formed in situ and migrated out, while the lower-metallicity stars were accreted. If this holds up, I think it will probably constrain a lot of things about the Galaxy's formation history and dark-matter distribution. She also talked about stream fitting to see the dark-matter component.

On that note, we started a repo for a paper on the information theory of cold stellar streams. We re-scoped the paper around information rather than the LMC and other peculiarities of the Local Group. Very late in the day I drafted a title and abstract. This is how I start most projects: I need to be able to write a title and abstract to know that we have sufficient scope for a paper.


The Cannon and APOGEE

I discussed further the Cramér-Rao-bound (or Fisher-matrix) computations on cold stellar streams being performed by Ana Bonaca (Harvard). We discussed how things change as we increase the number of parameters, and designed some possible figures for a possible paper.

I had a long phone call with Andy Casey (Monash) about The Cannon, which is being run inside APOGEE2 to deliver parameters in a supplemental table in data release 14. We discussed issues of flagging stars that are far from the training set. This might get strange in high dimensions.

In further APOGEE2 and The Cannon news, I dropped an email on the mailing lists about the radial-velocity measurements that Jason Cao (NYU) has been making for me and Adrian Price-Whelan (Princeton). His RV values look much better than the pipeline defaults, which is perhaps not surprising: The pipeline uses some cross-correlation templates, while Cao uses a very high-quality synthetic spectrum from The Cannon. This email led to some useful discussion about other work that has been done along these lines within the survey.
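For readers unfamiliar with the technique: this is not Cao's pipeline, but a generic toy of template cross-correlation on a log-wavelength grid (where a uniform pixel shift is a uniform Doppler velocity), with an invented three-line "spectrum":

```python
import numpy as np

rng = np.random.default_rng(1)
c_kms = 299792.458

# Log-lambda grid: one pixel of shift corresponds to a fixed velocity.
dlnlam = 1e-5                         # ~3 km/s per pixel
lnlam = np.arange(0, 0.01, dlnlam)

# Hypothetical high-quality template (three absorption "lines") and an
# observed spectrum: the template Doppler-shifted plus noise.
centers = np.array([0.002, 0.005, 0.008])
template = 1.0 - 0.5 * np.exp(
    -0.5 * ((lnlam[:, None] - centers) / 3e-5) ** 2).sum(axis=1)
shift_pix = 4                         # true shift in pixels
observed = np.roll(template, shift_pix) + rng.normal(0, 0.01, lnlam.size)

# Cross-correlate at integer-pixel lags; the peak lag gives the RV.
lags = np.arange(-20, 21)
t = template - template.mean()
o = observed - observed.mean()
cc = np.array([np.dot(o, np.roll(t, k)) for k in lags])
best = lags[np.argmax(cc)]
rv_kms = best * dlnlam * c_kms
```

The better the template matches the star (The Cannon's synthetic spectra versus a generic grid of templates), the sharper and less biased the correlation peak.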


does the Milky Way disk have spiral structure?

At stars group meeting, David Spergel (Flatiron) was tasked with convincing us (and Price-Whelan and I are skeptics!) that the Milky Way really does have spiral arms. His best evidence came from infrared emission in the Galactic disk plane, but he brought together a lot of relevant evidence, and I am closer to being convinced than ever before. As my loyal reader knows, I think we ought to be able to see the arms in any (good) 3-d dust map. So, what gives? That got Boris Leistedt (NYU), Keith Hawkins (Columbia), and me thinking about whether we can do this now, with things we have in-hand.

Also at group meeting, Semyeong Oh (Princeton) showed a large group-of-groups she has found by linking together co-moving pairs into connected components by friends-of-friends. It is rotating with the disk but at a strange angle. Is it an accreted satellite? That explanation is unlikely, but if it turns out to be true, OMG. She is off to get spectroscopy next week, though John Brewer (Yale) pointed out that he might have some of the stars already in his survey.


finding the dark matter with streams

Today was a cold-stream science day. Ana Bonaca (Harvard) computed derivatives today of stream properties with respect to a few gravitational-potential parameters, holding the present-day position and orientation of the stream fixed. This permits computation of the Cramér-Rao bound on any inference or estimate of those parameters. We sketched out some ideas about what a paper along these lines would look like. We can identify the most valuable streams, the streams most sensitive to particular potential parameters, the best combinations of streams to fit simultaneously, and the best new measurements to make of existing streams.
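The Fisher-matrix recipe itself is simple; here is a sketch with a stand-in "stream model" (entirely hypothetical, in place of Bonaca's fast generative model) and Gaussian observational uncertainties:

```python
import numpy as np

# Toy "stream model": predicts observable stream points as a function
# of two potential parameters theta.  Purely illustrative.
def stream_model(theta):
    t = np.linspace(0, 1, 30)
    return np.column_stack([t * theta[0],
                            np.sin(2 * np.pi * t) * theta[1]]).ravel()

theta0 = np.array([1.0, 0.4])
sigma = 0.05                          # per-point uncertainty
Cinv = np.eye(stream_model(theta0).size) / sigma ** 2

# Numerical (central-difference) derivatives of the model.
eps = 1e-6
J = np.column_stack([
    (stream_model(theta0 + eps * np.eye(2)[i]) -
     stream_model(theta0 - eps * np.eye(2)[i])) / (2 * eps)
    for i in range(2)])

# Fisher matrix and Cramér-Rao bound: no sampler needed.
F = J.T @ Cinv @ J
crb = np.linalg.inv(F)
param_err = np.sqrt(np.diag(crb))     # best achievable 1-sigma errors
```

Because only derivatives at the fiducial model are needed, ranking streams (or combinations of streams) by the resulting bounds is cheap.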

Separately from this, I had a phone conversation with Adrian Price-Whelan (Princeton) about the point of doing stream-fitting. It is clear (from Bonaca's work) that fitting streams in toy potentials is giving us way-under-estimated error bars. This means that we have to add a lot more potential flexibility to get more accurate results. We debated the value of things like basis-function expansions, given that these are still in the regime of toy (but highly parameterized toy) models. We are currently agnostic about whether stream fitting is really going to reveal the detailed properties of the Milky Way's dark-matter halo. That is, for example, the properties that might lead to changes in what we think is the dark-matter particle.


LMC effect on streams; dust corrections

Ana Bonaca (Harvard) showed up for a week of (cold) stellar streams inference. Our job is either to resurrect her project to fit multiple streams simultaneously, or else choose a smaller project to hack on quickly. One thing we have been discussing by email is the influence of the LMC (and SMC and M31 and so on) on the streams. Will it be degenerate with halo quadrupole or other parameters? We discussed how we might answer this question without doing full probabilistic inferences: In principle we only need to take some derivatives. This is possible, because Bonaca's generative stream model is fast. We discussed the scope of a minimum-scope paper that looks at these things, and Bonaca started computing derivatives.

Lauren Anderson (Flatiron) and I looked at her dust estimates for the stars in Gaia DR1 TGAS. She is building a model of the color–magnitude diagram with an iterative dust optimization: At zeroth iteration, the distances are (generally) over-estimated; we dust-correct, fit the CMD, and re-estimate distances. Then we re-estimate dust corrections, and do it again. The dust corrections oscillate between over- and under-corrections as the distances oscillate between over- and under-estimates. But it does seem to converge!
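A toy version of that loop shows the same damped oscillation (the extinction law and all numbers here are invented):

```python
import numpy as np

# Hypothetical extinction law, rising with distance.
def extinction(d_pc):
    return 1.0 * (1.0 - np.exp(-d_pc / 800.0))    # mag

d_true = 1000.0                                    # pc
mu_obs = 5 * np.log10(d_true) - 5 + extinction(d_true)

# Iterate: estimate the distance with the current dust guess, then
# re-estimate the dust from that distance.
d_hist, A = [], 0.0                                # start with no dust
for _ in range(20):
    d_k = 10 ** ((mu_obs - A + 5) / 5)
    d_hist.append(d_k)
    A = extinction(d_k)
```

The first estimate over-shoots (no dust correction yet), the second under-shoots, and the oscillation damps toward the truth as long as the extinction grows slowly enough with distance.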


similarities of stars; getting started in data science

I met with Keith Hawkins (Columbia) in the morning, to discuss how to find stellar pairs in spectroscopy. I fundamentally advocated chi-squared difference, but with some modifications, like masking things we don't care about, removing trends on length-scales (think: continuum) that we don't care about, and so on. I noted that there are things to do that are somewhat better than chi-squared difference, that relate to either hypothesis testing or else parameter estimation. I promised him a note about this, and I also owe the same to Melissa Ness (MPIA), who has similar issues but in chemical-abundance (rather than purely spectral) space. Late in the day I worked on this problem over a beer. I think there is a very nice solution, but it involves (as so many things like this do) a non-trivial completion of a square.
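The "somewhat better than chi-squared" option I have in mind is a marginalized likelihood-ratio test: put a Gaussian prior on the shared latent spectrum, complete the square, and compare the same-star and different-star hypotheses. In the broad-prior limit the statistic becomes monotone in chi-squared. A sketch, assuming Gaussian everything and made-up covariances:

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(3)
n = 20

C1 = np.diag(np.full(n, 0.05 ** 2))     # per-star noise covariances
C2 = np.diag(np.full(n, 0.05 ** 2))
Lam = 1.0 * np.eye(n)                   # prior covariance on the latent spectrum

truth = rng.normal(0.0, 1.0, n)         # one latent "spectrum"
y1 = rng.multivariate_normal(truth, C1) # two observations of the same star
y2 = rng.multivariate_normal(truth, C2)

# Completing the square and marginalizing the latents gives, for the
# difference d = y1 - y2:
#   H_same: d ~ N(0, C1 + C2)             (shared latent cancels)
#   H_diff: d ~ N(0, C1 + C2 + 2 Lam)     (independent latents)
d = y1 - y2
zero = np.zeros(n)
ln_same = multivariate_normal.logpdf(d, mean=zero, cov=C1 + C2)
ln_diff = multivariate_normal.logpdf(d, mean=zero, cov=C1 + C2 + 2 * Lam)
log_odds = ln_same - ln_diff            # > 0 favors "same star"
```

As Lam grows broad, ln_diff becomes nearly flat in d and thresholding log_odds reduces to thresholding the ordinary chi-squared; the interesting regime is when the prior is informative.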

In the afternoon, I met with my undergrad-and-masters research group. Everyone is learning how to install software, and how to plot spectra, light curves, and rectangular data. We talked about projects with the Boyajian Star, and also with exoplanets in 1:1 resonances (!).


D. E. Shaw

The research highlight of my day was a trip to D. E. Shaw, to give an academic seminar (of all things) on extra-solar planet research. I was told that the audience would be very mathematically able and familiar with physics and engineering, and it was! I talked about the stationary and non-stationary Gaussian Processes we use to model stellar (stationary) and spacecraft (non-stationary) variability, how we detect exoplanet signals by brute-force search, and how we build and evaluate hierarchical models to learn the full population of extra-solar planets, given noisy observations. The audience was interactive and the questions were on-point. Of course many of the things we do in astrophysics are not that different—from a data-analysis perspective—from things the hedge funds do in finance. I spent my time at D. E. Shaw trying to understand the atmosphere of the firm. It seems very academic and research-based, and (unlike at many banks), the quantitative researchers run the show.


fitting stellar spectra and deblending galaxy images

Today was group meetings day. In the Stars meeting, John Brewer (Yale) told us about fitting stellar spectra with temperature, gravity, and composition, epoch-by-epoch for a multi-epoch radial-velocity survey. He is trying to understand how consistent his fitting is, what degeneracies there are, and whether there are any changes in temperature or gravity that co-vary with radial-velocity jitter. No results yet, but we had suggestions for tests to do. His presentation reinforced my idea (with Megan Bedell) to beat spectral variations against asteroseismological oscillation phase.

In the Cosmology meeting, Peter Melchior (Princeton) told us about attempts to turn de-blending into a faster and better method that is appropriate for HSC and LSST-generation surveys. He blew us away with a tiny piece of deep HSC imaging, and then described a method for deblending that looks like non-negative matrix factorization, plus convex regularizations. He has done his research on the mathematics around convex regularizations, reminding me that we should do a more general workshop on these techniques. We discussed many things in the context of Melchior's project; one interesting point is that the deblending problem doesn't necessarily require good models of galaxies (Dustin Lang and I always think of it as a modeling problem); it just needs to deliver a good set of weights for dividing up photons.
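A cartoon of the photon-division idea (a bare-bones stand-in, not Melchior's constrained solver): fit non-negative amplitudes for fixed component shapes by multiplicative updates, then split each pixel's photons by the components' fractional contributions.

```python
import numpy as np

rng = np.random.default_rng(5)

# Tiny 1-d "image": two overlapping Gaussian blobs plus noise.
x = np.arange(40.0)
def comp(c, w):
    return np.exp(-0.5 * ((x - c) / w) ** 2)
image = 2.0 * comp(15, 3) + 1.0 * comp(22, 3) + rng.normal(0, 0.01, x.size)
image = np.clip(image, 1e-9, None)            # keep it non-negative

# NMF-style fit, image ~ a1*S1 + a2*S2, with fixed non-negative
# shapes and amplitudes updated multiplicatively (Lee-Seung, KL).
S = np.vstack([comp(15, 3), comp(22, 3)])     # (2, npix) templates
a = np.ones(2)
for _ in range(200):
    model = a @ S
    a *= (S @ (image / model)) / S.sum(axis=1)

# Per-pixel weights for dividing photons between the two sources.
model = a @ S
weights = (a[:, None] * S) / model            # columns sum to 1
fluxes = (weights * image).sum(axis=1)        # deblended fluxes
```

The point of the weights formulation: even if the component shapes are imperfect galaxy models, the deblender only has to apportion each pixel's photons sensibly, which is a weaker requirement than fitting the galaxies well.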


#DtU17, day two

Today I dropped in on Detecting the Unexpected in Baltimore, to provide a last-minute talk replacement. In the question period of my talk, Tom Loredo (Cornell) got us talking about precision vs accuracy. My position is a hard-line one: We never have ground truth about things like chemical abundances of stars; every chemical abundance is a latent variable; there is no external information we can use to determine whether our abundance measurements are really accurate. My view is that a model is accurate only inasmuch as it makes correct predictions about qualitatively different data. So we are left with only precision for many of our questions of greatest interest. More on this in some longer form, later.

Highlights (for me; very subjective) of the day's talks were stories about citizen science. Chris Lintott (Oxford) told us about tremendous lessons learned from years of Zooniverse, and the non-trivial connections between how you structure a project and how engaged users will become. He also talked about a long-term vision for partnering machine learning and human actors. He answered very thoughtfully a question about the ethical aspects of crowd-sourcing. Brooke Simmons (UCSD) showed us how easy it is to set up a crowd-sourcing project on Zooniverse; they have built an amazingly simple interface and toolkit. Steven Silverberg (Oklahoma) told us about Disk Detective and Julie Banfield (ANU) told us about Radio Galaxy Zoo. They both have amazing super-users, who have contributed to published papers. In the latter project, they have found (somewhat serendipitously) the largest radio galaxy yet known! One take-away from my perspective is that essentially all of the discoveries of the Unexpected have happened in the forums—in the deep social interaction parts of the citizen-science sites.


galaxy masses; text as data

After a morning working on terminology and notation for the color–magnitude diagram model paper with Lauren Anderson (Flatiron), I went to two seminars. The first was Jeremy Tinker (NYU) talking about the relationship between galaxy stellar mass and dark-matter halo mass as revealed by fitting of number-count and clustering data in large-scale structure simulations. He finds that only models with extremely small scatter (less—maybe far less—than 0.18 dex) are consistent with the data, and that the result is borne out by follow-ups with galaxy–galaxy lensing and other tests. This is very hard to understand within any realistic model for how galaxies form, and constitutes a new puzzle for standard cosmology plus gastrophysics.

In the afternoon there was a very wide-ranging talk by Mark Dredze (JHU) on data-science methods for social science, intervention in health issues, and language encoding. He is interested in taking topic models and either deepening them (to make better features) or else enriching their probabilistic structure. It is all very promising, though these subjects are—despite their extreme mathematical sophistication—in their infancy.


one paragraph per day

[I have been on vacation for a week.]

All I have done in the last week is (fail to) keep up with email (apologies y'all) and write one paragraph per day in the nascent paper with Lauren Anderson (Flatiron) about our data-driven model of the color–magnitude diagram. The challenge is to figure out what to emphasize: the fact that we de-noise the parallaxes, or the fact that we can extend geometric parallaxes to more distant stars, or the fact that we don't need stellar models?