Set up framework for PolCal QA calculations

Merged in save_polcal_QA (pull request #63)

7abf69c · 2022-04-21

Description

This PR stems from work done to get Dave the data he needed to analyze polcals on a local machine. Eventually he will tell us the math to use to produce numbers that can be put in a QA report. This PR is an attempt to build some of the framework that we can use in the future to construct these Polcal QA metrics. It has three parts, in order of importance:

  1. A new function, record_polcal_quality_metrics, that is called after each beam has finished its calculation. This is where I imagine we would place the code (or calls to the code) that does the analysis we get from Dave. Right now this function produces intermediate products for Dave to look at; eventually we'll carry the QA analysis all the way through to finished metrics (see the first sketch after this list).

  2. An example of usage of the new *-pac function prepare_model_objects. This is a hopefully useful tool that allows us to quickly set up objects as if a PAC fit is about to be done, and it saves a lot of copy/paste from *-pac (see the second sketch after this list).

  3. Capturing a new output from *-pac's run_core: a numpy array of lmfit fitting result objects. These aren't used now, but they contain a lot of information on the fit itself (chi-squared, number of iterations, error codes, etc.), so they might be useful later.
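As a rough illustration of how parts 1 and 3 could eventually fit together, here is a minimal sketch of a per-beam quality hook that digests the captured lmfit results. The function body, metric names, and returned dictionary are assumptions made for this sketch, not the actual implementation; only the lmfit result attributes (success, redchi, nfev) are real lmfit API.

```python
# Minimal sketch, NOT the actual record_polcal_quality_metrics implementation.
# It assumes run_core hands back a numpy array of lmfit MinimizerResult objects
# for each beam; the metric names and the returned dict are placeholders.
import numpy as np


def record_polcal_quality_metrics(beam: int, fit_results: np.ndarray) -> dict:
    """Summarize fit health for one beam from its array of lmfit fit results."""
    results = fit_results.ravel()
    return {
        "beam": beam,
        "num_fits": int(results.size),
        "num_failed_fits": int(sum(not r.success for r in results)),
        "median_reduced_chisq": float(np.median([r.redchi for r in results])),
        "median_num_evaluations": float(np.median([r.nfev for r in results])),
    }


# Hypothetical usage after beam 1 finishes its fits:
# metrics = record_polcal_quality_metrics(beam=1, fit_results=fit_result_array)
```

Keeping the hook's output as plain numbers should make it easy to drop Dave's eventual math in later without reshaping the rest of the pipeline.

For part 2, the real signature of prepare_model_objects lives in *-pac and isn't shown in this PR, so rather than guess at it, the self-contained stand-in below only illustrates the pattern it enables: one call that returns objects set up as if a PAC fit were about to run. Every name here other than lmfit's is a placeholder, not the real *-pac API.

```python
# Stand-in for the pattern prepare_model_objects enables; all names except
# lmfit's are placeholders invented for this sketch.
from dataclasses import dataclass

import lmfit
import numpy as np


@dataclass
class FakePolcalInputs:
    """Placeholder for whatever per-beam inputs a PAC fit actually consumes."""

    observed_counts: np.ndarray


def prepare_fit_objects(inputs: FakePolcalInputs) -> tuple[lmfit.Parameters, np.ndarray]:
    """Return objects set up as if a PAC fit were about to run.

    This mirrors the convenience the real helper provides: callers get fully
    initialized objects from one call instead of copying setup code.
    """
    params = lmfit.Parameters()
    params.add("polarizer_transmission", value=1.0, min=0.0)  # placeholder parameter
    params.add("retarder_retardance", value=90.0, min=0.0, max=180.0)  # placeholder parameter
    return params, inputs.observed_counts


# Usage: set up once, then hand the objects to whatever analysis needs them.
fit_params, data = prepare_fit_objects(FakePolcalInputs(observed_counts=np.ones(10)))
```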
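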

To say it again: none of this is really required right now, but I had it on a branch for Dave and it's probably best to merge it in sooner rather than later. I hope this groundwork will prove useful down the road.
